Trace Finite Element Methods for Surface Vector-Laplace Equations
THOMAS JANKUHN∗ AND ARNOLD REUSKEN†

Abstract.
In this paper we analyze a class of trace finite element methods (TraceFEM) for the discretization of vector-Laplace equations. A key issue in the finite element discretization of such problems is the treatment of the constraint that the unknown vector field must be tangential to the surface ("tangent condition"). We study three different natural techniques for treating the tangent condition, namely a consistent penalty method, a simpler inconsistent penalty method and a Lagrange multiplier method. A main goal of the paper is to present an analysis that reveals important properties of these three different techniques for treating the tangent constraint. A detailed error analysis is presented that takes the approximation of both the geometry of the surface and the solution of the partial differential equation into account. Error bounds in the energy norm are derived that show how the discretization error depends on relevant parameters such as the degree of the polynomials used for the approximation of the solution, the degree of the polynomials used for the approximation of the level set function that characterizes the surface, the penalty parameter and the degree of the polynomials used for the approximation of the Lagrange multiplier.
Key words. vector-Laplace, trace finite element method.
1. Introduction.
In recent years there has been a strongly growing interest in the field of modeling and numerical simulation of surface fluids, cf. the papers [2, 23, 12, 16, 17, 21, 11], in which Navier-Stokes type PDEs for (evolving) surfaces with fluidic properties are proposed. Concerning error analysis of numerical methods for surface (Navier-)Stokes equations there are only very few results available. In [9] an error analysis for a finite element discretization method for surface Darcy equations is presented. First error analysis results for a finite element discretization method of surface Stokes equations are given in [18]. In that paper a P1-P1 finite element method in the spirit of the surface finite element method (SFEM) for scalar surface PDEs, introduced by Dziuk and Elliott [4], is studied. The tangent condition is weakly enforced by a penalization term. Optimal discretization error estimates are derived that take the approximation of both the geometry of the surface and the solution of the partial differential equation into account. In [6] a different finite element technique, namely the trace finite element method (TraceFEM), is studied. This TraceFEM has been thoroughly analyzed for scalar surface PDEs, cf. the overview paper [19]. In order to satisfy the tangent constraint for vector-Laplace problems, a Lagrange multiplier approach is proposed and analyzed in [6]. Optimal error estimates are derived, which, however, do not take the errors due to the approximation of the geometry of the surface into account.

In this paper we consider the same vector-Laplace problem as in [6], which is similar to the one in [10].

∗ Institut für Geometrie und Praktische Mathematik, RWTH-Aachen University, D-52056 Aachen, Germany ([email protected])
† Institut für Geometrie und Praktische Mathematik, RWTH-Aachen University, D-52056 Aachen, Germany ([email protected])
We study the TraceFEM and three different natural techniques for treating the tangent condition:
• A consistent penalty method, which is the same as the one analyzed (for the SFEM) in [10].
• An inconsistent penalty method as introduced in [11]. This method is simpler than the above-mentioned consistent one, because an approximation of the Weingarten map is avoided.
• A Lagrange multiplier method as in [6].
For higher order approximation we use the parametric version of TraceFEM, which, for scalar surface PDEs, is analyzed in [5].
The main goal of the paper is to present an analysis that reveals important properties of these three different techniques for treating the tangent constraint. The topics studied in this paper relate to the ones treated in [10, 6] as follows. Different from [10], we study the TraceFEM (instead of SFEM) and we analyze and compare three different techniques for handling the tangent condition. In [6] only the Lagrange multiplier method is treated and errors due to geometry approximation are neglected; in this paper we take geometry errors into account and besides the Lagrange multiplier method we also analyze two penalty methods.

Since we use TraceFEM, it is necessary to include some stabilization to damp instabilities caused by "small cuts". For this we use the normal derivative volume stabilization, known from the literature [5]. We derive error estimates that take the approximation of both the geometry of the surface and the solution of the partial differential equation into account. The main results of this paper are the discretization error bounds, in the energy norm, given in section 5.6. These results reveal how the errors depend on the relevant parameters k, k_g, k_p, η, k_l. Here k denotes the degree of the polynomials used for the approximation of the solution, k_g the degree of the polynomials used for the approximation of the level set function that characterizes the surface, k_p ≥ k_g the order of accuracy of the normal vector approximation used in the penalization term (in both penalty methods), η the penalty parameter and k_l the degree of the polynomials used for the approximation of the Lagrange multiplier (in the third method). These error bounds lead to several interesting conclusions. For example, for both penalty methods it is necessary to take k_p ≥ k + 1 in order to obtain optimal error bounds. For the SFEM and the consistent penalty method such a result is also derived in [10]. For the consistent penalty method one obtains an optimal error bound of order ∼ h^k if one takes k_p = k + 1, k_g = k (i.e. isoparametric spaces) and η ∼ h^{-2}. Such an optimal result does not hold for the (simpler) inconsistent penalty method. Optimal balancing of terms leads to k_p = k + 1, k_g = k, η ∼ h^{-(k+1)} and an error bound of order ∼ h^{(k+1)/2} for the inconsistent penalty method. This bound is optimal (only) for the important case of linear finite elements, i.e., k = 1. Hence, in that case this simpler method (which avoids approximation of the Weingarten map) may be more attractive than the consistent penalty method. For the Lagrange multiplier method we do not obtain optimal error bounds for the isoparametric case k = k_g. For k_g = k + 1 we obtain optimal bounds both for k_l = k and k_l = k − 1. In this paper we restrict ourselves to error bounds in the energy norm and do not present an L²-error analysis. This will be addressed in future work. Based on the results obtained for the vector-Laplace problem we plan to analyze these methods applied to surface Stokes equations. This is a topic of current research.

The remainder of the paper is organized as follows. In section 2 we introduce the variational formulation of the surface vector-Laplace problem that we consider, and give three related formulations (two of penalty type and one based on a Lagrange multiplier) in which the tangent constraint is treated differently. In section 3 we collect properties of a parametric finite element space known from the literature. Based on this space and the three different variational formulations we define corresponding TraceFEM discrete problems in section 4. An error analysis of these methods is presented in section 5. The structure of this analysis is along the usual lines. We first derive discrete stability results and based on these formulate Strang lemmas, in which the energy norm of the discretization error is bounded by a sum of an approximation error and a consistency error.
Bounds for the approximation error are easy to derive, based on results known from the literature. For proving satisfactory bounds for the consistency term we need a lengthy and tedious analysis. Finally, in section 6 we present results of numerical experiments.
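Rates such as ∼ h^k are typically verified in numerical experiments by computing experimental orders of convergence (EOC) from errors on a sequence of refined meshes. A minimal sketch of this standard postprocessing step (the function name and the synthetic data are illustrative, not taken from the paper's experiments):

```python
import math

def eoc(errors, h_values):
    """Experimental order of convergence:
    p_i = log(e_i / e_{i+1}) / log(h_i / h_{i+1})."""
    return [math.log(errors[i] / errors[i + 1]) / math.log(h_values[i] / h_values[i + 1])
            for i in range(len(errors) - 1)]

# Synthetic errors behaving like C * h^2 (illustrative only).
hs = [0.1, 0.05, 0.025]
errs = [3.0 * h**2 for h in hs]
rates = eoc(errs, hs)   # each rate should be close to 2
```

For errors that genuinely behave like C h^p, each computed rate approaches p as h decreases.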
2. Continuous problem.
We assume that Ω ⊂ R³ is a polygonal domain which contains a connected compact smooth hypersurface Γ without boundary. A tubular neighborhood of Γ is denoted by U_δ := { x ∈ R³ | |d(x)| < δ }, with δ > 0 and d the signed distance function to Γ, which we take negative in the interior of Γ. The surface Γ is the zero level of a smooth level set function φ : U_δ → R, i.e. Γ = { x ∈ Ω | φ(x) = 0 }. This level set function is not necessarily close to a distance function but has the usual properties of a level set function: ‖∇φ(x)‖ ∼ 1, ‖∇²φ(x)‖ ≤ c for all x ∈ U_δ. We assume that the level set function φ is sufficiently smooth. On U_δ we define

n(x) = ∇d(x), the outward pointing unit normal on Γ,
H(x) = ∇²d(x), the Weingarten map,
P = P(x) := I − n(x)n(x)^T, the orthogonal projection onto the tangential plane,
p(x) = x − d(x)n(x), the closest point projection.

We assume that δ > 0 is small enough that the decomposition x = p(x) + d(x)n(x) is unique for all x ∈ U_δ. The constant normal extension for vector functions v : Γ → R³ is defined as v^e(x) := v(p(x)), x ∈ U_δ. The extension for scalar functions is defined similarly. Note that on Γ we have ∇w^e = ∇(w ∘ p) = ∇w^e P, with ∇w := (∇w₁, ∇w₂, ∇w₃)^T ∈ R^{3×3} for smooth vector functions w : U_δ → R³. For a scalar function g : U_δ → R and a vector function v : U_δ → R³ the covariant derivatives are defined by

∇_Γ g(x) = P(x)∇g(x), x ∈ Γ,
∇_Γ v(x) = P(x)∇v(x)P(x), x ∈ Γ.

On Γ we consider the surface stress tensor (see [7]) given by

E_s(u) := ½ (∇_Γ u + ∇_Γ^T u), with ∇_Γ^T u := (∇_Γ u)^T.

To simplify the notation we write E = E_s. The surface divergence operators for vector-valued functions u : Γ → R³ and tensor-valued functions A : Γ → R^{3×3} are defined as

div_Γ u := tr(∇_Γ u), div_Γ A := ( div_Γ(e₁^T A), div_Γ(e₂^T A), div_Γ(e₃^T A) )^T,

with e_i the i-th basis vector in R³.
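These geometric quantities are concrete and easy to compute for a simple example. For the unit sphere, φ(x) = ‖x‖ − 1 coincides with the signed distance d, so n = ∇d = x/‖x‖, P = I − nn^T, and the Weingarten map H = ∇²d equals P on Γ (all principal curvatures equal 1). A finite-difference sketch, purely illustrative and not part of the paper's method:

```python
import numpy as np

def d(x):
    """Signed distance to the unit sphere."""
    return np.linalg.norm(x) - 1.0

def grad(f, x, h=1e-5):
    """Central finite-difference gradient of a scalar function f at x."""
    g = np.zeros(3)
    for i in range(3):
        e = np.zeros(3); e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

def geometry(x):
    n = grad(d, x)                              # n = grad d, the unit normal
    P = np.eye(3) - np.outer(n, n)              # tangential projection
    # H = Hessian of d; row i is the gradient of the i-th component of grad d
    H = np.array([grad(lambda y: grad(d, y)[i], x) for i in range(3)])
    return n, P, H

x = np.array([1.0, 0.0, 0.0])                   # a point on Gamma
n, P, H = geometry(x)
```

At x = (1, 0, 0) one recovers n ≈ (1, 0, 0) and, up to finite-difference error, H ≈ P, as predicted for the unit sphere.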
For a given force vector f ∈ L²(Γ)³ with f · n = 0, we consider the following elliptic vector-Laplace problem: determine u : Γ → R³ with u · n = 0 and

−P div_Γ(E(u)) + u = f on Γ. (2.1)

We added the zero order term on the left-hand side to avoid technical details related to the kernel of the strain tensor E (the so-called Killing vector fields). The surface Sobolev space of weakly differentiable vector valued functions is denoted by V := H¹(Γ)³, with

‖u‖²_{H¹(Γ)} := ∫_Γ ‖u(s)‖² + ‖∇u^e(s)‖² ds. (2.2)

Note that ‖∇u^e‖ = ‖(∇u^e)^T‖ and on Γ we have

(∇u^e)^T = P (∇u₁^e, ∇u₂^e, ∇u₃^e) = Σ_{i=1}^{3} ∇_Γ u_i^e e_i^T. (2.3)

Hence, the norm in (2.2) is a natural extension to vector valued functions of the usual scalar H¹(Γ)-norm. The corresponding space of tangential vector fields is denoted by

V_T := { u ∈ V | u · n = 0 }.

A vector u ∈ V can be decomposed into a tangential and a normal part. We use the notation u = Pu + (u · n)n = u_T + u_N n. For u, v ∈ V we introduce the bilinear form

a(u, v) := ∫_Γ E(u) : E(v) ds + ∫_Γ u · v ds.

For a given f ∈ L²(Γ)³ with f · n = 0 we consider the following weak formulation of (2.1): determine u = u_T ∈ V_T such that

a(u_T, v_T) = (f, v_T)_{L²(Γ)} for all v_T ∈ V_T. (C)

The bilinear form a(·, ·) is continuous on V_T. The ellipticity of a(·, ·) on V_T follows from the following surface Korn inequality, which is derived in [11]. Lemma 2.1.
Assume Γ is C² smooth. There exists a constant c_K > 0 such that

‖u‖²_{L²(Γ)} + ‖E(u)‖²_{L²(Γ)} ≥ c_K ‖u‖²_{H¹(Γ)} for all u ∈ V_T.

Hence, the weak formulation (C) is a well-posed problem. The unique solution is denoted by u* = u*_T.

The weak formulation (C) is not very suitable for a finite element discretization, because we would need vector finite element functions that are tangential to Γ. Obvious alternatives are obtained by allowing general (not necessarily tangential) vector functions u and treating the constraint u · n = 0 by either a penalty approach or a Lagrange multiplier. For vector-Laplace problems these were considered in the recent papers [10, 6]. Below we introduce two different penalty formulations and a Lagrange multiplier formulation. These formulations are the basis for (higher order) Galerkin finite element methods that are defined in section 4. The remainder of the paper is then devoted to an error analysis of these methods. Define

V* := { u ∈ L²(Γ)³ | u_T ∈ V_T, u_N ∈ L²(Γ) }, ‖u‖²_{V*} := ‖u_T‖²_{H¹(Γ)} + ‖u_N‖²_{L²(Γ)}.

Using the identity (for u ∈ V)

E(u) = E(u_T) + u_N H (2.4)

we introduce, with some abuse of notation, the bilinear form

a(u, v) := ∫_Γ (E(u_T) + u_N H) : (E(v_T) + v_N H) ds + ∫_Γ u · v ds, u, v ∈ V*. (2.5)

This bilinear form is well-defined and continuous on V*. We also define the penalty bilinear form

k(u, v) := η ∫_Γ (u · n)(v · n) ds,

with η > 0 a penalty parameter. The first penalty formulation reads: for a given f ∈ L²(Γ)³ with f · n = 0 determine u ∈ V* such that

a(u, v) + k(u, v) = (f, v)_{L²(Γ)} for all v ∈ V*. (P1)

One can easily check that for η sufficiently large we have an ellipticity property: there exists a constant c₀ > 0 such that

a(u, u) + k(u, u) ≥ c₀ ‖u‖²_{V*} for all u ∈ V*. (2.6)

Furthermore, a(·, ·) + k(·, ·) is continuous on V*. Hence, for η sufficiently large the problem (P1) is well-posed. The formulation, however, is inconsistent. Lemma 2.2.
Take η sufficiently large such that (2.6) holds. For the unique solution u of (P1) the following holds:

‖u_T − u*_T‖_{H¹(Γ)} + ‖u_N‖_{L²(Γ)} ≤ c η^{−1} ‖f‖_{L²(Γ)}. (2.7)

Proof. There exists a constant c̃ > 0, independent of η, such that ‖u‖_{V*} ≤ c̃ ‖f‖_{L²(Γ)}. Testing problem (P1) with v = u_N n we obtain a(u, u_N n) + η‖u_N‖²_{L²(Γ)} = 0. Using the Cauchy-Schwarz inequality we get

η‖u_N‖²_{L²(Γ)} ≤ C ‖u‖_{V*} ‖u_N‖_{L²(Γ)} ≤ C̃ ‖f‖_{L²(Γ)} ‖u_N‖_{L²(Γ)},

i.e.,

‖u_N‖_{L²(Γ)} ≤ C̃ η^{−1} ‖f‖_{L²(Γ)}. (2.8)

Testing problem (P1) and problem (C) with v_T = u_T − u*_T results in a(u*_T, v_T) − a(u, v_T) = 0, and thus a(v_T, v_T) = −a(u_N n, v_T). Using Korn's inequality (Lemma 2.1) and continuity of a(·, ·) we get

‖u_T − u*_T‖²_{H¹(Γ)} ≲ ‖u_N‖_{L²(Γ)} ‖u_T − u*_T‖_{H¹(Γ)}.

Combining this with (2.8) proves the result (2.7).

To obtain a consistent variant of this formulation we introduce the bilinear form a_T(·, ·) in which only the tangential components of the arguments play a role:

a_T(u, v) := a(Pu, Pv) = a(u_T, v_T). (2.9)

The corresponding penalty formulation is: for a given f ∈ L²(Γ)³ with f · n = 0 determine u ∈ V* such that

a_T(u, v) + k(u, v) = (f, v)_{L²(Γ)} for all v ∈ V*. (P2)

This formulation is indeed consistent: Lemma 2.3.
Problem (P2) is well-posed. For the unique solution ũ = ũ_T ∈ V* of this problem we have ũ_T = u*_T.

Proof. Define A(u, v) = a_T(u, v) + k(u, v). We have |A(u, v)| ≤ c ‖u‖_{V*} ‖v‖_{V*} for all u, v ∈ V*, and using Lemma 2.1 it follows that there is a constant c > 0 such that

‖u‖²_{V*} = ‖u_T‖²_{H¹(Γ)} + ‖u_N‖²_{L²(Γ)} ≤ c A(u, u) for all u ∈ V*.

Therefore problem (P2) is well-posed. For the unique solution u*_T ∈ V_T of problem (C) we have

A(u*_T, v) = a_T(u*_T, v) + k(u*_T, v) = a(u*_T, v_T) = (f, v)_{L²(Γ)} for all v ∈ V*.

Hence, u*_T solves problem (P2).

The third formulation that we consider uses a Lagrange multiplier to ensure that the solution is tangential to Γ. We use the bilinear form a(·, ·) as in (2.5) and b(u, µ) := (u · n, µ)_{L²(Γ)}, u ∈ V*, µ ∈ L²(Γ). For a given g ∈ L²(Γ)³, which is not necessarily tangential, we introduce the following saddle point problem: determine (u, λ) ∈ V* × L²(Γ) such that

a(u, v) + b(v, λ) = (g, v)_{L²(Γ)} for all v ∈ V*,
b(u, µ) = 0 for all µ ∈ L²(Γ). (L)

Well-posedness of this saddle point problem is derived in the following theorem (see [6]). Theorem 2.4.
The problem (L) is well-posed. Its unique solution (û, λ) ∈ V* × L²(Γ) has the following properties:

1. û · n = 0, (2.10)
2. û_T = u*_T, where u*_T is the unique solution of (C) with f := g_T = Pg, (2.11)
3. λ = g_N − tr(E(û_T)H), for g = g_T + g_N n. (2.12)

Summarizing, for the given vector-Laplace problem (C) we have two alternative consistent formulations, namely (P2) (penalty approach) and (L) (Lagrange multiplier), and one inconsistent formulation (P1) (penalty approach). In the following sections we present a detailed analysis of finite element methods based on these different formulations.
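The splitting u = u_T + u_N n used throughout section 2 is a pointwise orthogonal decomposition. At a single surface point it can be checked directly; the normal and the vector below are chosen purely for illustration:

```python
import numpy as np

n = np.array([0.0, 0.0, 1.0])       # unit normal at a point of Gamma (illustrative)
P = np.eye(3) - np.outer(n, n)      # tangential projection P = I - n n^T

u = np.array([1.0, 2.0, 3.0])       # an arbitrary vector at that point
u_T = P @ u                         # tangential part u_T = P u
u_N = u @ n                         # normal coordinate u_N = u . n

# The decomposition u = u_T + u_N n is exact, u_T is tangential (u_T . n = 0),
# and P is a projection (P u_T = u_T), which underlies a_T(u, v) = a(Pu, Pv).
recomposed = u_T + u_N * n
```

This is the pointwise mechanism behind the consistency of (P2) and (L): tangential test functions only ever see u_T.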
3. Parametric finite element space.
For the discretization of the different variational problems (P1), (P2) and (L) we use a parametric trace finite element approach as in [5]. In this section we define the finite element space used in this method and summarize certain properties, known from the literature, that we need in the error analysis of the finite element methods.

Let {T_h}_{h>0} be a family of shape regular tetrahedral triangulations of Ω. For simplicity, in the analysis of the method, we assume {T_h}_{h>0} to be quasi-uniform. By V^k_h we denote the standard finite element space of continuous piecewise polynomials of degree k. The nodal interpolation operator in V^k_h is denoted by I^k. As input for the parametric mapping we need an approximation of φ. We consider geometry approximations whose order of approximation may differ from the order of the polynomials used in the finite element space. In other words, the spaces introduced below are not necessarily isoparametric. Let k_g be the geometry approximation order, i.e., the construction of the geometry approximation will be based on a level set function approximation φ_h ∈ V^{k_g}_h that satisfies the error estimate

max_{T ∈ T_h} |φ_h − φ|_{W^{l,∞}(T ∩ U_δ)} ≲ h^{k_g+1−l}, 0 ≤ l ≤ k_g + 1.

Here, |·|_{W^{l,∞}(T ∩ U_δ)} denotes the usual semi-norm on the Sobolev space W^{l,∞}(T ∩ U_δ) and the constant used in ≲ depends on φ but is independent of h. The zero level set of the finite element function φ_h implicitly characterizes an approximation of the interface, which, however, is hard to compute for k_g ≥ 2. With the piecewise linear nodal interpolation of φ_h, which is denoted by φ̂_h = I¹ φ_h, we define the low order geometry approximation:

Γ^lin := { x ∈ Ω | φ̂_h(x) = 0 }.

The tetrahedra T ∈ T_h that have a nonzero intersection with Γ^lin are collected in the set denoted by T^Γ_h. The domain formed by all tetrahedra in T^Γ_h is denoted by Ω^Γ_h := { x ∈ T | T ∈ T^Γ_h }. Let Θ^{k_g}_h ∈ (V^{k_g}_h)³, defined on Ω^Γ_h, be the mesh transformation of order k_g as defined in [5], cf. Remark 3.1. Remark 3.1.
We outline the key idea of the mesh transformation Θ^{k_g}_h. For a detailed description we refer to [5], [14] and [15]. There exists a unique function d̃ : Ω^Γ_h → R defined as follows: d̃(x) is the in absolute value smallest number such that

φ(x + d̃(x)∇φ(x)) = φ̂_h(x) for x ∈ Ω^Γ_h.

Based on d̃ we define the mapping

Ψ(x) := x + d̃(x)∇φ(x), x ∈ Ω^Γ_h,

which has the property Ψ(Γ^lin) = Γ. To avoid computations with φ (which may not even be available) we use a similar construction with φ replaced by its (finite element) approximation φ_h. The resulting mapping Ψ_h is not necessarily a finite element function. The mesh transformation Θ^{k_g}_h is obtained by projection of Ψ_h into the finite element space (V^{k_g}_h)³.

The approximation of Γ is defined as

Γ^{k_g}_h := Θ^{k_g}_h(Γ^lin) = { x | φ̂_h((Θ^{k_g}_h)^{−1}(x)) = 0 }.

We denote the transformed cut mesh domain by Ω^Γ_Θ := Θ^{k_g}_h(Ω^Γ_h). We assume that h is small enough such that Ω^Γ_Θ ⊂ U_δ holds. We apply to V^k_h the transformation Θ^{k_g}_h, resulting in the parametric space

V^{k,k_g}_{h,Θ} := { v_h ∘ (Θ^{k_g}_h)^{−1} | v_h ∈ V^k_h, restricted to Ω^Γ_h },

and its vector-valued counterpart (V^{k,k_g}_{h,Θ})³. Note that k_g denotes the degree of the polynomials used in the parametric mapping Θ^{k_g}_h, and k the degree of the polynomials used in the finite element space. To simplify the notation we delete the superscript k_g and write V^k_{h,Θ} = V^{k,k_g}_{h,Θ} (and similarly for the vector-valued space), Θ_h = Θ^{k_g}_h and Γ_h = Γ^{k_g}_h. Here and further on in the paper we write x ≲ y to state that there exists a constant c > 0, which is independent of the mesh parameter h and the position of Γ and Γ_h in the background mesh, such that the inequality x ≤ cy holds. Similarly for x ≳ y, and x ∼ y means that both x ≲ y and x ≳ y hold.

We recall some known approximation results from the literature [5]. The parametric interpolation I^k_Θ : C(Ω^Γ_Θ) → V^k_{h,Θ} is defined by (I^k_Θ v) ∘ Θ_h = I^k(v ∘ Θ_h). We have the following optimal interpolation error bound for 0 ≤ l ≤ k + 1:

‖v − I^k_Θ v‖_{H^l(Θ_h(T))} ≲ h^{k+1−l} ‖v‖_{H^{k+1}(Θ_h(T))} for all v ∈ H^{k+1}(Θ_h(T)), T ∈ T_h. (3.1)

We also need the following trace estimate ([8]):

‖v‖²_{L²(Γ_T)} ≲ h^{−1}‖v‖²_{L²(Θ_h(T))} + h‖∇v‖²_{L²(Θ_h(T))} for v ∈ H¹(Θ_h(T)), (3.2)

with Γ_T := Γ_h ∩ Θ_h(T). The Sobolev norms on Ω^Γ_Θ of the normal extension u^e can be estimated by the corresponding norms on Γ ([20]):

‖D^µ u^e‖_{L²(Ω^Γ_Θ)} ≲ h^{1/2} ‖u‖_{H^m(Γ)} for all u ∈ H^m(Γ), |µ| ≤ m. (3.3)

Lemma 3.1.
For the space (V^k_{h,Θ})³ we have the approximation error estimate

min_{v_h ∈ (V^k_{h,Θ})³} ( ‖v^e − v_h‖_{L²(Γ_h)} + h‖∇(v^e − v_h)‖_{L²(Γ_h)} )
≤ ‖v^e − I^k_Θ v^e‖_{L²(Γ_h)} + h‖∇(v^e − I^k_Θ v^e)‖_{L²(Γ_h)} ≲ h^{k+1} ‖v‖_{H^{k+1}(Γ)} for all v ∈ H^{k+1}(Γ).

Proof. The proof uses standard arguments, based on (3.1), (3.2) and (3.3), cf. [5].

The following lemma, taken from [5], gives an approximation error for the easy to compute normal approximation n_h, which is used in the methods introduced below. Lemma 3.2.
For x ∈ T ∈ T^Γ_h define

n^lin = n^lin(T) := ∇φ̂_h(x) / ‖∇φ̂_h(x)‖ = ∇φ̂_h|_T / ‖∇φ̂_h|_T‖, n_h(Θ_h(x)) := DΘ_h(x)^{−T} n^lin / ‖DΘ_h(x)^{−T} n^lin‖.

Let n_{Γ_h}(x), x ∈ Γ_h a.e., be the unit normal on Γ_h (in the direction of φ_h > 0). The following holds:

‖n_h − n‖_{L^∞(Ω^Γ_Θ)} ≲ h^{k_g}, ‖n_{Γ_h} − n‖_{L^∞(Γ_h)} ≲ h^{k_g}.

Similar to the extension of a function u defined on Γ to u^e defined on U_δ, we define the lifting u^l of a function u defined on Γ_h by

u^l(p(x)) = u(x) for x ∈ Γ_h, u^l(x) = u^l(p(x)) for x ∈ U_δ.

A norm on H¹(Γ_h)³ is defined using the component-wise lifting by

‖u‖²_{H¹(Γ_h)} := ∫_{Γ_h} ‖u(s)‖² + ‖∇u^l(s) P_h(s)‖² ds,

with P_h = I − n_h n_h^T. In (2.2) the term with ∇u^e corresponds to tangential gradients of all components, cf. (2.3). The lifting used in the definition of the H¹(Γ_h)-norm is constant along the normal to Γ (not Γ_h). Therefore, to eliminate the part of the (componentwise) gradient which is normal to Γ_h one uses the projection P_h. We also introduce the following spaces:

V_{reg,h} := { v ∈ H¹(Ω^Γ_Θ) | tr|_{Γ_h} v ∈ H¹(Γ_h) } ⊃ V^k_{h,Θ},

and its vector-valued counterpart (V_{reg,h})³.
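The construction in Remark 3.1 requires, for each point x, solving the scalar equation φ(x + d̃(x)∇φ(x)) = φ̂_h(x) along the fixed search direction ∇φ(x). This is a one-dimensional root-finding problem. The sketch below uses plain bisection on an assumed bracket; the paper does not prescribe a particular scalar solver, and the level set function, bracket width and tolerances here are illustrative:

```python
import numpy as np

def quasi_normal_shift(phi, grad_phi, x, target, bracket=0.5, tol=1e-12):
    """Solve g(s) := phi(x + s * grad_phi(x)) - target = 0 by bisection,
    assuming a sign change of g on [-bracket, bracket]."""
    direction = grad_phi(x)
    g = lambda s: phi(x + s * direction) - target
    a, b = -bracket, bracket
    assert g(a) * g(b) <= 0, "root not bracketed"
    while b - a > tol:
        m = 0.5 * (a + b)
        if g(a) * g(m) <= 0:
            b = m
        else:
            a = m
    return 0.5 * (a + b)

phi = lambda x: np.dot(x, x) - 1.0        # level set of the unit sphere (illustrative)
grad_phi = lambda x: 2.0 * x
x = np.array([1.1, 0.0, 0.0])             # a point near Gamma
s = quasi_normal_shift(phi, grad_phi, x, target=0.0)
mapped = x + s * grad_phi(x)              # lies on the zero level set
```

Here the target value 0 maps x onto Γ itself; in the construction of Θ^{k_g}_h the target is instead the value φ̂_h(x), so that Γ^lin is mapped (approximately) onto Γ.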
4. Parametric trace finite element methods.
In this section we introduce three parametric trace finite element methods. These are obtained by applying a Galerkin method (modulo a geometry error due to Γ_h ≈ Γ) to the three formulations (P1), (P2) and (L) with the parametric finite element space (V^k_{h,Θ})³.

We introduce further notation, in particular discrete variants of the bilinear forms a(·, ·), a_T(·, ·) and the penalty bilinear form k(·, ·) introduced above. Since we use a trace FEM, we need a stabilization that eliminates instabilities caused by the small cuts. For this we use the so-called "normal derivative volume stabilization" [5] (s_h(·, ·) and s̃_h(·, ·) below):

∇_{Γ_h} u(x) := P_h(x)∇u(x)P_h(x), x ∈ Γ_h,
E_h(u) := ½ (∇_{Γ_h} u + ∇^T_{Γ_h} u), E_{T,h}(u) := E_h(u) − u_N H_h,
a_h(u, v) := ∫_{Γ_h} E_h(u) : E_h(v) ds_h + ∫_{Γ_h} u · v ds_h,
a_{T,h}(u, v) := ∫_{Γ_h} E_{T,h}(u) : E_{T,h}(v) ds_h + ∫_{Γ_h} P_h u · P_h v ds_h,
k_h(u, v) := η ∫_{Γ_h} (u · ñ_h)(v · ñ_h) ds_h, s_h(u, v) := ρ ∫_{Ω^Γ_Θ} (∇u n_h) · (∇v n_h) dx,
b_h(u, µ) := (u · n_h, µ)_{L²(Γ_h)} + s̃_h(u, µ), s̃_h(u, µ) := ρ̃ ∫_{Ω^Γ_Θ} (n_h^T ∇u n_h)(n_h · ∇µ) dx.

All these bilinear forms are well-defined for u, v ∈ (V_{reg,h})³, µ ∈ V_{reg,h}. The normal vector ñ_h, used in the penalty term k_h(·, ·), and the curvature tensor H_h are approximations of the exact normal and the exact Weingarten mapping, respectively. The choice of the stabilization parameters ρ, ρ̃ is discussed below. Remark 4.1.
We use E_{T,h}(u) := E_h(u) − u_N H_h instead of E_{T,h}(u) = E_h(P_h u) because the latter requires (tangential) differentiation of P_h, which has certain disadvantages. The reason that we introduce yet another normal approximation ñ_h is the following. In the analysis below we will see that in order to achieve optimal order estimates we need the normal ñ_h used in the penalty term to be an approximation of at least one order higher than the normal approximation n_h. An approximation H_h of the Weingarten map can be easily obtained, e.g., by taking H_h = ∇(I^{k_g}_Θ(n_h)). The stabilization with s_h(·, ·) used in the variational penalty formulations below guarantees that the stiffness matrix has a spectral condition number ∼ h^{−2}, independent of how the interface cuts the outer triangulation.

To quantify the error in the approximations ñ_h ≈ n, H_h ≈ H, we introduce one further order parameter k_p (besides k and k_g) and assume:

‖n − ñ_h‖_{L^∞(Γ_h)} ≲ h^{k_p}, k_p ≥ k_g, (4.1)
‖H − H_h‖_{L^∞(Γ_h)} ≲ h^{k_g−1}. (4.2)

We now introduce discrete versions of the formulations (P1), (P2) and (L). For these we need a suitable extension of the data f to Γ_h, which is denoted by f_h. Discrete inconsistent penalty formulation.
This problem is as follows: determine u_h ∈ (V^k_{h,Θ})³ such that for all v_h ∈ (V^k_{h,Θ})³

A^{P1}_h(u_h, v_h) := a_h(u_h, v_h) + s_h(u_h, v_h) + k_h(u_h, v_h) = (f_h, v_h)_{L²(Γ_h)}. (P1h)

Discrete consistent penalty formulation.
This problem is as follows: determine u_h ∈ (V^k_{h,Θ})³ such that for all v_h ∈ (V^k_{h,Θ})³

A^{P2}_h(u_h, v_h) := a_{T,h}(u_h, v_h) + s_h(u_h, v_h) + k_h(u_h, v_h) = (f_h, v_h)_{L²(Γ_h)}. (P2h)

Discrete Lagrange multiplier formulation.
This problem is as follows: determine (u_h, λ_h) ∈ (V^k_{h,Θ})³ × V^{k_l}_{h,Θ} such that

A^L_h(u_h, v_h) + b_h(v_h, λ_h) = (f_h, v_h)_{L²(Γ_h)} for all v_h ∈ (V^k_{h,Θ})³,
b_h(u_h, µ_h) = 0 for all µ_h ∈ V^{k_l}_{h,Θ}, (Lh)

with A^L_h(u, v) := a_h(u, v) + s_h(u, v). Remark 4.2.
Problem (Lh) uses a Lagrange multiplier approach to enforce the tangential condition weakly. This formulation is consistent without using additional tangential projections in a_h(·, ·) and avoids the approximation of the Weingarten map H. An obvious drawback of this formulation is that the resulting linear systems can be significantly larger than the ones in the penalty formulations. In addition to the stabilization s_h(·, ·) we use a "normal derivative volume" stabilization for the Lagrange multiplier term as well. Different from s_h(·, ·), this stabilization s̃_h(·, ·) is essential for the well-posedness of this formulation, cf. section 5.1.
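On the algebraic level, the penalty and Lagrange multiplier approaches lead to differently structured linear systems: a (larger but definite-free of extra unknowns) penalized system versus a saddle-point system with an extra multiplier block. The following toy sketch contrasts the two on a generic constrained quadratic problem; it is pure linear algebra with no surface geometry, and all names, sizes and data are illustrative, not the paper's discrete systems:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 6, 2
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)            # SPD "stiffness-like" block
B = rng.standard_normal((m, n))        # constraint matrix, constraint B u = 0
f = rng.standard_normal(n)

# Lagrange multiplier approach: solve the saddle-point system
#   [[A, B^T], [B, 0]] [u; lam] = [f; 0]
K = np.block([[A, B.T], [B, np.zeros((m, m))]])
u_lag = np.linalg.solve(K, np.concatenate([f, np.zeros(m)]))[:n]

# Penalty approach: solve (A + eta * B^T B) u = f for increasing eta
def u_pen(eta):
    return np.linalg.solve(A + eta * B.T @ B, f)

# The penalty solution approaches the constrained solution as eta grows.
err = [np.linalg.norm(u_pen(10.0**k) - u_lag) for k in (2, 4, 6)]
```

The saddle-point system enforces the constraint exactly at the cost of extra unknowns, while the penalty system stays smaller but satisfies the constraint only up to O(η^{-1}), mirroring (on a toy level) the inconsistency quantified in Lemma 2.2.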
5. Error analysis of the parametric TraceFEM.
In this section we present an error analysis of the TraceFEMs (P1h), (P2h) and (Lh). We first address the choice of the stabilization parameters ρ, ρ̃. From the analysis in [5] it is known that for optimal error bounds one must restrict ρ to the range h ≲ ρ ≲ h^{−1}. A more detailed analysis (that we do not present here) has shown that there are no significant gains if one chooses for ρ̃ a different scaling w.r.t. h than for ρ. Therefore, to simplify the presentation, in the remainder we restrict the stabilization parameters to

h ≲ ρ = ρ̃ ≲ h^{−1}. (5.1)

5.1. Well-posedness of discretizations. We start with some basic results concerning the bilinear forms. We use the following natural norms:

‖u‖²_{A^{Pi}_h} := A^{Pi}_h(u, u), i = 1, 2, ‖u‖²_{A^L_h} := A^L_h(u, u), ‖µ‖²_M := ‖µ‖²_{L²(Γ_h)} + ρ‖n_h · ∇µ‖²_{L²(Ω^Γ_Θ)}.

Before we analyze continuity and ellipticity of the bilinear forms we recall a lemma which shows that for finite element functions the L²-norm in the neighborhood Ω^Γ_Θ can be controlled by the L²-norm on Γ_h and the L²-norm of the normal derivative on Ω^Γ_Θ. Lemma 5.1.
For all k ∈ N, k ≥ 1, the following inequality holds:

‖v_h‖²_{L²(Ω^Γ_Θ)} ≲ h‖v_h‖²_{L²(Γ_h)} + h²‖n_h · ∇v_h‖²_{L²(Ω^Γ_Θ)} for all v_h ∈ V^k_{h,Θ}.

Proof. In [5, Lemma 7.8] a proof of this result for the isoparametric case (i.e. k = k_g) is given. This proof also applies to the case k ≠ k_g.

We formulate a few corollaries that are useful in the remainder. Using the trace inequality (3.2) and a standard finite element inverse inequality one obtains

‖v_h‖²_{L²(Ω^Γ_Θ)} ∼ h‖v_h‖²_{L²(Γ_h)} + h²‖n_h · ∇v_h‖²_{L²(Ω^Γ_Θ)} for all v_h ∈ V^k_{h,Θ}, (5.2)
‖v_h‖²_{L²(Ω^Γ_Θ)} ∼ h‖v_h‖²_{L²(Γ_h)} + h²‖∇v_h n_h‖²_{L²(Ω^Γ_Θ)} for all v_h ∈ (V^k_{h,Θ})³. (5.3)

The result in (5.3) is obtained by componentwise application of (5.2). A further direct consequence of Lemma 5.1 is (we use (5.1)):

‖v_h‖_{L²(Ω^Γ_Θ)} ≲ h^{1/2} ‖v_h‖_M for all v_h ∈ V^k_{h,Θ}. (5.4)

Using (3.2) and (5.2) we also obtain the inverse inequality

‖∇v_h‖²_{L²(Γ_h)} ≲ h^{−2}‖v_h‖²_{L²(Γ_h)} + h^{−1}‖n_h · ∇v_h‖²_{L²(Ω^Γ_Θ)} ≲ h^{−2}‖v_h‖²_M, v_h ∈ V^k_{h,Θ}, (5.5)

and the vector analogue

‖∇v_h‖²_{L²(Γ_h)} ≲ h^{−2}‖v_h‖²_{L²(Γ_h)} + h^{−1}‖∇v_h n_h‖²_{L²(Ω^Γ_Θ)}, v_h ∈ (V^k_{h,Θ})³. (5.6)

Lemma 5.2.
The following inequalities hold:

A^{Pi}_h(u, v) ≤ ‖u‖_{A^{Pi}_h} ‖v‖_{A^{Pi}_h} for all u, v ∈ (V_{reg,h})³, i = 1, 2,
A^L_h(u, v) ≤ ‖u‖_{A^L_h} ‖v‖_{A^L_h} for all u, v ∈ (V_{reg,h})³,
b_h(u, µ) ≤ ‖u‖_{A^L_h} ‖µ‖_M for all u ∈ (V_{reg,h})³, µ ∈ V_{reg,h},
A^{Pi}_h(u_h, u_h) ≳ h^{−1}‖u_h‖²_{L²(Ω^Γ_Θ)} for all u_h ∈ (V^k_{h,Θ})³, i = 1, 2,
A^L_h(u_h, u_h) ≳ h^{−1}‖u_h‖²_{L²(Ω^Γ_Θ)} for all u_h ∈ (V^k_{h,Θ})³.

Proof. The first three estimates follow directly from the Cauchy-Schwarz inequality. To show the other two we use (5.3):

A^{Pi}_h(u_h, u_h) ≥ ‖u_h‖²_{L²(Γ_h)} + ρ‖∇u_h n_h‖²_{L²(Ω^Γ_Θ)} ≳ h^{−1}‖u_h‖²_{L²(Ω^Γ_Θ)},
A^L_h(u_h, u_h) ≥ ‖u_h‖²_{L²(Γ_h)} + ρ‖∇u_h n_h‖²_{L²(Ω^Γ_Θ)} ≳ h^{−1}‖u_h‖²_{L²(Ω^Γ_Θ)}.

From Lemma 5.2 it follows that the discrete penalty problems (P1h) and (P2h) have unique solutions. For well-posedness of the discrete Lagrange multiplier formulation (Lh) we need a discrete inf-sup estimate. This we will now derive. We outline the idea of the analysis. For the bilinear form b(u, µ) = (u · n, µ)_{L²(Γ)} used in the continuous problem (L) we have, for arbitrary µ ∈ L²(Γ) and with û := µn, that b(û, µ) = ‖µ‖²_{L²(Γ)}. Furthermore, a(û, û) = ‖û_N H‖²_{L²(Γ)} + ‖û‖²_{L²(Γ)} ≤ c‖µ‖²_{L²(Γ)} holds. From this the inf-sup property of b(·, ·) for the continuous problem can easily be concluded. For deriving a discrete inf-sup result we combine this approach with perturbation arguments. In Lemma 5.3 we analyze the perturbation b_h(û, µ) − b(û, µ), and in Lemma 5.4 we derive the discrete analogue of a(û, û) ≤ c‖µ‖²_{L²(Γ)}. Combining these results we obtain the discrete inf-sup property in Lemma 5.5. Lemma 5.3.
For h small enough the following inequality holds:

b_h(µ_h n, µ_h) ≳ ‖µ_h‖²_M for all µ_h ∈ V^m_{h,Θ}.

Proof. Using Lemma 3.2 we get

2 − 2 n · n_h = ‖n − n_h‖² ≲ h^{2k_g} a.e. on Γ_h.

Hence, there exists a constant c > 0 such that

1 − c h^{2k_g} ≤ n · n_h ≤ 1 a.e. on Γ_h. (5.7)

Take µ_h ∈ V^m_{h,Θ}. From the definition of b_h(·, ·) we obtain

b_h(µ_h n, µ_h) = (µ_h n · n_h, µ_h)_{L²(Γ_h)} + ρ̃ ∫_{Ω^Γ_Θ} (n_h^T ∇(µ_h n) n_h)(n_h · ∇µ_h) dx =: (I) + (II). (5.8)

Using inequality (5.7), term (I) can be estimated by

(µ_h n · n_h, µ_h)_{L²(Γ_h)} ≥ (1 − c h^{2k_g}) ‖µ_h‖²_{L²(Γ_h)}, (5.9)

and term (II) by

ρ̃ ∫_{Ω^Γ_Θ} (n_h^T ∇(µ_h n) n_h)(n_h · ∇µ_h) dx
= ρ̃ ∫_{Ω^Γ_Θ} (n_h · ∇µ_h)(n · n_h)(n_h · ∇µ_h) dx + ρ̃ ∫_{Ω^Γ_Θ} µ_h (n_h^T ∇n n_h)(n_h · ∇µ_h) dx
≥ (1 − c h^{2k_g}) ρ̃ ‖n_h · ∇µ_h‖²_{L²(Ω^Γ_Θ)} + ρ̃ ∫_{Ω^Γ_Θ} µ_h (n_h^T ∇n n_h)(n_h · ∇µ_h) dx. (5.10)

Since ∇n n = 0 and n^T ∇n = 0 we get for the last term on the right-hand side

ρ̃ ∫_{Ω^Γ_Θ} µ_h (n_h^T ∇n n_h)(n_h · ∇µ_h) dx = ρ̃ ∫_{Ω^Γ_Θ} µ_h ((n_h − n)^T ∇n (n_h − n))(n_h · ∇µ_h) dx
≥ −ρ̃ ‖∇n‖_{L^∞(Ω^Γ_Θ)} ‖n_h − n‖²_{L^∞(Ω^Γ_Θ)} ‖µ_h‖_{L²(Ω^Γ_Θ)} ‖n_h · ∇µ_h‖_{L²(Ω^Γ_Θ)}
≳ −h^{2k_g} ρ̃ ‖µ_h‖_{L²(Ω^Γ_Θ)} ‖n_h · ∇µ_h‖_{L²(Ω^Γ_Θ)} ≳ −h^{2k_g} ‖µ_h‖²_M,

where in the last step we used (5.4) and (5.1). Combined with (5.8), (5.9) and (5.10) we obtain

b_h(µ_h n, µ_h) ≳ (1 − c̃ h^{2k_g}) ‖µ_h‖²_M ≳ ‖µ_h‖²_M,

provided h is sufficiently small. Lemma 5.4.
Take $\mu_h \in V_{h,\Theta}^m$ and define $v_h := I_\Theta^m(\mu_h n) \in \mathbf V_{h,\Theta}^m$. The following inequality holds:
$$\|v_h\|_{A_h^L} \lesssim \|\mu_h\|_M.$$
Proof. Using the triangle inequality we get
$$\|v_h\|_{A_h^L} \le \|\mu_h n\|_{A_h^L} + \|I_\Theta^m(\mu_h n) - \mu_h n\|_{A_h^L}. \tag{5.11}$$
We estimate the two terms on the right-hand side. The definition of the norm implies
$$\|\mu_h n\|_{A_h^L}^2 = a_h(\mu_h n, \mu_h n) + s_h(\mu_h n, \mu_h n). \tag{5.12}$$
The first term can be bounded by
$$a_h(\mu_h n, \mu_h n) \lesssim \|\nabla_{\Gamma_h}(\mu_h n) + \nabla_{\Gamma_h}^T(\mu_h n)\|_{L^2(\Gamma_h)}^2 + \|\mu_h n\|_{L^2(\Gamma_h)}^2 \lesssim \|\nabla_{\Gamma_h}(\mu_h n)\|_{L^2(\Gamma_h)}^2 + \|\mu_h n\|_{L^2(\Gamma_h)}^2$$
$$\lesssim \|P_h\big(n (\nabla \mu_h)^T + \mu_h \nabla n\big) P_h\|_{L^2(\Gamma_h)}^2 + \|\mu_h\|_{L^2(\Gamma_h)}^2 \lesssim \|(P_h - P)\, n (\nabla \mu_h)^T P_h\|_{L^2(\Gamma_h)}^2 + \|P_h\, \mu_h \nabla n\, P_h\|_{L^2(\Gamma_h)}^2 + \|\mu_h\|_M^2$$
$$\lesssim h^{2k_g} \|\nabla \mu_h\|_{L^2(\Gamma_h)}^2 + \|\mu_h\|_M^2 \overset{(5.5)}{\lesssim} (h^{2k_g-2} + 1)\, \|\mu_h\|_M^2 \lesssim \|\mu_h\|_M^2. \tag{5.13}$$
For the second term on the right-hand side of equation (5.12) we get
$$s_h(\mu_h n, \mu_h n) = \rho\, \|\nabla(\mu_h n) n_h\|_{L^2(\Omega_\Theta^\Gamma)}^2 \lesssim \rho \Big( \|n\,(\nabla \mu_h \cdot n_h)\|_{L^2(\Omega_\Theta^\Gamma)}^2 + \|\mu_h \nabla n\, n_h\|_{L^2(\Omega_\Theta^\Gamma)}^2 \Big)$$
$$\overset{\nabla n\, n = 0}{\lesssim} \rho \Big( \|n_h \cdot \nabla \mu_h\|_{L^2(\Omega_\Theta^\Gamma)}^2 + \|\mu_h \nabla n\,(n_h - n)\|_{L^2(\Omega_\Theta^\Gamma)}^2 \Big) \lesssim \|\mu_h\|_M^2 + \rho\, h^{2k_g} \|\mu_h\|_{L^2(\Omega_\Theta^\Gamma)}^2 \overset{(5.4)}{\lesssim} \|\mu_h\|_M^2.$$
Combining this with (5.13) we obtain
$$\|\mu_h n\|_{A_h^L} \lesssim \|\mu_h\|_M. \tag{5.14}$$
We now consider the second term on the right-hand side of (5.11). Using $|\mu_h|_{H^{m+1}(\Theta_h(T))} = 0$ for all $T \in \mathcal T^\Gamma_h$ and, componentwise, the interpolation result (3.1), we get
$$\|I_\Theta^m(\mu_h n) - \mu_h n\|_{A_h^L}^2 \lesssim \|\nabla(I_\Theta^m(\mu_h n) - \mu_h n)\|_{L^2(\Gamma_h)}^2 + \|I_\Theta^m(\mu_h n) - \mu_h n\|_{L^2(\Gamma_h)}^2 + \rho\, \|\nabla(I_\Theta^m(\mu_h n) - \mu_h n) n_h\|_{L^2(\Omega_\Theta^\Gamma)}^2 \tag{5.15}$$
$$= \sum_{T \in \mathcal T^\Gamma_h} \Big( \|\nabla(I_\Theta^m(\mu_h n) - \mu_h n)\|_{L^2(\Gamma_T)}^2 + \|I_\Theta^m(\mu_h n) - \mu_h n\|_{L^2(\Gamma_T)}^2 + \rho\, \|\nabla(I_\Theta^m(\mu_h n) - \mu_h n) n_h\|_{L^2(\Theta_h(T))}^2 \Big)$$
$$\overset{(3.2)}{\lesssim} \sum_{T \in \mathcal T^\Gamma_h} \Big( h^{-1} \|I_\Theta^m(\mu_h n) - \mu_h n\|_{L^2(\Theta_h(T))}^2 + (h^{-1} + h + \rho)\, \|I_\Theta^m(\mu_h n) - \mu_h n\|_{H^1(\Theta_h(T))}^2 + h\, \|I_\Theta^m(\mu_h n) - \mu_h n\|_{H^2(\Theta_h(T))}^2 \Big)$$
$$\lesssim \sum_{T \in \mathcal T^\Gamma_h} h^{2m-1} \|\mu_h n\|_{H^{m+1}(\Theta_h(T))}^2 \lesssim \sum_{T \in \mathcal T^\Gamma_h} h^{2m-1} \|\mu_h\|_{H^m(\Theta_h(T))}^2 \overset{\text{inv. ineq.}}{\lesssim} \sum_{T \in \mathcal T^\Gamma_h} h^{-1} \|\mu_h\|_{L^2(\Theta_h(T))}^2 \lesssim h^{-1} \|\mu_h\|_{L^2(\Omega_\Theta^\Gamma)}^2 \overset{(5.4)}{\lesssim} \|\mu_h\|_M^2.$$
Combining this with (5.11) and (5.14) we get the bound $\|v_h\|_{A_h^L} \lesssim \|\mu_h\|_M$.

Using these results one easily obtains the following discrete inf-sup property for $b_h(\cdot,\cdot)$. Lemma 5.5.
Take $m \ge 1$. There exists a constant $c > 0$, independent of $h$ and of how $\Gamma$ intersects the outer triangulation, such that, for $h$ sufficiently small,
$$\sup_{v_h \in \mathbf V_{h,\Theta}^m} \frac{b_h(v_h, \mu_h)}{\|v_h\|_{A_h^L}} \gtrsim \big(1 - c \sqrt{\rho}\, h\big)\, \|\mu_h\|_M \quad \text{for all } \mu_h \in V_{h,\Theta}^m. \tag{5.16}$$
Proof. Take $\mu_h \in V_{h,\Theta}^m$ and define $v_h := I_\Theta^m(\mu_h n) \in \mathbf V_{h,\Theta}^m$. Using Lemma 5.3 we get
$$|b_h(v_h, \mu_h)| \ge |b_h(\mu_h n, \mu_h)| - |b_h(I_\Theta^m(\mu_h n) - \mu_h n, \mu_h)| \gtrsim \|\mu_h\|_M^2 - |b_h(I_\Theta^m(\mu_h n) - \mu_h n, \mu_h)|$$
$$\gtrsim \|\mu_h\|_M^2 - \Big( \|I_\Theta^m(\mu_h n) - \mu_h n\|_{L^2(\Gamma_h)}^2 + \rho\, \|\nabla(I_\Theta^m(\mu_h n) - \mu_h n) n_h\|_{L^2(\Omega_\Theta^\Gamma)}^2 \Big)^{\frac12}\, \|\mu_h\|_M.$$
Following the estimates used in (5.15) one obtains
$$\|I_\Theta^m(\mu_h n) - \mu_h n\|_{L^2(\Gamma_h)}^2 + \rho\, \|\nabla(I_\Theta^m(\mu_h n) - \mu_h n) n_h\|_{L^2(\Omega_\Theta^\Gamma)}^2 \lesssim \rho\, h^2\, \|\mu_h\|_M^2.$$
Combining these results with Lemma 5.4 we get
$$\sup_{v_h \in \mathbf V_{h,\Theta}^m} \frac{b_h(v_h, \mu_h)}{\|v_h\|_{A_h^L}} \gtrsim \big(1 - c \sqrt{\rho}\, h\big)\, \|\mu_h\|_M \quad \text{for all } \mu_h \in V_{h,\Theta}^m,$$
which completes the proof. Corollary 5.6.
Take $m \ge 1$. Consider $\rho = c_\alpha h^{-\alpha}$, $\alpha \in [0,2]$, and assume $h \le h_0 \le 1$. Take $c_\alpha$ such that $0 < c_\alpha < c^{-2} h_0^{\alpha-2}$, with $c$ as in (5.16). Then there exists a constant $d > 0$, independent of $h$ and of how $\Gamma$ intersects the outer triangulation, such that:
$$\sup_{v_h \in \mathbf V_{h,\Theta}^m} \frac{b_h(v_h, \mu_h)}{\|v_h\|_{A_h^L}} \ge d\, \|\mu_h\|_M \quad \text{for all } \mu_h \in V_{h,\Theta}^m.$$
Assumption 5.1.
We restrict to $\rho = c_\alpha h^{-\alpha}$, $\alpha \in [0,2]$, with $c_\alpha$ as in Corollary 5.6. Corollary 5.7.
Under Assumption 5.1 the discrete inf-sup property for $b_h(\cdot,\cdot)$ holds for the pair of spaces $(\mathbf V_{h,\Theta}^k, V_{h,\Theta}^{k_l})$ with $1 \le k_l \le k$. The constant in the discrete inf-sup estimate depends on $k_l$ but is independent of $h$ and of how $\Gamma$ intersects the outer triangulation.

From the fact that $A_h^L(\cdot,\cdot)$ defines a scalar product on $\mathbf V_{h,\Theta}^k$, cf. Lemma 5.2, and the discrete inf-sup property of $b_h(\cdot,\cdot)$ on $\mathbf V_{h,\Theta}^k \times V_{h,\Theta}^{k_l}$ it follows that problem (Lh) has a unique solution. Note that to show the discrete inf-sup property of $b_h(\cdot,\cdot)$ the stabilization $\tilde s_h(\cdot,\cdot)$ is essential. As usual, the discretization error analysis is based on a Strang lemma, which bounds the discretization error in terms of an approximation error and a consistency error. We derive such Strang lemmas for the three discrete problems (P1h), (P2h) and (Lh). We first treat (P1h) and (P2h).
Theorem 5.8.
For the unique solution $u = u_T^* \in \mathbf V_T$ of problem (C) and the unique solution $u_h \in \mathbf V_{h,\Theta}^k$ of problem (P1h) respectively (P2h) the following discretization error bound holds for $i = 1, 2$:
$$\|u^e - u_h\|_{A_h^{P_i}} \le 2 \min_{v_h \in \mathbf V_{h,\Theta}^k} \|u^e - v_h\|_{A_h^{P_i}} + \sup_{w_h \in \mathbf V_{h,\Theta}^k} \frac{|A_h^{P_i}(u^e, w_h) - (f_h, w_h)_{L^2(\Gamma_h)}|}{\|w_h\|_{A_h^{P_i}}}. \tag{5.17}$$
Proof. The proof uses standard arguments. For an arbitrary $v_h \in \mathbf V_{h,\Theta}^k$ we have
$$\|u^e - u_h\|_{A_h^{P_i}} \le \|u^e - v_h\|_{A_h^{P_i}} + \|v_h - u_h\|_{A_h^{P_i}}. \tag{5.18}$$
Using the definition of the norm and setting $w_h = v_h - u_h \in \mathbf V_{h,\Theta}^k$ results in
$$\|v_h - u_h\|_{A_h^{P_i}}^2 = A_h^{P_i}(v_h - u_h, v_h - u_h) = A_h^{P_i}(v_h - u_h, w_h) \le |A_h^{P_i}(v_h - u^e, w_h)| + |A_h^{P_i}(u^e - u_h, w_h)|$$
$$\le \|u^e - v_h\|_{A_h^{P_i}} \|w_h\|_{A_h^{P_i}} + |A_h^{P_i}(u^e, w_h) - (f_h, w_h)_{L^2(\Gamma_h)}|.$$
Dividing by $\|w_h\|_{A_h^{P_i}} = \|v_h - u_h\|_{A_h^{P_i}}$ together with inequality (5.18) completes the proof.

For the analysis of problem (Lh) we define the bilinear form
$$\mathcal A_h\big((u,\lambda),(v,\mu)\big) := A_h^L(u,v) + b_h(v,\lambda) + b_h(u,\mu), \quad (u,\lambda),\, (v,\mu) \in \mathbf V_{reg,h} \times V_{reg,h}.$$
From the well-posedness of the discrete problem (Lh) it follows that $\mathcal A_h(\cdot,\cdot)$ fulfills a discrete inf-sup property, i.e.
$$\sup_{(v_h,\mu_h) \in \mathbf V_{h,\Theta}^k \times V_{h,\Theta}^{k_l}} \frac{\mathcal A_h\big((u_h,\lambda_h),(v_h,\mu_h)\big)}{\|v_h\|_{A_h^L} + \|\mu_h\|_M} \gtrsim \|u_h\|_{A_h^L} + \|\lambda_h\|_M \tag{5.19}$$
for all $(u_h,\lambda_h) \in \mathbf V_{h,\Theta}^k \times V_{h,\Theta}^{k_l}$. This will be used in the proof of the following Strang lemma. Theorem 5.9.
Let $(u,\lambda) = (u_T^*, \lambda) \in \mathbf V_T \times L^2(\Gamma)$ be the unique solution of problem (L) with $g := f$, and $(u_h, \lambda_h) \in \mathbf V_{h,\Theta}^k \times V_{h,\Theta}^{k_l}$ the unique solution of the discrete problem (Lh). The following discretization error bound holds:
$$\|u^e - u_h\|_{A_h^L} + \|\lambda^e - \lambda_h\|_M \lesssim \min_{(v_h,\mu_h) \in \mathbf V_{h,\Theta}^k \times V_{h,\Theta}^{k_l}} \Big( \|u^e - v_h\|_{A_h^L} + \|\lambda^e - \mu_h\|_M \Big)$$
$$+ \sup_{(w_h,\xi_h) \in \mathbf V_{h,\Theta}^k \times V_{h,\Theta}^{k_l}} \frac{|\mathcal A_h\big((u^e,\lambda^e),(w_h,\xi_h)\big) - (f_h, w_h)_{L^2(\Gamma_h)}|}{\|w_h\|_{A_h^L} + \|\xi_h\|_M}. \tag{5.20}$$
Proof. The discretization (Lh) can be formulated in terms of the bilinear form $\mathcal A_h(\cdot,\cdot)$ on the product space $\mathbf V_{h,\Theta}^k \times V_{h,\Theta}^{k_l}$. Using the discrete inf-sup property (5.19) and the continuity of $\mathcal A_h(\cdot,\cdot)$ with respect to the product norm $\|\cdot\|_{A_h^L} + \|\cdot\|_M$, one can apply the same arguments as in the proof of Theorem 5.8.

In the following two sections we analyze the approximation errors and the consistency errors that appear in the Strang lemmas above. In the following lemma we show approximation error bounds in the norms that occur in these Strang lemmas.
Lemma 5.10.
For $u \in H^{k+1}(\Gamma)^3$ and $\lambda \in H^{k_l+1}(\Gamma)$ the following approximation error bounds hold:
$$\min_{v_h \in \mathbf V_{h,\Theta}^k} \|u^e - v_h\|_{A_h^{P_i}} \lesssim (h^k + \eta^{\frac12} h^{k+1})\, \|u\|_{H^{k+1}(\Gamma)}, \quad i = 1,2, \tag{5.21}$$
$$\min_{(v_h,\mu_h) \in \mathbf V_{h,\Theta}^k \times V_{h,\Theta}^{k_l}} \Big( \|u^e - v_h\|_{A_h^L} + \|\lambda^e - \mu_h\|_M \Big) \lesssim h^k \|u\|_{H^{k+1}(\Gamma)} + \big(h^{k_l+1} + \rho^{\frac12} h^{k_l+\frac12}\big)\, \|\lambda\|_{H^{k_l+1}(\Gamma)}. \tag{5.22}$$
Proof. We start with the $\|\cdot\|_{A_h^{P_1}}$-norm. Let $u \in H^{k+1}(\Gamma)^3$ and let $w_h := I_\Theta^k(u^e)$ be the componentwise parametric interpolation. We then have
$$\min_{v_h \in \mathbf V_{h,\Theta}^k} \|u^e - v_h\|_{A_h^{P_1}}^2 \le \|u^e - w_h\|_{A_h^{P_1}}^2 = a_h(u^e - w_h, u^e - w_h) + s_h(u^e - w_h, u^e - w_h) + k_h(u^e - w_h, u^e - w_h).$$
For the first term we get, using componentwise Lemma 3.1,
$$a_h(u^e - w_h, u^e - w_h) \lesssim \|E_h(u^e - w_h)\|_{L^2(\Gamma_h)}^2 + \|u^e - w_h\|_{L^2(\Gamma_h)}^2 \lesssim \|\nabla(u^e - w_h)\|_{L^2(\Gamma_h)}^2 + \|u^e - w_h\|_{L^2(\Gamma_h)}^2 \lesssim h^{2k} \|u\|_{H^{k+1}(\Gamma)}^2. \tag{5.23}$$
The second term leads to
$$s_h(u^e - w_h, u^e - w_h) = \rho\, \|\nabla(u^e - w_h) n_h\|_{L^2(\Omega_\Theta^\Gamma)}^2 \le \rho\, \|u^e - w_h\|_{H^1(\Omega_\Theta^\Gamma)}^2 \overset{(3.1)}{\lesssim} \rho\, h^{2k} \|u^e\|_{H^{k+1}(\Omega_\Theta^\Gamma)}^2 \overset{(3.3)}{\lesssim} \rho\, h^{2k+1} \|u\|_{H^{k+1}(\Gamma)}^2. \tag{5.24}$$
For the third term we obtain
$$k_h(u^e - w_h, u^e - w_h) = \eta\, \|(u^e - w_h) \cdot \tilde n_h\|_{L^2(\Gamma_h)}^2 \le \eta\, \|u^e - w_h\|_{L^2(\Gamma_h)}^2 \overset{\text{Lemma 3.1}}{\lesssim} \eta\, h^{2(k+1)} \|u\|_{H^{k+1}(\Gamma)}^2.$$
Combining this with (5.23), (5.24) and $\rho \lesssim h^{-1}$ proves the bound for the $\|\cdot\|_{A_h^{P_1}}$-norm. Since
$$a_{T,h}(u^e - w_h, u^e - w_h) \lesssim \|E_{T,h}(u^e - w_h)\|_{L^2(\Gamma_h)}^2 + \|P_h(u^e - w_h)\|_{L^2(\Gamma_h)}^2 \lesssim \|E_h(u^e - w_h)\|_{L^2(\Gamma_h)}^2 + \|u^e - w_h\|_{L^2(\Gamma_h)}^2,$$
we also immediately get the bound for the $\|\cdot\|_{A_h^{P_2}}$-norm. Hence, the result in (5.21) holds. We now derive the result (5.22). Since
$$\|u^e - w_h\|_{A_h^L}^2 = a_h(u^e - w_h, u^e - w_h) + s_h(u^e - w_h, u^e - w_h),$$
we can use the estimates in (5.23) and (5.24). To show the approximation error bound in the $\|\cdot\|_M$-norm we take $\lambda \in H^{k_l+1}(\Gamma)$ and define $\xi_h := I_\Theta^{k_l}(\lambda^e)$. Then we have
$$\min_{\mu_h \in V_{h,\Theta}^{k_l}} \|\lambda^e - \mu_h\|_M \le \|\lambda^e - \xi_h\|_M \lesssim \|\lambda^e - \xi_h\|_{L^2(\Gamma_h)} + \rho^{\frac12}\, \|n_h \cdot \nabla(\lambda^e - \xi_h)\|_{L^2(\Omega_\Theta^\Gamma)}$$
$$\overset{\text{Lemma 3.1}}{\lesssim} h^{k_l+1} \|\lambda\|_{H^{k_l+1}(\Gamma)} + \rho^{\frac12}\, \|\lambda^e - \xi_h\|_{H^1(\Omega_\Theta^\Gamma)} \overset{(3.1),(3.3)}{\lesssim} \big(h^{k_l+1} + \rho^{\frac12} h^{k_l+\frac12}\big)\, \|\lambda\|_{H^{k_l+1}(\Gamma)},$$
which completes the proof.

Note that in (5.21), (5.22) we obtain optimal order approximation errors, provided $\rho \lesssim h^{-1}$ and $\eta \lesssim h^{-2}$.

In this section we present a consistency error analysis. The analysis is rather long and technical. The structure is as follows. In section 5.4.1 we collect a few basic results for vector functions $u \in H^1(\Gamma)^3$ and corresponding extensions $u^e \in H^1(\Gamma_h)^3$. These results are rather straightforward and very similar to known results for scalar surface functions. In section 5.4.2 we derive bounds for basic components of the consistency error that are directly related to the geometry approximation $\Gamma_h \approx \Gamma$. We derive, for example, a bound for $|a_h(v,w) - a(v^l, w^l)|$. A key result is derived in section 5.5, namely a discrete Korn-type inequality. Using these preparations, the consistency bounds for the three methods are derived in sections 5.5.1 and 5.5.2.

5.4.1. Preliminaries. We start with results concerning the transformation of integrals between $\Gamma$ and $\Gamma_h$. Using $\nabla p = P - dH$ we get, for $u \in H^1(\Gamma)$ and $x \in \Gamma_h$,
$$\nabla_{\Gamma_h} u^e(x) = \nabla_{\Gamma_h}(u \circ p)(x) = P_h(x) \nabla p(x) \nabla u(p(x)) = P_h(x)\big(P(x) - d(x) H(x)\big) \nabla u(p(x)) = B^T(x)\, \nabla_\Gamma u(p(x)), \tag{5.25}$$
with $B = B(x) := P(I - dH) P_h$ ($x \in \Gamma_h$). From [10] we have the following lemma: Lemma 5.11.
For $x \in \Gamma_h$ and $B = B(x)$ as above, the map $B|_{\mathrm{range}(P_h(x))}$ is invertible for $h$ small enough, i.e. there is $B^{-1} : \mathrm{range}(P(x)) \to \mathrm{range}(P_h(x))$ such that $B B^{-1} = P$, $B^{-1} B = P_h$, and we have, for $u \in H^1(\Gamma)$ and $x \in \Gamma_h$,
$$\nabla_\Gamma u(p(x)) = P(x) B^{-T}(x)\, \nabla_{\Gamma_h} u^e(x).$$
Furthermore, the following estimates hold:
$$\|B\|_{L^\infty(\Gamma_h)} \lesssim 1, \quad \|P_h B^{-1} P\|_{L^\infty(\Gamma_h)} \lesssim 1, \quad \|P P_h - B\|_{L^\infty(\Gamma_h)} \lesssim h^{k_g+1}, \quad \|P_h P - P_h B^{-1} P\|_{L^\infty(\Gamma_h)} \lesssim h^{k_g+1}.$$
For the surface measures on $\Gamma$ and $\Gamma_h$ we have the identity $d\Gamma = |B|\, d\Gamma_h$, where $|B| = |\det(B)|$, and we have the estimates
$$\|1 - |B|\|_{L^\infty(\Gamma_h)} \lesssim h^{k_g+1}, \quad \||B|\|_{L^\infty(\Gamma_h)} \lesssim 1, \quad \||B|^{-1}\|_{L^\infty(\Gamma_h)} \lesssim 1.$$
Applying Lemma 5.11 yields, for $u \in H^1(\Gamma_h)$,
$$\nabla_\Gamma u^l(p(x)) = P(x) B^{-T}(x)\, \nabla_{\Gamma_h} u(x), \quad x \in \Gamma_h.$$
Similar useful transformation results for vector-valued functions are given in the following corollary.
Corollary 5.12.
For $u \in H^1(\Gamma)^3$ and $v \in H^1(\Gamma_h)^3$ we have
$$(\nabla u\, P)^e = \nabla u^e\, P_h B^{-1} P \quad \text{on } \Gamma_h,$$
$$(\nabla v^l\, P)^e = \nabla v\, P_h B^{-1} P \quad \text{on } \Gamma_h.$$
Proof. For $u \in H^1(\Gamma)^3$ we have, with (5.25) and Lemma 5.11,
$$e_i^T \nabla u^e P_h = (\nabla u_i^e)^T P_h = (\nabla_{\Gamma_h} u_i^e)^T = (B^T \nabla_\Gamma u_i \circ p)^T = \big(B^T (P \nabla u_i) \circ p\big)^T = e_i^T (\nabla u\, P)^e B \quad \text{on } \Gamma_h$$
for $i = 1, 2, 3$. Multiplying by $B^{-1} P$ from the right results in the first equation above. For $v \in H^1(\Gamma_h)^3$ we use similar arguments:
$$e_i^T \nabla v\, P_h = (\nabla v_i)^T P_h = (\nabla_{\Gamma_h} v_i)^T = (B^T \nabla_\Gamma v_i^l \circ p)^T = \big(B^T (P \nabla v_i^l) \circ p\big)^T = e_i^T (\nabla v^l\, P)^e B \quad \text{on } \Gamma_h$$
for $i = 1, 2, 3$. Multiplying by $B^{-1} P$ from the right completes the proof.

For scalar-valued functions $w \in H^1(\Gamma_h)$ the following equivalences are well known (see [3]):
$$\|w\|_{L^2(\Gamma_h)} \sim \|w^l\|_{L^2(\Gamma)}, \quad \|\nabla_{\Gamma_h} w\|_{L^2(\Gamma_h)} \sim \|\nabla_\Gamma w^l\|_{L^2(\Gamma)}.$$
We need similar equivalences for vector-valued functions. These are given in the following lemma.
Lemma 5.13.
For $v \in H^1(\Gamma_h)^3$ we have
$$\|v\|_{L^2(\Gamma_h)} \sim \|v^l\|_{L^2(\Gamma)}, \tag{5.26}$$
$$\|\nabla v\, P_h\|_{L^2(\Gamma_h)} \sim \|\nabla v^l\, P\|_{L^2(\Gamma)}. \tag{5.27}$$
Proof. Let $v \in H^1(\Gamma_h)^3$. We start with the first equivalence. Using the definition of the lifting, i.e. $v^l(p(x)) = v(x)$ for $x \in \Gamma_h$, and the integral transformation rule, with Lemma 5.11 we obtain (5.26). For the second norm equivalence we use Corollary 5.12:
$$\|\nabla v\, P_h\|_{L^2(\Gamma_h)}^2 = \int_{\Gamma_h} \nabla v\, P_h : \nabla v\, P_h\, ds_h = \int_{\Gamma_h} (\nabla v^l P)^e B : (\nabla v^l P)^e B\, ds_h$$
$$= \int_\Gamma \nabla v^l P\, (B \circ p^{-1}) : \nabla v^l P\, (B \circ p^{-1})\, |B|^{-1} \circ p^{-1}\, ds \lesssim \|B \circ p^{-1}\|_{L^\infty(\Gamma)}^2\, \||B|^{-1} \circ p^{-1}\|_{L^\infty(\Gamma)}\, \|\nabla v^l P\|_{L^2(\Gamma)}^2 \lesssim \|\nabla v^l P\|_{L^2(\Gamma)}^2,$$
where $p^{-1}$ is the inverse of $p|_{\Gamma_h}$. The other direction is obtained with similar arguments:
$$\|\nabla v^l P\|_{L^2(\Gamma)}^2 = \int_\Gamma \nabla v^l P : \nabla v^l P\, ds = \int_{\Gamma_h} (\nabla v^l P)^e : (\nabla v^l P)^e\, |B|\, ds_h = \int_{\Gamma_h} \nabla v\, P_h B^{-1} P : \nabla v\, P_h B^{-1} P\, |B|\, ds_h$$
$$\lesssim \|P_h B^{-1} P\|_{L^\infty(\Gamma_h)}^2\, \||B|\|_{L^\infty(\Gamma_h)}\, \|\nabla v\, P_h\|_{L^2(\Gamma_h)}^2 \lesssim \|\nabla v\, P_h\|_{L^2(\Gamma_h)}^2.$$

5.4.2. Geometric errors. In this section we analyze certain parts of the consistency error, which are similar in the three discretizations. For this we introduce further notation. We define, for $v, w \in \mathbf V_{reg,h}$:
$$G_a(v,w) := a_h(v,w) - a(v^l, w^l), \quad G_{a_T}(v,w) := a_{T,h}(v,w) - a_T(v^l, w^l), \quad G_f(w) := (f, w^l)_{L^2(\Gamma)} - (f_h, w)_{L^2(\Gamma_h)}.$$
Let $u = u_T$ be the solution of (C) and $w_h \in \mathbf V_{h,\Theta}^k$. The consistency term corresponding to (P1h) can be written as
$$A_h^{P_1}(u^e, w_h) - (f_h, w_h)_{L^2(\Gamma_h)} = a_h(u^e, w_h) + s_h(u^e, w_h) + k_h(u^e, w_h) - (f_h, w_h)_{L^2(\Gamma_h)} \underbrace{-\, a(u, P w_h^l) + (f, w_h^l)_{L^2(\Gamma)}}_{=0}$$
$$= a_h(u^e, w_h) - a(u, w_h^l) + s_h(u^e, w_h) + k_h(u^e, w_h) + \big(E(u), E((w_h^l \cdot n)\, n)\big)_{L^2(\Gamma)} + (f, w_h^l)_{L^2(\Gamma)} - (f_h, w_h)_{L^2(\Gamma_h)} \tag{5.28}$$
$$= G_a(u^e, w_h) + \big(E(u), E((w_h^l \cdot n)\, n)\big)_{L^2(\Gamma)} + s_h(u^e, w_h) + k_h(u^e, w_h) + G_f(w_h).$$
Similarly, for (P2h) we get
$$A_h^{P_2}(u^e, w_h) - (f_h, w_h)_{L^2(\Gamma_h)} = G_{a_T}(u^e, w_h) + s_h(u^e, w_h) + k_h(u^e, w_h) + G_f(w_h). \tag{5.29}$$
Let $(u, \lambda)$ be the solution of problem (L). With $(w_h, \mu_h) \in \mathbf V_{h,\Theta}^k \times V_{h,\Theta}^{k_l}$ we get
$$\mathcal A_h\big((u^e, \lambda^e),(w_h,\mu_h)\big) - (f_h, w_h)_{L^2(\Gamma_h)} = a_h(u^e, w_h) + b_h(w_h, \lambda^e) + b_h(u^e, \mu_h) + s_h(u^e, w_h) - (f_h, w_h)_{L^2(\Gamma_h)} \tag{5.30}$$
$$\underbrace{+\, (f, w_h^l)_{L^2(\Gamma)} - a(u, w_h^l) - (w_h^l \cdot n, \lambda)_{L^2(\Gamma)}}_{=0}$$
$$= G_a(u^e, w_h) + b_h(w_h, \lambda^e) + b_h(u^e, \mu_h) + s_h(u^e, w_h) - (w_h^l \cdot n, \lambda)_{L^2(\Gamma)} + G_f(w_h).$$
For the derivation of bounds for the geometry errors $G_a(\cdot,\cdot)$, $G_{a_T}(\cdot,\cdot)$ and $G_f(\cdot)$ we use the following lemma. We use the notation
$$E_T(w) := E(Pw) = E(w) - w_N H, \quad \text{for } w \in H^1(\Gamma)^3.$$
Lemma 5.14.
For $v \in H^1(\Gamma_h)^3$ the following bounds hold:
$$\|(\nabla_\Gamma v^l)^e - \nabla_{\Gamma_h} v\|_{L^2(\Gamma_h)} \lesssim h^{k_g} \|v\|_{H^1(\Gamma_h)},$$
$$\|(E(v^l))^e - E_h(v)\|_{L^2(\Gamma_h)} \lesssim h^{k_g} \|v\|_{H^1(\Gamma_h)},$$
$$\|(E_T(v^l))^e - E_{T,h}(v)\|_{L^2(\Gamma_h)} \lesssim h^{k_g} \big( \|v\|_{H^1(\Gamma_h)} + h^{-1} \|v \cdot n_h\|_{L^2(\Gamma_h)} \big). \tag{5.31}$$
Proof. We start with the first inequality. Using Corollary 5.12 we can write
$$(\nabla_\Gamma v^l)^e - \nabla_{\Gamma_h} v = (P \nabla v^l P)^e - P_h \nabla v\, P_h = P \nabla v\, P_h B^{-1} P - P_h \nabla v\, P_h$$
$$= (P - P_h) \nabla v\, P_h B^{-1} P + P_h \nabla v\, P_h (P_h B^{-1} P - P_h P) + P_h \nabla v\, P_h (P - P_h).$$
Hence, with Lemma 5.11 we get
$$\|(\nabla_\Gamma v^l)^e - \nabla_{\Gamma_h} v\|_{L^2(\Gamma_h)} \lesssim \|P - P_h\|_{L^\infty(\Gamma_h)} \|\nabla v\, P_h\|_{L^2(\Gamma_h)} \|P_h B^{-1} P\|_{L^\infty(\Gamma_h)}$$
$$+ \|P_h\|_{L^\infty(\Gamma_h)} \|\nabla v\, P_h\|_{L^2(\Gamma_h)} \|P_h B^{-1} P - P_h P\|_{L^\infty(\Gamma_h)} + \|P_h\|_{L^\infty(\Gamma_h)} \|\nabla v\, P_h\|_{L^2(\Gamma_h)} \|P - P_h\|_{L^\infty(\Gamma_h)} \lesssim h^{k_g} \|\nabla v\, P_h\|_{L^2(\Gamma_h)},$$
which shows the first inequality in (5.31). Combining this with
$$(E(v^l))^e - E_h(v) = \tfrac12 \big( (\nabla_\Gamma v^l)^e + (\nabla_\Gamma^T v^l)^e \big) - \tfrac12 \big( \nabla_{\Gamma_h} v + \nabla_{\Gamma_h}^T v \big) = \tfrac12 \big( (\nabla_\Gamma v^l)^e - \nabla_{\Gamma_h} v \big) + \tfrac12 \big( (\nabla_\Gamma^T v^l)^e - \nabla_{\Gamma_h}^T v \big)$$
we obtain the second inequality in (5.31). For the last inequality in (5.31) we note
$$(E_T(v^l))^e - E_{T,h}(v) = (E(v^l))^e - E_h(v) - \big((v \cdot n)\, H\big)^e + (v \cdot n_h)\, H_h.$$
Applying Lemma 3.2 and inequality (4.2) we obtain
$$\|\big((v \cdot n)\, H\big)^e - (v \cdot n_h)\, H_h\|_{L^2(\Gamma_h)} \lesssim \|(v \cdot (n - n_h))\, H\|_{L^2(\Gamma_h)} + \|(v \cdot n_h)(H - H_h)\|_{L^2(\Gamma_h)}$$
$$\lesssim h^{k_g} \|v\|_{L^2(\Gamma_h)} + h^{k_g - 1} \|v \cdot n_h\|_{L^2(\Gamma_h)} \lesssim h^{k_g} \big( \|v\|_{L^2(\Gamma_h)} + h^{-1} \|v \cdot n_h\|_{L^2(\Gamma_h)} \big).$$
Combining this with the second inequality in (5.31) we obtain the third one.
Lemma 5.15.
Let $f_h$ be an approximation of $f$ such that $\||B| f^e - f_h\|_{L^2(\Gamma_h)} \lesssim h^{k_g+1} \|f\|_{L^2(\Gamma)}$ holds. For $v, w \in H^1(\Gamma_h)^3$ we then have
$$|G_{a_T}(v,w)| \lesssim h^{k_g} \big( \|v\|_{H^1(\Gamma_h)} + h^{-1} \|v \cdot n_h\|_{L^2(\Gamma_h)} \big) \big( \|w\|_{H^1(\Gamma_h)} + h^{-1} \|w \cdot n_h\|_{L^2(\Gamma_h)} \big),$$
$$|G_a(v,w)| \lesssim h^{k_g} \|v\|_{H^1(\Gamma_h)} \|w\|_{H^1(\Gamma_h)},$$
$$|G_f(w)| \lesssim h^{k_g+1} \|f\|_{L^2(\Gamma)} \|w\|_{L^2(\Gamma_h)}.$$
Proof. We start with the estimate for the geometric error $G_{a_T}(\cdot,\cdot)$. Using the definitions we can write
$$G_{a_T}(v,w) = a_{T,h}(v,w) - a_T(v^l, w^l) = (E_{T,h}(v), E_{T,h}(w))_{L^2(\Gamma_h)} + (P_h v, P_h w)_{L^2(\Gamma_h)} - (E_T(v^l), E_T(w^l))_{L^2(\Gamma)} - (P v^l, P w^l)_{L^2(\Gamma)}. \tag{5.32}$$
Combining the second and fourth term we get
$$(P_h v, P_h w)_{L^2(\Gamma_h)} - (P v^l, P w^l)_{L^2(\Gamma)} = (P_h v, P_h w)_{L^2(\Gamma_h)} - (|B| P v, P w)_{L^2(\Gamma_h)}.$$
Using an obvious splitting, $\|P - P_h\|_{L^\infty(\Gamma_h)} \lesssim h^{k_g}$ and $\|1 - |B|\|_{L^\infty(\Gamma_h)} \lesssim h^{k_g+1}$, we obtain a bound $\lesssim h^{k_g} \|v\|_{L^2(\Gamma_h)} \|w\|_{L^2(\Gamma_h)}$. For the first and third term of the right-hand side of equation (5.32) we have
$$(E_{T,h}(v), E_{T,h}(w))_{L^2(\Gamma_h)} - (E_T(v^l), E_T(w^l))_{L^2(\Gamma)} = (E_{T,h}(v), E_{T,h}(w))_{L^2(\Gamma_h)} - \big(|B|\, (E_T(v^l))^e, (E_T(w^l))^e\big)_{L^2(\Gamma_h)}.$$
Using a similar splitting, the third inequality in Lemma 5.14, $\|(E_T(v^l))^e\|_{L^2(\Gamma_h)} \lesssim \|v\|_{H^1(\Gamma_h)}$, $\|(E_T(w^l))^e\|_{L^2(\Gamma_h)} \lesssim \|w\|_{H^1(\Gamma_h)}$, and combining this with the result above, we obtain the bound for $G_{a_T}(\cdot,\cdot)$. For
$$G_a(v,w) = a_h(v,w) - a(v^l,w^l) = (E_h(v), E_h(w))_{L^2(\Gamma_h)} + (v,w)_{L^2(\Gamma_h)} - (E(v^l), E(w^l))_{L^2(\Gamma)} - (v^l, w^l)_{L^2(\Gamma)}$$
very similar arguments can be applied. Finally, the bound for $G_f(\cdot)$ follows from
$$|G_f(w)| = |(f, w^l)_{L^2(\Gamma)} - (f_h, w)_{L^2(\Gamma_h)}| = \big|(|B| f^e, w)_{L^2(\Gamma_h)} - (f_h, w)_{L^2(\Gamma_h)}\big| \le \||B| f^e - f_h\|_{L^2(\Gamma_h)} \|w\|_{L^2(\Gamma_h)} \lesssim h^{k_g+1} \|f\|_{L^2(\Gamma)} \|w\|_{L^2(\Gamma_h)}.$$
Remark 5.1.
If in Lemma 5.15 we take $v = u^e$, $w = w^e$ with smooth functions $u, w$ on $\Gamma$, it may be possible to improve the bounds. This can be relevant in the derivation of optimal $L^2$-norm error bounds (which we do not consider in this paper).

5.5. A discrete Korn-type inequality. In the geometry error bounds derived in Lemma 5.15 we obtain natural terms of the form $\|w_h\|_{H^1(\Gamma_h)}$, with $w_h \in \mathbf V_{h,\Theta}^k$. These have to be controlled in terms of the discrete energy norms, cf. the Strang lemmas. A key tool for quantifying this control is a discrete Korn-type inequality, which is derived in this section. This result can be understood as an analogue of the so-called discrete $H^1$-type bounds derived for higher order surface finite element spaces in [10]. Lemma 5.16.
For $h$ sufficiently small the following holds:
$$\|v_h\|_{H^1(\Gamma_h)} \lesssim \|E_{T,h}(v_h)\|_{L^2(\Gamma_h)} + \|P_h v_h\|_{L^2(\Gamma_h)} + h^{-1} \|v_h \cdot n_h\|_{L^2(\Gamma_h)} + h^{-\frac12} \|\nabla v_h n_h\|_{L^2(\Omega_\Theta^\Gamma)} \quad \text{for all } v_h \in \mathbf V_{h,\Theta}^k. \tag{5.33}$$
Proof. From Lemma 5.13 it follows that
$$\|v_h\|_{H^1(\Gamma_h)} \lesssim \|v_h^l\|_{H^1(\Gamma)} \lesssim \|P v_h^l\|_{H^1(\Gamma)} + \|v_h^l \cdot n\|_{H^1(\Gamma)}. \tag{5.34}$$
The term with the tangential part, $\|P v_h^l\|_{H^1(\Gamma)}$, can be bounded using the surface Korn inequality (Lemma 2.1) and Lemma 5.14:
$$\|P v_h^l\|_{H^1(\Gamma)} \lesssim \|E(P v_h^l)\|_{L^2(\Gamma)} + \|P v_h^l\|_{L^2(\Gamma)} = \|E_T(v_h^l)\|_{L^2(\Gamma)} + \|P v_h^l\|_{L^2(\Gamma)} \lesssim \|(E_T(v_h^l))^e\|_{L^2(\Gamma_h)} + \|P v_h\|_{L^2(\Gamma_h)}$$
$$\le \|E_{T,h}(v_h)\|_{L^2(\Gamma_h)} + \|(E_T(v_h^l))^e - E_{T,h}(v_h)\|_{L^2(\Gamma_h)} + \|P v_h\|_{L^2(\Gamma_h)}$$
$$\lesssim \|E_{T,h}(v_h)\|_{L^2(\Gamma_h)} + h^{k_g} \big( \|v_h\|_{H^1(\Gamma_h)} + h^{-1} \|v_h \cdot n_h\|_{L^2(\Gamma_h)} \big) + \|P v_h\|_{L^2(\Gamma_h)}.$$
For $h$ sufficiently small the term $h^{k_g} \|v_h\|_{H^1(\Gamma_h)}$ can be moved to the left-hand side in (5.34). Hence, for the term $\|P v_h^l\|_{H^1(\Gamma)}$ we have a desired bound as in (5.33). We now treat the normal component $\|v_h^l \cdot n\|_{H^1(\Gamma)}$. Note that
$$\|v_h^l \cdot n\|_{H^1(\Gamma)} \lesssim \|v_h^l \cdot n\|_{L^2(\Gamma)} + \|\nabla_\Gamma(v_h^l \cdot n)\|_{L^2(\Gamma)} \lesssim \|v_h\|_{L^2(\Gamma_h)} + \|\nabla_{\Gamma_h}(v_h \cdot n)\|_{L^2(\Gamma_h)} \lesssim \|v_h\|_{L^2(\Gamma_h)} + \|P_h (\nabla v_h)^T n\|_{L^2(\Gamma_h)}. \tag{5.35}$$
We introduce the linear parametric interpolation of $n$, $\hat n_h := I_\Theta^1 n$. For this interpolation we have
$$\|\nabla \hat n_h\|_{L^\infty(\Omega_\Theta^\Gamma)} \lesssim 1, \quad \|\hat n_h - n\|_{L^\infty(\Omega_\Theta^\Gamma)} \lesssim h, \quad \|\hat n_h - n_h\|_{L^\infty(\Omega_\Theta^\Gamma)} \lesssim h.$$
Note that $v_h \cdot \hat n_h \in V_{h,\Theta}^{k+1}$. We obtain:
$$\|P_h (\nabla v_h)^T n\|_{L^2(\Gamma_h)} \lesssim \|(\nabla v_h)^T \hat n_h\|_{L^2(\Gamma_h)} + h \|\nabla v_h\|_{L^2(\Gamma_h)} \lesssim \|\nabla(v_h \cdot \hat n_h)\|_{L^2(\Gamma_h)} + \|v_h\|_{L^2(\Gamma_h)} + h \|\nabla v_h\|_{L^2(\Gamma_h)}.$$
Using this in (5.35) and applying the estimate (5.6) yields
$$\|v_h^l \cdot n\|_{H^1(\Gamma)} \lesssim \|v_h\|_{L^2(\Gamma_h)} + h^{-\frac12} \|\nabla v_h n_h\|_{L^2(\Omega_\Theta^\Gamma)} + \|\nabla(v_h \cdot \hat n_h)\|_{L^2(\Gamma_h)}. \tag{5.36}$$
Using (5.5) and (5.3) we get
$$\|\nabla(v_h \cdot \hat n_h)\|_{L^2(\Gamma_h)} \lesssim h^{-1} \|v_h \cdot \hat n_h\|_{L^2(\Gamma_h)} + h^{-\frac12} \|n_h \cdot \nabla(v_h \cdot \hat n_h)\|_{L^2(\Omega_\Theta^\Gamma)}$$
$$\lesssim \|v_h\|_{L^2(\Gamma_h)} + h^{-1} \|v_h \cdot n_h\|_{L^2(\Gamma_h)} + h^{-\frac12} \|\nabla v_h \hat n_h\|_{L^2(\Omega_\Theta^\Gamma)} + h^{-\frac12} \|v_h\|_{L^2(\Omega_\Theta^\Gamma)}$$
$$\lesssim \|v_h\|_{L^2(\Gamma_h)} + h^{-1} \|v_h \cdot n_h\|_{L^2(\Gamma_h)} + h^{-\frac12} \|\nabla v_h n_h\|_{L^2(\Omega_\Theta^\Gamma)} + h^{-\frac12} \|v_h\|_{L^2(\Omega_\Theta^\Gamma)}$$
$$\lesssim \|v_h\|_{L^2(\Gamma_h)} + h^{-1} \|v_h \cdot n_h\|_{L^2(\Gamma_h)} + h^{-\frac12} \|\nabla v_h n_h\|_{L^2(\Omega_\Theta^\Gamma)}.$$
From this and (5.36) we get
$$\|v_h^l \cdot n\|_{H^1(\Gamma)} \lesssim \|v_h\|_{L^2(\Gamma_h)} + h^{-1} \|v_h \cdot n_h\|_{L^2(\Gamma_h)} + h^{-\frac12} \|\nabla v_h n_h\|_{L^2(\Omega_\Theta^\Gamma)}$$
$$\lesssim \|P_h v_h\|_{L^2(\Gamma_h)} + h^{-1} \|v_h \cdot n_h\|_{L^2(\Gamma_h)} + h^{-\frac12} \|\nabla v_h n_h\|_{L^2(\Omega_\Theta^\Gamma)};$$
hence, also for the normal part we have a bound as in (5.33), which completes the proof. Corollary 5.17.
For $h$ sufficiently small and for arbitrary $v_h \in \mathbf V_{h,\Theta}^k$ we have
$$\|v_h\|_{H^1(\Gamma_h)} \lesssim \|E_{T,h}(v_h)\|_{L^2(\Gamma_h)} + \|P_h v_h\|_{L^2(\Gamma_h)} + h^{-1} \|v_h \cdot \tilde n_h\|_{L^2(\Gamma_h)} + h^{-\frac12} \|\nabla v_h n_h\|_{L^2(\Omega_\Theta^\Gamma)}, \tag{5.37}$$
$$\|v_h\|_{H^1(\Gamma_h)} \lesssim \|E_h(v_h)\|_{L^2(\Gamma_h)} + \|v_h\|_{L^2(\Gamma_h)} + h^{-1} \|v_h \cdot \tilde n_h\|_{L^2(\Gamma_h)} + h^{-\frac12} \|\nabla v_h n_h\|_{L^2(\Omega_\Theta^\Gamma)}. \tag{5.38}$$
Proof. Note that $\|v_h \cdot n_h\|_{L^2(\Gamma_h)} \lesssim \|v_h \cdot \tilde n_h\|_{L^2(\Gamma_h)} + h^{k_g} \|v_h\|_{L^2(\Gamma_h)}$. Hence, the result (5.37) is a consequence of (5.33). Using the definitions of $E_{T,h}(\cdot)$, $E_h(\cdot)$ and a triangle inequality, the result (5.38) immediately follows from (5.37). Remark 5.2.
From the proof one can see that in the estimate (5.33) the part $\|E_{T,h}(v_h)\|_{L^2(\Gamma_h)} + \|P_h v_h\|_{L^2(\Gamma_h)}$ is the key term to bound the $H^1(\Gamma_h)$-norm of the tangential component of the vector function $v_h$, and the part $h^{-1} \|v_h \cdot n_h\|_{L^2(\Gamma_h)} + h^{-\frac12} \|\nabla v_h n_h\|_{L^2(\Omega_\Theta^\Gamma)}$ is essential to bound the normal component.

5.5.1. Consistency error bounds for (P1h) and (P2h). Based on the results obtained in the previous sections the derivation of satisfactory consistency error bounds is straightforward. In this section we derive these bounds for the two penalty methods. Using the definitions of the bilinear forms $A_h^{P_i}(\cdot,\cdot)$, $i = 1, 2$, we obtain from Corollary 5.17, for $\rho \sim h^{-1}$ and $\eta \gtrsim h^{-2}$:
$$\|v_h\|_{H^1(\Gamma_h)}^2 \lesssim A_h^{P_i}(v_h, v_h) \quad \text{for all } v_h \in \mathbf V_{h,\Theta}^k. \tag{5.39}$$
Lemma 5.18.
Let $u = u_T \in \mathbf V_T$ be the unique solution of problem (C). We assume that the data error satisfies $\||B| f^e - f_h\|_{L^2(\Gamma_h)} \lesssim h^{k_g+1} \|f\|_{L^2(\Gamma)}$ and $\rho \sim h^{-1}$, $\eta \gtrsim h^{-2}$. Then the following bounds hold:
$$\sup_{w_h \in \mathbf V_{h,\Theta}^k} \frac{|A_h^{P_1}(u^e, w_h) - (f_h, w_h)_{L^2(\Gamma_h)}|}{\|w_h\|_{A_h^{P_1}}} \lesssim \big( h^{k_g} + \eta^{\frac12} h^{k_p} + \eta^{-\frac12} \big) \|u\|_{H^2(\Gamma)}, \tag{5.40}$$
$$\sup_{w_h \in \mathbf V_{h,\Theta}^k} \frac{|A_h^{P_2}(u^e, w_h) - (f_h, w_h)_{L^2(\Gamma_h)}|}{\|w_h\|_{A_h^{P_2}}} \lesssim \big( h^{k_g} + \eta^{\frac12} h^{k_p} \big) \|u\|_{H^2(\Gamma)}. \tag{5.41}$$
Proof. We start with (5.41). Take $w_h \in \mathbf V_{h,\Theta}^k$. We have, cf. (5.29),
$$A_h^{P_2}(u^e, w_h) - (f_h, w_h)_{L^2(\Gamma_h)} = G_{a_T}(u^e, w_h) + s_h(u^e, w_h) + k_h(u^e, w_h) + G_f(w_h).$$
Using Lemma 5.15, (5.39) and $\|u^e \cdot n_h\|_{L^2(\Gamma_h)} = \|u^e \cdot (n_h - n)\|_{L^2(\Gamma_h)} \lesssim h^{k_g} \|u^e\|_{L^2(\Gamma_h)}$, we get
$$|G_{a_T}(u^e, w_h)| \lesssim h^{k_g} \big( \|u^e\|_{H^1(\Gamma_h)} + h^{-1} \|u^e \cdot n_h\|_{L^2(\Gamma_h)} \big) \big( \|w_h\|_{H^1(\Gamma_h)} + h^{-1} \|w_h \cdot n_h\|_{L^2(\Gamma_h)} \big)$$
$$\lesssim h^{k_g} \|u\|_{H^2(\Gamma)} \big( \|w_h\|_{H^1(\Gamma_h)} + h^{-1} \|w_h \cdot \tilde n_h\|_{L^2(\Gamma_h)} \big) \lesssim h^{k_g} \|u\|_{H^2(\Gamma)} \|w_h\|_{A_h^{P_2}}. \tag{5.42}$$
We also have
$$|G_f(w_h)| \lesssim h^{k_g+1} \|f\|_{L^2(\Gamma)} \|w_h\|_{L^2(\Gamma_h)} \lesssim h^{k_g+1} \|u\|_{H^2(\Gamma)} \|w_h\|_{A_h^{P_2}}. \tag{5.43}$$
Using inequality (3.3) we obtain
$$|s_h(u^e, w_h)| \lesssim h^{-1} \|\nabla u^e n_h\|_{L^2(\Omega_\Theta^\Gamma)} \|\nabla w_h n_h\|_{L^2(\Omega_\Theta^\Gamma)} = h^{-1} \|\nabla u^e (n_h - n)\|_{L^2(\Omega_\Theta^\Gamma)} \|\nabla w_h n_h\|_{L^2(\Omega_\Theta^\Gamma)}$$
$$\lesssim h^{k_g - 1} \|\nabla u^e\|_{L^2(\Omega_\Theta^\Gamma)} \|\nabla w_h n_h\|_{L^2(\Omega_\Theta^\Gamma)} \lesssim h^{k_g - \frac12} \|u\|_{H^1(\Gamma)} \|\nabla w_h n_h\|_{L^2(\Omega_\Theta^\Gamma)} \lesssim h^{k_g} \|u\|_{H^2(\Gamma)} \|w_h\|_{A_h^{P_2}}. \tag{5.44}$$
The penalty term can be estimated as follows:
$$|k_h(u^e, w_h)| \lesssim \eta\, \|u^e \cdot \tilde n_h\|_{L^2(\Gamma_h)} \|w_h \cdot \tilde n_h\|_{L^2(\Gamma_h)} \lesssim \eta^{\frac12} \|u^e \cdot (\tilde n_h - n)\|_{L^2(\Gamma_h)} \|w_h\|_{A_h^{P_2}} \lesssim \eta^{\frac12} h^{k_p} \|u\|_{L^2(\Gamma)} \|w_h\|_{A_h^{P_2}}.$$
Combining these estimates completes the proof for (5.41). Next we show (5.40). Recall that, cf. (5.28),
$$A_h^{P_1}(u^e, w_h) - (f_h, w_h)_{L^2(\Gamma_h)} = G_a(u^e, w_h) + \big(E(u), E((w_h^l \cdot n)\, n)\big)_{L^2(\Gamma)} + s_h(u^e, w_h) + k_h(u^e, w_h) + G_f(w_h).$$
The terms $G_a(\cdot,\cdot)$, $s_h(\cdot,\cdot)$, $k_h(\cdot,\cdot)$ and $G_f(\cdot)$ can be estimated as above. We treat the remaining term. Note that (cf. (2.4)) $E((w_h^l \cdot n)\, n) = (w_h^l \cdot n)\, H$. Using the Lemmas 5.11 and 5.13 we get
$$|\big(E(u), (w_h^l \cdot n)\, H\big)_{L^2(\Gamma)}| = |\big(|B|\, (E(u))^e, (w_h \cdot n)\, H\big)_{L^2(\Gamma_h)}| \lesssim \|u^e\|_{H^1(\Gamma_h)} \|w_h \cdot n\|_{L^2(\Gamma_h)} \lesssim \|u\|_{H^1(\Gamma)} \|w_h \cdot n\|_{L^2(\Gamma_h)}$$
$$\lesssim \|u\|_{H^1(\Gamma)} \big( \|w_h \cdot (n - \tilde n_h)\|_{L^2(\Gamma_h)} + \|w_h \cdot \tilde n_h\|_{L^2(\Gamma_h)} \big) \lesssim \|u\|_{H^1(\Gamma)} \big( h^{k_p} \|w_h\|_{L^2(\Gamma_h)} + \|w_h \cdot \tilde n_h\|_{L^2(\Gamma_h)} \big)$$
$$\lesssim \big( h^{k_p} + \eta^{-\frac12} \big) \|u\|_{H^1(\Gamma)} \|w_h\|_{A_h^{P_1}}. \tag{5.45}$$
Since $k_p \ge k_g$ we get (5.40).

In Lemma 5.18, for the stabilization and penalty parameters we restrict to $\rho \sim h^{-1}$ and $\eta \gtrsim h^{-2}$. For these parameter values we have the estimate (5.39), which is used at several places in the proof.

5.5.2. Consistency error bounds for (Lh). In this section we derive bounds for the Lagrange multiplier method. For this method we do not have an analog of (5.39) of the form $\|v_h\|_{H^1(\Gamma_h)}^2 \lesssim A_h^L(v_h, v_h)$ for all $v_h \in \mathbf V_{h,\Theta}^k$. Such an estimate is problematic, because the term $h^{-1} \|v_h \cdot n_h\|_{L^2(\Gamma_h)}$ that occurs in the discrete Korn-type inequality (5.33) cannot be controlled by the bilinear form $A_h^L(\cdot,\cdot)$. Instead we only have the (weaker) bound
$$\|v_h\|_{H^1(\Gamma_h)}^2 \lesssim h^{-2} A_h^L(v_h, v_h) \quad \text{for all } v_h \in \mathbf V_{h,\Theta}^k, \tag{5.46}$$
which follows from (5.6) and the definition of $A_h^L(\cdot,\cdot)$, cf. Remark 6.1. Lemma 5.19.
Let $(u, \lambda) \in \mathbf V^* \times L^2(\Gamma)$ be the unique solution of problem (L). We further assume that the data error satisfies $\||B| f^e - f_h\|_{L^2(\Gamma_h)} \lesssim h^{k_g+1} \|f\|_{L^2(\Gamma)}$ and that Assumption 5.1 holds. Then we obtain the following bound:
$$\sup_{(w_h,\mu_h) \in \mathbf V_{h,\Theta}^k \times V_{h,\Theta}^{k_l}} \frac{|\mathcal A_h\big((u^e, \lambda^e),(w_h,\mu_h)\big) - (f_h, w_h)_{L^2(\Gamma_h)}|}{\|w_h\|_{A_h^L} + \|\mu_h\|_M} \lesssim h^{k_g - 1} \|u\|_{H^2(\Gamma)} + h^{k_g} \|\lambda\|_{H^1(\Gamma)}.$$
Proof. Take $(w_h, \mu_h) \in \mathbf V_{h,\Theta}^k \times V_{h,\Theta}^{k_l}$. Using (5.30) we obtain
$$\mathcal A_h\big((u^e, \lambda^e),(w_h,\mu_h)\big) - (f_h, w_h)_{L^2(\Gamma_h)} = G_a(u^e, w_h) + b_h(w_h, \lambda^e) + b_h(u^e, \mu_h) + s_h(u^e, w_h) - (w_h^l \cdot n, \lambda)_{L^2(\Gamma)} + G_f(w_h)$$
$$= \underbrace{G_a(u^e, w_h)}_{(1)} + \underbrace{(w_h \cdot n_h, \lambda^e)_{L^2(\Gamma_h)} - (w_h^l \cdot n, \lambda)_{L^2(\Gamma)}}_{(2)} + \underbrace{(u^e \cdot n_h, \mu_h)_{L^2(\Gamma_h)}}_{(3)} + \underbrace{s_h(u^e, w_h)}_{(4)} + \underbrace{\tilde s_h(w_h, \lambda^e)}_{(5)} + \underbrace{\tilde s_h(u^e, \mu_h)}_{(6)} + \underbrace{G_f(w_h)}_{(7)}.$$
We derive bounds for these seven terms. We start with term (1). Applying Lemma 5.15 and (5.46) we get
$$|G_a(u^e, w_h)| \lesssim h^{k_g} \|u^e\|_{H^1(\Gamma_h)} \|w_h\|_{H^1(\Gamma_h)} \lesssim h^{k_g - 1} \|u\|_{H^2(\Gamma)} \|w_h\|_{A_h^L}.$$
With Lemma 5.11 we obtain for term (2)
$$|(w_h \cdot n_h, \lambda^e)_{L^2(\Gamma_h)} - (w_h^l \cdot n, \lambda)_{L^2(\Gamma)}| = |(w_h \cdot n_h, \lambda^e)_{L^2(\Gamma_h)} - (|B|\, (w_h \cdot n), \lambda^e)_{L^2(\Gamma_h)}|$$
$$= |(w_h \cdot (n_h - n), \lambda^e)_{L^2(\Gamma_h)} - ((|B| - 1)(w_h \cdot n), \lambda^e)_{L^2(\Gamma_h)}| \lesssim h^{k_g} \|\lambda\|_{L^2(\Gamma)} \|w_h\|_{A_h^L}.$$
For term (3) we have
$$|(u^e \cdot n_h, \mu_h)_{L^2(\Gamma_h)}| = |(u^e \cdot (n_h - n), \mu_h)_{L^2(\Gamma_h)}| \le \|n_h - n\|_{L^\infty(\Gamma_h)} \|u^e\|_{L^2(\Gamma_h)} \|\mu_h\|_{L^2(\Gamma_h)} \lesssim h^{k_g} \|u\|_{L^2(\Gamma)} \|\mu_h\|_M.$$
The terms (4) and (7) can be estimated as in (5.44) and (5.43):
$$|s_h(u^e, w_h)| \lesssim h^{k_g} \|u\|_{H^2(\Gamma)} \|w_h\|_{A_h^L}, \quad |G_f(w_h)| \lesssim h^{k_g+1} \|u\|_{H^2(\Gamma)} \|w_h\|_{A_h^L}.$$
Finally, for the terms (5), (6) we can apply arguments as in (5.44), resulting in
$$|\tilde s_h(w_h, \lambda^e)| \lesssim h^{k_g} \|\lambda\|_{H^1(\Gamma)} \|w_h\|_{A_h^L}, \quad |\tilde s_h(u^e, \mu_h)| \lesssim h^{k_g} \|u\|_{H^2(\Gamma)} \|\mu_h\|_M.$$
Combining the bounds for these seven terms completes the proof.

Note that, compared to the consistency error bounds for the penalty methods in Lemma 5.18, for the Lagrange multiplier method we only have $h^{k_g - 1} \|u\|_{H^2(\Gamma)}$ (instead of $h^{k_g} \|u\|_{H^2(\Gamma)}$). The loss of one power of $h$ is caused by the estimate (5.46).

We combine the Strang lemmas (Theorems 5.8 and 5.9) and the bounds for the approximation error and the consistency error to obtain bounds for the discretization error in the energy norms. We first consider the inconsistent penalty formulation (P1h).
Theorem 5.20.
Let $u \in \mathbf V$ and $u_h \in \mathbf V_{h,\Theta}^k$ be the solution of (C) and of (P1h), respectively. We assume that the data error satisfies $\||B| f^e - f_h\|_{L^2(\Gamma_h)} \lesssim h^{k_g+1} \|f\|_{L^2(\Gamma)}$ and $\rho \sim h^{-1}$, $\eta \gtrsim h^{-2}$. Then the following bound holds:
$$\|u^e - u_h\|_{A_h^{P_1}} \lesssim \big( h^k + \eta^{\frac12} h^{k+1} \big) \|u\|_{H^{k+1}(\Gamma)} + \big( h^{k_g} + \eta^{\frac12} h^{k_p} + \eta^{-\frac12} \big) \|u\|_{H^2(\Gamma)}. \tag{5.47}$$
Remark 5.3.
We discuss this error bound. For linear finite elements, i.e. $k = 1$, $k_g = 1$ (linear geometry approximation), $k_p = 2$ (higher order normal approximation in the penalty term) and $\eta \sim h^{-2}$ we obtain an optimal order error bound. However, for higher order finite elements, i.e. $k \ge 2$, we are not able to choose the other parameters ($k_g$, $k_p$, $\eta$) such that we have an optimal order error bound. If we balance the terms $\eta^{\frac12} h^{k+1}$ and $\eta^{-\frac12}$, this yields $\eta \sim h^{-(k+1)}$. Using this parameter choice and $k_g = k$ (isoparametric case), $k_p = k + 2$ (higher order normal approximation in the penalty term), we obtain a (suboptimal) error bound of the order $h^{\frac12(k+1)}$. This suboptimal result is due to the factor $\eta^{-\frac12}$ in the error bound, which is caused (only) by the estimate for the inconsistency term $\big(E(u), (w_h^l \cdot n)\, H\big)_{L^2(\Gamma)}$ in (5.45).

Next we consider the consistent penalty formulation (P2h). Theorem 5.21.
Let $u \in \mathbf V$ and $u_h \in \mathbf V_{h,\Theta}^k$ be the solution of (C) and of (P2h), respectively. We assume that the data error satisfies $\||B| f^e - f_h\|_{L^2(\Gamma_h)} \lesssim h^{k_g+1} \|f\|_{L^2(\Gamma)}$ and $\rho \sim h^{-1}$, $\eta \gtrsim h^{-2}$. Then the following bound holds:
$$\|u^e - u_h\|_{A_h^{P_2}} \lesssim \big( h^k + \eta^{\frac12} h^{k+1} \big) \|u\|_{H^{k+1}(\Gamma)} + \big( h^{k_g} + \eta^{\frac12} h^{k_p} \big) \|u\|_{H^2(\Gamma)}. \tag{5.48}$$
Remark 5.4. Note that the bound in (5.48) is the same as in (5.47), except for the term $\eta^{-\frac12}$ that occurs in (5.47) due to the inconsistency of the method (P1h). In view of the factor $\eta^{\frac12} h^{k+1}$ we take $\eta \sim h^{-2}$. Based on the consistency error term we take $k_g = k$ (isoparametric case) and $k_p = k + 1$ (higher order normal approximation in the penalty term). This then yields an optimal order error bound.

The same estimates as in (5.47) and (5.48) also hold with the energy norm $\|\cdot\|_{A_h^{P_i}}$ replaced by the $H^1(\Gamma_h)$-norm: Corollary 5.22.
Let $u \in \mathbf V$, $u_h \in \mathbf V_{h,\Theta}^k$ and $\tilde u_h \in \mathbf V_{h,\Theta}^k$ be the solutions of (C), (P1h) and (P2h), respectively. The following discretization error bounds hold:
$$\|u^e - u_h\|_{H^1(\Gamma_h)} \lesssim \big( h^k + \eta^{\frac12} h^{k+1} \big) \|u\|_{H^{k+1}(\Gamma)} + \big( h^{k_g} + \eta^{\frac12} h^{k_p} + \eta^{-\frac12} \big) \|u\|_{H^2(\Gamma)},$$
$$\|u^e - \tilde u_h\|_{H^1(\Gamma_h)} \lesssim \big( h^k + \eta^{\frac12} h^{k+1} \big) \|u\|_{H^{k+1}(\Gamma)} + \big( h^{k_g} + \eta^{\frac12} h^{k_p} \big) \|u\|_{H^2(\Gamma)}.$$
Proof. We show the first bound; the second one can be shown analogously. Using Lemma 3.1 and inequality (5.39) we get
$$\|u^e - u_h\|_{H^1(\Gamma_h)} \le \|u^e - I_\Theta^k(u^e)\|_{H^1(\Gamma_h)} + \|I_\Theta^k(u^e) - u_h\|_{H^1(\Gamma_h)} \lesssim h^k \|u\|_{H^{k+1}(\Gamma)} + \|I_\Theta^k(u^e) - u_h\|_{A_h^{P_1}}.$$
Since $\|I_\Theta^k(u^e) - u_h\|_{A_h^{P_1}} \le \|I_\Theta^k(u^e) - u^e\|_{A_h^{P_1}} + \|u^e - u_h\|_{A_h^{P_1}}$, we get the desired result using Lemma 5.10 and Theorem 5.20.

Finally we consider the Lagrange multiplier formulation. Theorem 5.23.
Let $(u, \lambda) \in V_* \times L^2(\Gamma)$ and $(u_h, \lambda_h) \in V_{h,\Theta}^k \times V_{h,\Theta}^{k_l}$ be the solutions of (L) and of (Lh), respectively. We assume that the data error satisfies $\| |B| f^e - f_h \|_{L^2(\Gamma_h)} \lesssim h^{k_g+1} \|f\|_{L^2(\Gamma)}$ and that Assumption 5.1 holds. Then we obtain the following error bound:
$$\|u^e - u_h\|_{A_h^L} + \|\lambda^e - \lambda_h\|_M \lesssim h^k \|u\|_{H^{k+1}(\Gamma)} + (h^{k_l+1} + \rho^{1/2} h^{k_l+1/2}) \|\lambda\|_{H^{k_l+1}(\Gamma)} + h^{k_g-1} \|u\|_{H^2(\Gamma)} + h^{k_g} \|\lambda\|_{H^1(\Gamma)}.$$
Remark 5.5.
In the case of isoparametric finite elements, i.e. $k = k_g$, we do not get an optimal order error bound. For the case of superparametric finite elements, i.e. $k_g = k+1$, we distinguish two cases. First, for $k_l = k$ (same degree finite elements for the Lagrange multiplier as for the primal variable) we can take any $\rho = c_\alpha h^{-\alpha}$, $\alpha \in [0, 1]$, with $c_\alpha$ as in Corollary 5.6. For $k_l = k-1$ ($k \ge 2$) we restrict to $\rho = c_\alpha h$, with $c_\alpha$ as in Corollary 5.6. In both cases we then obtain an optimal order error bound.
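The two admissible choices in Remark 5.5 can be checked term by term in the bound of Theorem 5.23. A short verification (with $k_g = k+1$, so that $h^{k_g-1} = h^k$ and $h^{k_g} = h^{k+1}$):

```latex
% case k_l = k, rho = c_alpha h^{-alpha}, alpha in [0,1]:
h^{k_l+1} + \rho^{1/2} h^{k_l+1/2}
  = h^{k+1} + c_\alpha^{1/2}\, h^{k+(1-\alpha)/2}
  \lesssim h^{k} \quad (\text{since } \alpha \le 1),
% case k_l = k-1, rho = c_alpha h:
h^{k_l+1} + \rho^{1/2} h^{k_l+1/2}
  = h^{k} + c_\alpha^{1/2}\, h^{k} \sim h^{k}.
```

In both cases the Lagrange multiplier terms do not reduce the optimal $O(h^k)$ order of the bound.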
6. Numerical experiments.
In this section we present results of a few numerical experiments. We implemented the different methods in Netgen/NGSolve with ngsxfem [1, 13]. For $\Gamma$ we take the unit sphere, which is characterized as the zero level of the distance function $\phi(x) = \sqrt{x_1^2 + x_2^2 + x_3^2} - 1$, $x = (x_1, x_2, x_3)^T$. The surface is embedded in the domain $\Omega = [-1.5, 1.5]^3$. We start with an unstructured tetrahedral Netgen mesh with $h_{\max} = 0.5$. As exact solution we take
$$u^*(x) = P(x) \left( -\frac{x_2^2}{x_1^2+x_2^2+x_3^2},\ \frac{x_1}{\sqrt{x_1^2+x_2^2+x_3^2}},\ \frac{x_3}{\sqrt{x_1^2+x_2^2+x_3^2}} \right)^T.$$
The solution is tangential, i.e. $Pu^* = u^*$, and constant in normal direction, i.e. $u^* = (u^*)^e$. The right-hand side $f$ is computed according to equation (2.1).
We first consider the penalty formulations (P1h) and (P2h). The normal approximation $\tilde n_h$ used in the penalty term is computed as follows. We interpolate the exact level set function $\phi$ in the finite element space $V_{h,\Theta}^{k_p}$, which we denote by $\tilde\phi_h$, and then set $\tilde n_h := \nabla\tilde\phi_h / \|\nabla\tilde\phi_h\|$. For the approximation of the Weingarten mapping (needed only in (P2h)) we take $H_h = \nabla(I_\Theta^{k_g}(n_h))$. The resulting linear systems are solved using a direct solver.
We start with problem (P1h). In Figure 6.1 the error measured in the $\|\cdot\|_{A_h^P}$-norm is shown for different choices of parameters and refinement levels.
Fig. 6.1: $\|\cdot\|_{A_h^P}$-error for problem (P1h) with $k_g = k$ and $\rho = h^{-1}$.
For isoparametric linear finite elements ($k = k_g = 1$) and a one order higher normal approximation for the penalty term ($k_p = 2$) we observe optimal $O(h)$-convergence. Choosing the same order for the normal approximation ($k_p = 1$) we do not have convergence. In the experiment with $k = k_g = k_p = 1$ we used $\eta = 10 h^{-2}$ (instead of $\eta = h^{-2}$) to have a bigger constant in the term $\eta^{1/2} h^{k_p}$ in (5.47), in order to see the loss of one order more clearly.
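The two structural properties claimed for $u^*$ can be checked numerically without any finite element machinery. The following is a minimal self-contained sketch (not the NGSolve implementation); the inner vector field used here is the one reconstructed above, but the two checks hold for any field whose components are homogeneous of degree zero: tangentiality is enforced by the projection $P(x) = I - n n^T$, and constancy in normal direction follows from the homogeneity.

```python
import math

def norm(x):
    return math.sqrt(sum(c * c for c in x))

def P_apply(x, v):
    # tangential projection P = I - n n^T with n = x/|x|
    n = [c / norm(x) for c in x]
    ndotv = sum(a * b for a, b in zip(n, v))
    return [vi - ndotv * ni for vi, ni in zip(v, n)]

def u_star(x):
    # inner field: each component homogeneous of degree 0,
    # so u_star is constant along the (radial) normal direction
    r2 = sum(c * c for c in x)
    r = math.sqrt(r2)
    v = [-x[1] ** 2 / r2, x[0] / r, x[2] / r]
    return P_apply(x, v)

x = (0.3, -0.7, 0.65)          # arbitrary point off the sphere
u = u_star(x)
Pu = P_apply(x, u)
# tangentiality: P u* = u*
assert all(abs(a - b) < 1e-12 for a, b in zip(u, Pu))
# constant in normal direction: u*(s x) = u*(x) for s > 0
u2 = u_star(tuple(2.0 * c for c in x))
assert all(abs(a - b) < 1e-12 for a, b in zip(u, u2))
```

Passing both assertions confirms that $u^*$ is tangential and equals its own constant-in-normal-direction extension $(u^*)^e$, as used in the error measurement on $\Gamma_h$.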
For the case $k = 2$ we do not observe optimal (second order) convergence in Figure 6.1. For $k = 2$, $\eta = h^{-2}$, $k_p = 3$ we obtain (only) first order convergence, whereas for $k = 2$, $\eta = h^{-3}$, $k_p = 4$ the error behaves as $\sim h^{1.5}$. All these results are in agreement with the bounds in Theorem 5.20, cf. Remark 5.3.
Next we consider problem (P2h). In Figure 6.2 we show the discretization error measured in the $\|\cdot\|_{A_h^P}$-norm for different choices of parameters and refinement levels. For isoparametric finite elements ($k = k_g$) and a one order higher normal approximation for the penalty term ($k_p = k+1$) we observe optimal $O(h^k)$-convergence for $k = 1, \ldots,$
3. For isoparametric quadratic finite elements ($k = k_g = 2$) and a normal approximation of order two in the penalty term ($k_p = 2$) we observe a loss of one order, i.e. $O(h)$-convergence. All these results are in agreement with the bounds in Theorem 5.21, cf. Remark 5.4.
Fig. 6.2: $\|\cdot\|_{A_h^P}$-error for problem (P2h) with $k_g = k$, $\rho = h^{-1}$ and $\eta = h^{-2}$.
Finally we present results for problem (Lh). The exact Lagrange multiplier $\lambda$ is computed according to equation (2.12). We use a preconditioned MINRES solver with a block diagonal preconditioner as introduced in [6] to solve the linear systems. In Figure 6.3 we present the error $\|u^* - u_h\|_{A_h^L}$ and in one case the error $\|\lambda - \lambda_h\|_M$, which is labeled with an M, for different choices of parameters and refinement levels (note that the two curves for $k_l = 1$ are almost indistinguishable).
We take $\rho = h^{-1}$ for superparametric finite elements ($k_g = k+1$) and $\rho = h$ for isoparametric finite elements ($k_g = k$). For superparametric finite elements ($k_g = k+1$) with $k_l = k$ we observe optimal $O(h^k)$-convergence. For these cases the error $\|\lambda - \lambda_h\|_M$ has the same convergence order (not shown). However, isoparametric quadratic finite elements ($k = k_g = 2$) with $k_l = k$ result in optimal $O(h^2)$-convergence for $\|u^* - u_h\|_{A_h^L}$ but suboptimal $O(h)$-convergence for $\|\lambda - \lambda_h\|_M$ (shown with label M in the figure). This shows that the power $k_g - 1$ in the term $h^{k_g-1}$ in Theorem 5.23 is sharp and that superparametric finite elements ($k_g = k+1$) are necessary to obtain
Fig.
6.3: $\|\cdot\|_{A_h^L}$-error and $\|\cdot\|_M$-error for problem (Lh).
an optimal order of convergence (for both the primal variable and the Lagrange multiplier). Taking $\rho = h^{-1}$ in this case results in better than $O(h)$-convergence but clearly less than $O(h^2)$-convergence (not shown). For superparametric quadratic finite elements ($k = 2$, $k_g = 3$) with $k_l = 1$ we observe (only) $O(h)$-convergence. All these results are in agreement with Theorem 5.23, cf. Remark 5.5.
A drawback of the Lagrange multiplier method compared with the two penalty methods is the fact that (in our experience) the resulting saddle point system is (much) more difficult to solve. The condition number of this matrix is typically very large, in particular for the case $\rho = h$.
We did not derive $L^2$-error bounds and therefore do not present numerical results for $L^2$-errors. We note, however, that for the cases of optimal $O(h^k)$-convergence in the energy norms we also observe $O(h^{k+1})$-convergence in the $L^2$-norm. In the case of the Lagrange multiplier method (Lh) we observe this optimal $L^2$-norm convergence only for the tangential error component, i.e. $\|P_h(u^* - u_h)\|_{L^2(\Gamma_h)}$. An analysis of $L^2$-norm convergence is left for future research.
Remark 6.1.
For the problem considered in this section we performed an experiment to see whether the $h^{-2}$ factor in the estimate (5.46) is sharp. We numerically computed
$$c_h := \min_{v_h \in V_{h,\Theta}^k} \frac{A_h^L(v_h, v_h)}{\|v_h\|_{H^1(\Gamma_h)}^2}$$
for the parameter values $\rho = h$, $k = 1$ and $k_g = 2$ as well as $k_g = 1$. The results clearly indicate a $c_h \sim h^2$ behavior.
REFERENCES
[1]
Netgen/NGSolve, https://ngsolve.org/ (accessed 17 April 2019).
[2] M. Arroyo and A. DeSimone, Relaxation dynamics of fluid membranes, Phys. Rev. E, 79 (2009), p. 031915.
[3] G. Dziuk, Finite elements for the Beltrami operator on arbitrary surfaces, in Partial Differential Equations and Calculus of Variations, S. Hildebrandt and R. Leis, eds., vol. 1357 of Lecture Notes in Mathematics, Springer, 1988, pp. 142–155.
[4] G. Dziuk and C. M. Elliott, Finite element methods for surface PDEs, Acta Numerica, 22 (2013), pp. 289–396.
[5] J. Grande, C. Lehrenfeld, and A. Reusken, Analysis of a high-order trace finite element method for PDEs on level set surfaces, SIAM Journal on Numerical Analysis, 56 (2018), pp. 228–255.
[6] S. Gross, T. Jankuhn, M. Olshanskii, and A. Reusken, A trace finite element method for vector-Laplacians on surfaces, SIAM Journal on Numerical Analysis, 56 (2018), pp. 2406–2429.
[7] M. E. Gurtin and A. I. Murdoch, A continuum theory of elastic material surfaces, Archive for Rational Mechanics and Analysis, 57 (1975), pp. 291–323.
[8] A. Hansbo and P. Hansbo, An unfitted finite element method, based on Nitsche's method, for elliptic interface problems, Comput. Methods Appl. Mech. Engrg., 191 (2002), pp. 5537–5552.
[9] P. Hansbo and M. G. Larson, A stabilized finite element method for the Darcy problem on surfaces, IMA Journal of Numerical Analysis, 37 (2016), pp. 1274–1299.
[10] P. Hansbo, M. G. Larson, and K. Larsson, Analysis of finite element methods for vector Laplacians on surfaces, arXiv preprint arXiv:1610.06747, 2016.
[11] T. Jankuhn, M. A. Olshanskii, and A. Reusken, Incompressible fluid problems on embedded surfaces: Modeling and variational formulations, Interfaces and Free Boundaries, 20 (2018), pp. 353–377.
[12] H. Koba, C. Liu, and Y. Giga, Energetic variational approaches for incompressible fluid systems on an evolving surface, Quart. Appl. Math., 75 (2017), pp. 359–389.
[13] C. Lehrenfeld, ngsxfem, https://github.com/ngsxfem (accessed 17 April 2019).
[14] C. Lehrenfeld, High order unfitted finite element methods on level set domains using isoparametric mappings, Computer Methods in Applied Mechanics and Engineering, 300 (2016), pp. 716–733.
[15] C. Lehrenfeld and A. Reusken, Analysis of a high-order unfitted finite element method for elliptic interface problems, IMA Journal of Numerical Analysis, 38 (2017), pp. 1351–1387.
[16] T.-H. Miura, On singular limit equations for incompressible fluids in moving thin domains, Quart. Appl. Math., 76 (2018), pp. 215–251.
[17] I. Nitschke, S. Reuther, and A. Voigt, Hydrodynamic interactions in polar liquid crystals on evolving surfaces, Phys. Rev. Fluids, 4 (2019), p. 044002.
[18] M. A. Olshanskii, A. Quaini, A. Reusken, and V. Yushutin, A finite element method for the surface Stokes problem, SIAM J. Sci. Comp., 40 (2018), pp. A2492–A2518.
[19] M. A. Olshanskii and A. Reusken, Trace finite element methods for PDEs on surfaces, in Geometrically Unfitted Finite Element Methods and Applications, S. P. A. Bordas, E. Burman, M. G. Larson, and M. A. Olshanskii, eds., Springer International Publishing, Cham, 2017, pp. 211–258.
[20] A. Reusken, Analysis of trace finite element methods for surface partial differential equations, IMA Journal of Numerical Analysis, 35 (2015), pp. 1568–1590.
[21] S. Reuther and A. Voigt, The interplay of curvature and vortices in flow on curved surfaces, Multiscale Modeling & Simulation, 13 (2015), pp. 632–643.
[22] J. Schöberl, NETGEN: An advancing front 2D/3D-mesh generator based on abstract rules, Computing and Visualization in Science, 1 (1997), pp. 41–52.
[23]