The role of mesh quality and mesh quality indicators in the Virtual Element Method
Tommaso Sorgente, Silvia Biasotti, Gianmarco Manzini, Michela Spagnuolo
Istituto di Matematica Applicata e Tecnologie Informatiche, Consiglio Nazionale delle Ricerche, Italy
Abstract
Since its introduction, the Virtual Element Method (VEM) was shown to be able to deal with a large variety of polygons, while achieving good convergence rates. The regularity assumptions proposed in the VEM literature to guarantee the convergence on a theoretical basis are therefore quite general. They have been deduced in analogy to the similar conditions developed in the Finite Element Method (FEM) analysis. In this work, we experimentally show that the VEM still converges with almost optimal rates and low errors in the $L^2$ and $H^1$ norms even if we significantly break the regularity assumptions that are used in the literature. These results suggest that the regularity assumptions proposed so far might be overestimated. We also exhibit examples on which the VEM sub-optimally converges or diverges. Finally, we introduce a mesh quality indicator that experimentally correlates the extent of the violation of the regularity assumptions with the performance of the VEM solution, thus predicting whether a dataset is potentially critical for the VEM.

Key words: virtual element method, polygonal mesh, mesh regularity assumptions, mesh quality indicators, small edges, 2D Poisson problem
1. Introduction
Finite element methods are very successful in the numerical treatment of partial differential equations (PDEs), but their formulation requires an explicit knowledge of the basis functions. Consequently, they are mostly restricted to meshes with elements having a simple geometrical shape, such as triangles or quadrilaterals. This restriction is overcome by polytopal element methods such as the VEM, which are designed to provide arbitrary order of accuracy on more generally shaped elements. In the VEM setting, we partition the computational domain into polytopal elements, and the explicit knowledge of the basis functions is not required, since the VEM formulation and its practical implementation are based on suitable polynomial projections that are always computable from a careful choice of the degrees of freedom.

The VEM was originally formulated in [6] as a conforming FEM for the Poisson problem by rewriting in a variational setting the nodal mimetic finite difference (MFD) method [11, 15, 23, 36] for solving diffusion problems on unstructured polygonal meshes. A survey on the MFD method can be found in the review paper [34] and the research monograph [12]. The VEM scheme inherits the flexibility of the MFD method with respect to the admissible meshes, and this feature is well reflected in the many significant applications that have been developed so far; see, for example, [3, 4, 8-10, 13, 14, 18-20, 25, 28, 29, 32, 39-41, 44]. Because of its origins, the VEM is intimately connected with other finite element approaches. The connection between the VEM and finite elements on polygonal/polyhedral meshes is thoroughly investigated in [27, 33, 37], between the VEM and discontinuous skeletal gradient discretizations in [33], and between the VEM and the BEM-based FEM method in [26].
The VEM has been extended to convection-reaction-diffusion problems with variable coefficients in [9]. Optimal convergence rates for the virtual element approximations of the Poisson equation were proved in the $H^1$ and $L^2$ norms; see for instance [2, 6, 16, 17, 21, 22, 30]. The theoretical results behind the VEM convergence rate involve an estimate of the approximation error, which is due to both analytical assumptions (interpolation and polynomial projections of the virtual element functions) and geometrical assumptions (the geometrical shape of the mesh elements). There is a general concordance in the literature about the analytical assumptions, but the understanding of which geometrical features of the mesh elements influence the approximation error and the convergence rate the most is still an open issue. Various geometrical (or regularity) assumptions have been proposed to ensure that all elements of any mesh of a given mesh family in the refinement process are sufficiently regular. These assumptions guarantee the VEM convergence and optimal estimates of the approximation error with respect to different norms. However, as already observed since the very first papers, cf. [2], the VEM seems to maintain its optimal convergence rates also when we use mesh families that do not satisfy the usual geometrical assumptions.

As a first contribution of this paper, we overview the geometrical assumptions introduced in the VEM literature to guarantee the convergence. Then, we define a mesh generation framework that allows us to build sequences of meshes (datasets) gradually introducing several pathologies. The so-generated datasets systematically violate the geometrical assumptions, and enable a correlation analysis between such assumptions and the VEM performance.
We experimentally show how the VEM presents a good convergence rate on most examples and only fails in very few situations. We also provide an indicator of the violation of the geometrical assumptions, which depends uniquely on the geometry of the mesh elements. We show a correspondence between this indicator and the performance of the VEM on a given mesh, or mesh family, in terms of approximation error and convergence rate. Our work is focused on developing a strategy to evaluate if a given sequence of meshes is suited to the virtual element discretization, and possibly to predict the behaviour of the numerical discretization before applying the method. In this sense, the approach that we present in this work is more in an a priori than in an a posteriori setting.

The paper is organized as follows. In Section 2, we present the VEM and the convergence results for the Poisson equation with Dirichlet boundary conditions. In Section 3, we detail the geometrical assumptions on the mesh elements that are used in the literature to guarantee the convergence of the VEM. In Section 4, we present a number of datasets which do not satisfy these assumptions, and experimentally investigate the convergence of the VEM over them. In Section 5, we propose a mesh quality indicator to predict the behaviour of the VEM over a given dataset. In Section 6, we offer our concluding remarks and discuss future developments and work. In Appendices A and B, we review the major theoretical results on the error analysis that are available in the virtual element literature, reporting the geometrical conditions assumed in each result, and present the algorithmic procedures that we used to build the datasets.

1.1. Notation and technicalities
We use the standard definition and notation of Sobolev spaces, norms and seminorms, cf. [1]. Let $k$ be a nonnegative integer number. The Sobolev space $H^k(\omega)$ consists of all square integrable functions whose weak derivatives up to order $k$ are also square integrable, defined on the open, bounded, connected subset $\omega$ of $\mathbb{R}^d$, $d = 1, 2$. As usual, if $k = 0$, we prefer the notation $L^2(\omega)$. Norm and seminorm in $H^k(\omega)$ are denoted by $\|\cdot\|_{k,\omega}$ and $|\cdot|_{k,\omega}$, while for the inner product in $L^2(\omega)$ we prefer the integral notation. We denote the space of polynomials of degree less than or equal to $k \geq 0$ on $\omega$ by $\mathbb{P}_k(\omega)$, and conventionally assume that $\mathbb{P}_{-1}(\omega) = \{0\}$. In our implementation, we consider the orthogonal basis on every mesh edge given by the univariate Legendre polynomials, and inside every mesh cell the one provided by the Gram-Schmidt algorithm applied to the standard monomial basis.

Finally, throughout the paper we use the letter $C$ in the error inequalities to denote a real, positive constant that can have a different value at any occurrence. This constant may depend on the model and on some discretization parameters, such as the coercivity and stability constants of the bilinear form and of the linear functional used in the variational formulation, the mesh regularity constants used when defining the properties of the mesh families to which the numerical method is suitable, and the polynomial order of the method. Nevertheless, this constant is always assumed to be independent of the mesh size parameter $h$ that characterizes the mesh and will be introduced in the next section.
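As a concrete illustration of the Gram-Schmidt construction mentioned above, the following self-contained sketch (ours, not the paper's code; all names are illustrative) orthonormalizes the monomial basis with respect to the $L^2(0,1)$ inner product, approximated by Gauss-Legendre quadrature, and checks orthonormality numerically.

```python
import numpy as np

def l2_inner(f, g, n_quad=64):
    """Approximate the L2(0,1) inner product with Gauss-Legendre quadrature."""
    x, w = np.polynomial.legendre.leggauss(n_quad)
    x = 0.5 * (x + 1.0)   # map nodes from (-1, 1) to (0, 1)
    w = 0.5 * w           # rescale the weights accordingly
    return np.sum(w * f(x) * g(x))

def orthonormal_monomials(k):
    """Return callables forming an L2(0,1)-orthonormal basis of P_k."""
    basis = []
    for j in range(k + 1):
        mono = lambda x, j=j: x**j
        # classical Gram-Schmidt: subtract projections on previous functions
        coeffs = [l2_inner(mono, q) for q in basis]
        def p(x, mono=mono, basis=list(basis), coeffs=coeffs):
            val = mono(x)
            for c, q in zip(coeffs, basis):
                val = val - c * q(x)
            return val
        norm = np.sqrt(l2_inner(p, p))
        basis.append(lambda x, p=p, norm=norm: p(x) / norm)
    return basis

basis = orthonormal_monomials(3)
gram = np.array([[l2_inner(p, q) for q in basis] for p in basis])
```

The resulting Gram matrix is the identity up to round-off, which is the defining property of the orthogonal cell basis used in the implementation.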
2. The virtual element method
We investigate the performance of the VEM on the elliptic model problem provided by the Poisson equation with Dirichlet boundary conditions. In this section, we briefly review the model equations in strong and weak form and the formulation of the virtual element approximation.
The Poisson equation and the virtual element approximation.
Let $\Omega$ be an open, bounded, connected subset of $\mathbb{R}^2$ with polygonal boundary $\Gamma$. Consider the Poisson equation with homogeneous Dirichlet boundary conditions in strong form:

$-\Delta u = f$ in $\Omega$,  (1)
$u = 0$ on $\Gamma$.  (2)

The variational formulation of problem (1)-(2) reads as: Find $u \in H^1_0(\Omega)$ such that

$a(u, v) = F(v) \quad \forall v \in H^1_0(\Omega)$,  (3)

where the bilinear form $a(\cdot,\cdot) : H^1_0(\Omega) \times H^1_0(\Omega) \to \mathbb{R}$ is given by

$a(u, v) = \int_\Omega \nabla u \cdot \nabla v \, d\mathbf{x}$  (4)

and the right-hand side linear functional $F : L^2(\Omega) \to \mathbb{R}$ is given by

$F(v) = \int_\Omega f v \, d\mathbf{x}$,  (5)

with the (implicit) assumption that $f \in L^2(\Omega)$. The well-posedness of the variational formulation (3) stems from the coercivity and continuity of the bilinear form $a(\cdot,\cdot)$, the continuity of the right-hand side linear functional $F(\cdot)$, and the application of the Lax-Milgram theorem [42, Section 2.7].

The numerical method that we consider in this paper is mainly based on References [2, 6], and provides an optimal approximation on polygonal meshes when the diffusion coefficient is variable in space. To ease the presentation, we consider the case of homogeneous Dirichlet boundary conditions, the extension to the non-homogeneous case being deemed as straightforward. Such a case is also considered in the numerical experiments carried out in this paper. The virtual element approximation of equation (3) reads as: Find $u_h \in V^h_k$ such that

$a_h(u_h, v_h) = F_h(v_h) \quad \forall v_h \in V^h_k$,  (6)

where $u_h$, $V^h_k$, $a_h(\cdot,\cdot)$, $F_h(\cdot)$ are the virtual element approximations of $u$, $H^1_0(\Omega)$, $a(\cdot,\cdot)$, and $F(\cdot)$. We review the construction of these mathematical objects in the rest of this section.

Mesh notation.
Let $\mathcal{T} = \{\Omega_h\}_{h \in \mathcal{H}}$ be a set of decompositions $\Omega_h$ of the computational domain $\Omega$ into a finite set of nonoverlapping polygonal elements $E$. We refer to $\mathcal{T}$ as the mesh family and to each one of its members $\Omega_h$ as the mesh. The subindex label $h$, indicating the mesh size, is the maximum of the diameters of the mesh elements, defined by $h_E = \sup_{\mathbf{x}, \mathbf{y} \in E} |\mathbf{x} - \mathbf{y}|$. We assume that the mesh sizes of the mesh family $\mathcal{T}$ form a countable subset $\mathcal{H}$ of the real line $(0, +\infty)$ having $0$ as its unique accumulation point. Each element $E$ has a nonintersecting polygonal boundary $\partial E$ formed by straight edges $e$, center of gravity $\mathbf{x}_E = (x_E, y_E)$ and area $|E|$. We denote the edge midpoint by $\mathbf{x}_e = (x_e, y_e)$ and its length by $|e|$, and with a small abuse of notation, we write $e \in \partial E$ to indicate that edge $e$ runs through the set of edges forming the elemental boundary $\partial E$. The convergence analysis of the VEM and the derivation of the error estimates in the $L^2$ and $H^1$ norms require a few suitable assumptions on the mesh family $\mathcal{T}$. Such assumptions are discussed in detail in the next section. On every mesh $\Omega_h$, given an integer $k \geq 1$, we define the space of piecewise discontinuous polynomials of degree $k$, $\mathbb{P}_k(\Omega_h)$, containing the functions $q$ such that $q|_E \in \mathbb{P}_k(E)$ for every $E \in \Omega_h$.

The virtual element spaces.
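The elemental quantities just introduced ($|E|$, $\mathbf{x}_E$, $h_E$) are straightforward to compute from the vertex list of a polygon; the following sketch (ours, purely illustrative) does so with the shoelace formula and a brute-force diameter.

```python
import numpy as np

def element_quantities(verts):
    """Area |E|, center of gravity x_E, and diameter h_E of a polygon
    given by its vertices in counter-clockwise order."""
    v = np.asarray(verts, dtype=float)
    x, y = v[:, 0], v[:, 1]
    xs, ys = np.roll(x, -1), np.roll(y, -1)
    cross = x * ys - xs * y
    area = 0.5 * np.sum(cross)                    # shoelace formula
    cx = np.sum((x + xs) * cross) / (6.0 * area)  # centroid coordinates
    cy = np.sum((y + ys) * cross) / (6.0 * area)
    # h_E = sup |x - y| over E, attained at a pair of vertices
    d = np.linalg.norm(v[:, None, :] - v[None, :, :], axis=-1)
    return area, (cx, cy), d.max()

area, (cx, cy), h = element_quantities([(0, 0), (1, 0), (1, 1), (0, 1)])
```

For the unit square this returns area 1, centroid $(1/2, 1/2)$ and diameter $\sqrt{2}$, consistent with the definitions above.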
Let $k \geq 1$ be an integer number and $E \in \Omega_h$ a generic mesh element. The conforming virtual element space $V^h_k$ of order $k$ built on mesh $\Omega_h$ is obtained by gluing together the elemental approximation spaces denoted by $V^h_k(E)$, so that

$V^h_k := \{ v_h \in H^1_0(\Omega) : v_h|_E \in V^h_k(E) \ \forall E \in \Omega_h \}$.  (7)

The local virtual element space $V^h_k(E)$ is defined in accordance with the enhancement strategy introduced in [2]:

$V^h_k(E) = \{ v_h \in H^1(E) : v_h|_{\partial E} \in C^0(\partial E),\ v_h|_e \in \mathbb{P}_k(e) \ \forall e \in \partial E,\ \Delta v_h \in \mathbb{P}_k(E)$, and $\int_E (v_h - \Pi^{\nabla,E}_k v_h)\, q \, dV = 0 \ \forall q \in \mathbb{P}_k(E) \backslash \mathbb{P}_{k-2}(E) \}$,  (8)

where $\Pi^{\nabla,E}_k$ is the elliptic projection that will be discussed in the next section; $\mathbb{P}_k(E)$ and $\mathbb{P}_k(e)$ are the linear spaces of the polynomials of degree at most $k$, which are respectively defined over an element $E$ or an edge $e$ according to our notation; and $\mathbb{P}_k(E) \backslash \mathbb{P}_{k-2}(E)$ is the space of polynomials of degree equal to $k-1$ and $k$. By definition, the space $V^h_k(E)$ contains $\mathbb{P}_k(E)$, and the global space $V^h_k$ is a conforming subspace of $H^1_0(\Omega)$.

The elliptic projection operators.
The definition in (8) requires the elliptic projection operator $\Pi^{\nabla,E}_k : H^1(E) \to \mathbb{P}_k(E)$, which, for any $v_h \in V^h_k(E)$, is given by:

$\int_E \nabla \Pi^{\nabla,E}_k v_h \cdot \nabla q \, dV = \int_E \nabla v_h \cdot \nabla q \, dV \quad \forall q \in \mathbb{P}_k(E)$,  (9)

$\int_{\partial E} \big( \Pi^{\nabla,E}_k v_h - v_h \big) \, dS = 0$.  (10)

Equation (10) allows us to remove the kernel of the gradient operator from the definition of $\Pi^{\nabla,E}_k$, so that the $k$-degree polynomial $\Pi^{\nabla,E}_k v_h$ is uniquely defined for every virtual element function $v_h \in V^h_k(E)$. Moreover, projector $\Pi^{\nabla,E}_k$ is a polynomial-preserving operator, i.e., $\Pi^{\nabla,E}_k q = q$ for every $q \in \mathbb{P}_k(E)$. We can also define a global projection operator $\Pi^\nabla_k : H^1_0(\Omega) \to \mathbb{P}_k(\Omega_h)$, which is such that $\Pi^\nabla_k v_h|_E = \Pi^{\nabla,E}_k (v_h|_E)$ for all $E \in \Omega_h$. A major property of the elliptic projection operator is that every projection $\Pi^{\nabla,E}_k v_h$ of a virtual element function $v_h \in V^h_k(E)$ is computable from the degrees of freedom of $v_h$ associated with element $E$, which are defined as follows.

The degrees of freedom.
The degrees of freedom of a virtual element function $v_h \in V^h_k(E)$ are given by the following set of values [6]:

(D1) for $k \geq 1$, the values of $v_h$ at the vertices of $E$;

(D2) for $k \geq 2$, the values of $v_h$ at the $k-1$ internal points of the $(k+1)$-point Gauss-Lobatto quadrature rule on every edge $e \in \partial E$;

(D3) for $k \geq 2$, the cell moments of $v_h$ of order up to $k-2$ on element $E$:

$\frac{1}{|E|} \int_E v_h \, q \, dV \quad \forall q \in \mathbb{P}_{k-2}(E)$.  (11)

This set of values is unisolvent in $V^h_k(E)$, cf. [6]; hence, every virtual element function is uniquely identified by it. The degrees of freedom of a virtual element function in the global space $V^h_k$ are given by collecting the elemental degrees of freedom (D1)-(D3). Their unisolvence in $V^h_k$ is an immediate consequence of their unisolvence in every elemental space $V^h_k(E)$.

Orthogonal projections.
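The internal Gauss-Lobatto nodes used in (D2) are the roots of the derivative of the Legendre polynomial $P_k$ on the reference edge $[-1,1]$. The following sketch (illustrative, not the paper's code) computes them with NumPy's Legendre class.

```python
import numpy as np

def gauss_lobatto_internal(k):
    """The k-1 internal nodes of the (k+1)-point Gauss-Lobatto rule on
    [-1, 1], i.e. the roots of P_k' (P_k = Legendre polynomial)."""
    Pk = np.polynomial.legendre.Legendre.basis(k)
    return np.sort(Pk.deriv().roots())

internal_k2 = gauss_lobatto_internal(2)  # one internal node
internal_k3 = gauss_lobatto_internal(3)  # two internal nodes
```

For $k=2$ the single internal node is $0$, and for $k=3$ the two internal nodes are $\pm\sqrt{1/5}$, which together with the edge endpoints give the $(k+1)$-point rule.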
From the degrees of freedom of a virtual element function $v_h \in V^h_k(E)$ we can also compute the orthogonal projection $\Pi^{0,E}_k v_h \in \mathbb{P}_k(E)$, cf. [2]. In fact, the orthogonal projection $\Pi^{0,E}_k v_h$ of a function $v_h \in V^h_k(E)$ is the solution of the variational problem:

$\int_E \Pi^{0,E}_k v_h \, q \, dV = \int_E v_h \, q \, dV \quad \forall q \in \mathbb{P}_k(E)$.  (12)

The right-hand side is the integral of $v_h$ against the polynomial $q$, and is computable from the degrees of freedom (D3) of $v_h$ when $q$ is a polynomial of degree up to $k-2$, and from the moments of $\Pi^{\nabla,E}_k v_h$ when $q$ is a polynomial of degree $k-1$ and $k$, cf. (8). Clearly, the orthogonal projection $\Pi^{0,E}_{k-1} \nabla v_h$ is also computable. As we have done for the elliptic projection, we can also define a global projection operator $\Pi^0_k : L^2(\Omega) \to \mathbb{P}_k(\Omega_h)$, which projects the virtual element functions on the space of discontinuous polynomials of degree at most $k$ built on mesh $\Omega_h$. This operator is given by taking the elemental $L^2$-orthogonal projection $\Pi^{0,E}_k v_h$ in every mesh element $E$, so that $(\Pi^0_k v_h)|_E = \Pi^{0,E}_k (v_h|_E)$, which is computable from the degrees of freedom of $v_h$ associated with element $E$.

Approximation properties in the virtual element space.
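A one-dimensional analogue makes the mechanism of (12) concrete: the $L^2$ projection onto $\mathbb{P}_k$ amounts to solving a small linear system with the mass matrix of a polynomial basis. The sketch below (ours, with illustrative names) does this on $(0,1)$ with the monomial basis and quadrature.

```python
import numpy as np

def l2_project(v, k, n_quad=64):
    """Coefficients (monomial basis) of the L2(0,1) projection of v onto P_k."""
    x, w = np.polynomial.legendre.leggauss(n_quad)
    x, w = 0.5 * (x + 1.0), 0.5 * w      # map the rule to (0, 1)
    M = np.array([[np.sum(w * x**i * x**j) for j in range(k + 1)]
                  for i in range(k + 1)])  # mass matrix of the monomials
    b = np.array([np.sum(w * v(x) * x**i) for i in range(k + 1)])
    return np.linalg.solve(M, b)

# projecting a function already in P_k must reproduce it exactly
coeffs = l2_project(lambda x: 1.0 + 2.0 * x, k=2)
```

As expected from the polynomial-preserving property, the projection of $1 + 2x$ onto $\mathbb{P}_2$ returns the coefficients $(1, 2, 0)$.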
Under a suitable regularity assumption on the mesh family used in the formulation of the VEM (assumption G1, which will be the topic of the next section), we can prove the following estimates on the projection and interpolation operators:

(i) for every $s$ with $1 \leq s \leq k+1$ and for every $w \in H^s(E)$ there exists a $w_\pi \in \mathbb{P}_k(E)$ such that

$|w - w_\pi|_{0,E} + h_E \, |w - w_\pi|_{1,E} \leq C h_E^s \, |w|_{s,E}$;  (13)

(ii) for every $s$ with $1 \leq s \leq k+1$, for every $h$, for all $E \in \Omega_h$ and for every $w \in H^s(E)$ there exists a $w_I \in V^h_k(E)$ such that

$|w - w_I|_{0,E} + h_E \, |w - w_I|_{1,E} \leq C h_E^s \, |w|_{s,E}$.  (14)

In these inequalities, $C$ is a real positive constant depending only on the polynomial degree $k$ and on some mesh regularity constants that we will introduce and discuss in the next section.

The virtual element bilinear forms.
The elliptic and orthogonal projections are needed to define the virtual element bilinear form $a_h(\cdot,\cdot) : V^h_k \times V^h_k \to \mathbb{R}$ and the forcing term $F_h : V^h_k \to \mathbb{R}$. Following the "VEM gospel", we write the discrete bilinear form $a_h(\cdot,\cdot)$ as the sum of elemental contributions

$a_h(u_h, v_h) = \sum_{E \in \Omega_h} a^E_h(u_h, v_h)$,  (15)

where every elemental contribution is a bilinear form $a^E_h(\cdot,\cdot) : V^h_k(E) \times V^h_k(E) \to \mathbb{R}$ designed to approximate the corresponding elemental bilinear form $a^E(\cdot,\cdot) : H^1(E) \times H^1(E) \to \mathbb{R}$,

$a^E(v, w) = \int_E \nabla v \cdot \nabla w \, dV \quad \forall v, w \in H^1(E)$.

The bilinear form $a^E_h(\cdot,\cdot)$ on each element $E$ is given by

$a^E_h(u_h, v_h) = \int_E \nabla \Pi^{\nabla,E}_k u_h \cdot \nabla \Pi^{\nabla,E}_k v_h \, dV + S^E_h\big( (I - \Pi^{\nabla,E}_k) u_h, (I - \Pi^{\nabla,E}_k) v_h \big)$.  (16)

The bilinear form $S^E_h(\cdot,\cdot)$ in the definition of $a^E_h(\cdot,\cdot)$ provides the stability term and can be any computable, symmetric, positive definite bilinear form defined on $V^h_k(E)$ for which there exist two positive constants $c_*$ and $c^*$ such that

$c_* \, a^E(v_h, v_h) \leq S^E_h(v_h, v_h) \leq c^* \, a^E(v_h, v_h) \quad \forall v_h \in V^h_k(E) \cap \ker\big(\Pi^{\nabla,E}_k\big)$.  (17)

The inequalities in (17) imply that $S^E_h(\cdot,\cdot)$ scales like $a^E(\cdot,\cdot)$ with respect to $h_E$. Also, the stabilization term in the definition of $a^E_h(\cdot,\cdot)$ is zero if at least one of its two entries is a polynomial of degree (at most) $k$, since $\Pi^{\nabla,E}_k$ is a polynomial preserving operator.

In our implementation of the VEM, we consider the stabilization proposed in [38]:

$S^E_h(v_h, w_h) = \sum_{i=1}^{N_{\mathrm{dofs}}} A^E_{ii} \, \mathrm{DOF}_i(v_h) \, \mathrm{DOF}_i(w_h)$,  (18)

where $A^E = (A^E_{ij})$ is the matrix resulting from the implementation of the first term in the bilinear form $a^E_h(\cdot,\cdot)$. Let $\phi_i$ be the $i$-th "canonical" basis function generating the virtual element space, that is, the function in $V^h_k(E)$ whose $i$-th degree of freedom, for $i = 1, \ldots, N_{\mathrm{dofs}}$ (according to a suitable renumbering of the degrees of freedom in (D1), (D2), and (D3)), has value equal to $1$, while all its other degrees of freedom are zero. These basis functions are unknown in the virtual element framework, but their projections $\Pi^{0,E}_{k-1} \nabla \phi_i$ (and $\Pi^{0,E}_{k-1} \nabla \phi_j$) are computable from their degrees of freedom. With this notation, the $(i,j)$-th entry of matrix $A^E$ is given by

$A^E_{ij} := a^E\big( \Pi^{\nabla,E}_k \phi_i, \Pi^{\nabla,E}_k \phi_j \big)$.  (19)

The stabilization in (18) is sometimes called the "D-recipe stabilization" in the virtual element literature, and contains the so-called "dofi-dofi (dd) stabilization" originally proposed in [6] as the special case with $A_{ii} = 1$:

$S^{E,\mathrm{dd}}_h(v_h, w_h) = \sum_{i=1}^{N_{\mathrm{dofs}}} \mathrm{DOF}_i(v_h) \, \mathrm{DOF}_i(w_h)$.  (20)

We explicitly mention the stabilization (20) because many convergence results available from the literature, which we briefly review in Appendix A, are obtained by using it.

The stabilization term, and, in particular, condition (17), is designed so that $a^E_h(\cdot,\cdot)$ satisfies the two fundamental properties:

- $k$-consistency: for all $v_h \in V^h_k(E)$ and for all $q \in \mathbb{P}_k(E)$ it holds that

$a^E_h(v_h, q) = a^E(v_h, q)$;  (21)

- stability: there exist two positive constants $\alpha_*$, $\alpha^*$, independent of $h$ and $E$, such that

$\alpha_* \, a^E(v_h, v_h) \leq a^E_h(v_h, v_h) \leq \alpha^* \, a^E(v_h, v_h) \quad \forall v_h \in V^h_k(E)$.  (22)

The virtual element forcing term.
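To make the construction (16) tangible, the following sketch assembles the lowest-order ($k=1$) local VEM stiffness matrix of a polygon with the dofi-dofi stabilization (20), using the standard matrices $D$ (dofs of the scaled monomials) and $B$ (boundary integrals of the monomial gradients). This is our own minimal illustration, not the paper's implementation, and it covers only $k=1$.

```python
import numpy as np

def local_stiffness_k1(verts):
    """Local k=1 VEM stiffness matrix (consistency + dofi-dofi stability)
    for a polygon with counter-clockwise vertices."""
    v = np.asarray(verts, dtype=float)
    n = len(v)
    x, y = v[:, 0], v[:, 1]
    xs, ys = np.roll(x, -1), np.roll(y, -1)
    area = 0.5 * np.sum(x * ys - xs * y)
    cx = np.sum((x + xs) * (x * ys - xs * y)) / (6 * area)
    cy = np.sum((y + ys) * (x * ys - xs * y)) / (6 * area)
    hE = max(np.linalg.norm(v[i] - v[j]) for i in range(n) for j in range(n))
    # D: dofs of the scaled monomials {1, (x-x_E)/h_E, (y-y_E)/h_E}
    D = np.column_stack([np.ones(n), (x - cx) / hE, (y - cy) / hE])
    # B: integrals int_E grad(m_a).grad(phi_i), via the boundary identity
    # oint n_x phi_i ds = (y_{i+1} - y_{i-1})/2 (and analogously for n_y)
    B = np.zeros((3, n))
    B[0, :] = 1.0 / n
    B[1, :] = (np.roll(y, -1) - np.roll(y, 1)) / (2 * hE)
    B[2, :] = -(np.roll(x, -1) - np.roll(x, 1)) / (2 * hE)
    G = B @ D
    Pstar = np.linalg.solve(G, B)   # projection, monomial coefficients
    P = D @ Pstar                   # projection, nodal values
    Gt = G.copy(); Gt[0, :] = 0.0   # drop the constant part of G
    I = np.eye(n)
    return Pstar.T @ Gt @ Pstar + (I - P).T @ (I - P)

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
K = local_stiffness_k1(square)
u = np.array([p[0] for p in square])  # nodal values of u(x, y) = x
energy = u @ K @ u                    # exact energy: |grad u|^2 |E| = 1
```

Because $u(x,y)=x$ is linear, the stabilization term vanishes and the $k$-consistency property (21) forces the discrete energy to equal the exact one; the matrix is also symmetric with the constants in its kernel.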
To approximate the right-hand side of (6), we split it into the sum of elemental contributions, and every local linear functional is approximated by using the orthogonal projection $\Pi^{0,E}_k v_h$:

$F_h(v_h) = \sum_{E \in \Omega_h} \big( f, \Pi^{0,E}_k v_h \big)_E$, where $\big( f, \Pi^{0,E}_k v_h \big)_E = \int_E f \, \Pi^{0,E}_k v_h \, dV$.  (23)

Main convergence properties.
The well-posedness of the discrete formulation (6) stems from the coercivity of the bilinear form $a_h(\cdot,\cdot)$, the continuity of the right-hand side linear functional $\big( f, \Pi^0_k \cdot \big)$, and the application of the Lax-Milgram theorem [42, Section 2.7].

In this work, we are interested in checking whether the VEM maintains optimal convergence rates on different mesh families that may display some pathological situations. From a theoretical viewpoint, the convergence estimates hold under some constraints on the shapes of the elements forming the mesh, called mesh geometrical (or regularity) assumptions. We summarize the major findings from the literature in Appendix A, and in the next sections we will investigate how breaking such constraints may affect these results.

Let $u \in H^{k+1}(\Omega)$ be the solution to the variational problem (3) on a convex domain $\Omega$ with $f \in H^k(\Omega)$. Let $u_h \in V^h_k$ be the solution of the virtual element method (6) on every mesh of a mesh family $\mathcal{T} = \{\Omega_h\}$ satisfying a suitable set of mesh geometrical assumptions. Then, a strictly positive constant $C$ exists such that

- the $H^1$-error estimate holds:

$\|u - u_h\|_{1,\Omega} \leq C h^k \big( \|u\|_{k+1,\Omega} + |f|_{k,\Omega} \big)$;  (24)

- the $L^2$-error estimate holds:

$\|u - u_h\|_{0,\Omega} \leq C h^{k+1} \big( \|u\|_{k+1,\Omega} + |f|_{k,\Omega} \big)$.  (25)

Constant $C$ may depend on the stability constants $\alpha_*$ and $\alpha^*$, on mesh regularity constants which we will introduce in the next section, on the size of the computational domain $|\Omega|$, and on the approximation degree $k$. Constant $C$ is normally independent of $h$, but for the most extreme meshes it may depend on the ratio between the longest and shortest edge lengths, cf. Appendix A.

Finally, we note that the approximate solution $u_h$ is not explicitly known inside the elements.
Consequently, in the numerical experiments of Section 4.2, we approximate the error in the $L^2$-norm as follows:

$\|u - u_h\|_{0,\Omega} \approx \|u - \Pi^0_k u_h\|_{0,\Omega}$,

where $\Pi^0_k u_h$ is the global $L^2$-orthogonal projection of the virtual element approximation $u_h$. In turn, we approximate the error in the energy norm as follows:

$|u - u_h|_{1,\Omega} \approx \sqrt{a_h(u_I - u_h, u_I - u_h)}$,

where $u_I$ is the virtual element interpolant of the exact solution $u$.
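In practice, once these errors are computed on a refinement sequence, the observed convergence order between two consecutive meshes is read off as $\log(e_1/e_2)/\log(h_1/h_2)$. A minimal sketch of this standard post-processing step (ours, not the paper's script):

```python
import numpy as np

def observed_orders(h, err):
    """Observed convergence orders between consecutive refinements."""
    h, err = np.asarray(h, float), np.asarray(err, float)
    return np.log(err[:-1] / err[1:]) / np.log(h[:-1] / h[1:])

# synthetic errors behaving like C * h^2 (e.g. the L2 estimate for k = 1)
h = np.array([0.1, 0.05, 0.025, 0.0125])
orders = observed_orders(h, 3.0 * h**2)
```

For errors scaling exactly like $h^2$ every observed order equals 2, matching the exponent $k+1$ in (25) for $k=1$.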
3. Geometrical Assumptions
In this section, we review the geometrical assumptions that have appeared in the VEM literature since their introduction in [6]. All the assumptions are defined for a single mesh $\Omega_h$, but the conditions contained in them are required to hold independently of $h$. Therefore, when considering a mesh family $\mathcal{T} = \{\Omega_h\}_h$, these assumptions have to be verified simultaneously by every $\Omega_h \in \mathcal{T}$.

It is well known from the FEM literature that the approximation properties depend on specific assumptions on the geometry of the elements. For example, classical geometrical assumptions for a family of triangulations $(\Omega_h)_{h \to 0}$ are the ones respectively introduced in [31] and [45]:

(a) Shape regularity condition: there exists a real number $\gamma \in (0,1)$, independent of $h$, such that for all $E \in \Omega_h$ we have $\gamma h_E \leq r_E$, where $h_E$ and $r_E$ are, respectively, the longest edge of $E$ and its inradius;

(b) Minimum angle condition: there exists $\alpha_0 > 0$, independent of $h$, such that for all $E \in \Omega_h$ we have $\alpha_E \geq \alpha_0$, where $\alpha_E$ is the minimal angle of $E$.

Similarly, in the VEM we need a set of geometrical assumptions to ensure approximation properties. The first pair of assumptions was proposed in [6] and remained untouched also in [2] and [21]. In these papers, the Authors assume that a real constant $\rho \in (0,1)$ exists, independent of $h$, such that two conditions hold:

Assumption G1
Every polygonal cell E ∈ Ω h is star-shaped with respect to a disc with radius ρh E . Assumption G2
For every polygonal cell $E \in \Omega_h$, the length $|e|$ of every edge $e \in \partial E$ satisfies $|e| \geq \rho h_E$.

Constant $\rho$ is often referred to as the mesh regularity constant or parameter. Condition G1 can be weakened in the following way, as specified in [6] and more accurately in [21]:

Assumption G1-weak
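Assumption G2 is easy to test computationally on a single element: it only compares the shortest edge length with the diameter. The sketch below (ours, purely illustrative) implements this check for a polygon given by its vertices.

```python
import numpy as np

def satisfies_G2(verts, rho):
    """Check |e| >= rho * h_E for every edge e of one polygonal cell."""
    v = np.asarray(verts, float)
    edges = np.linalg.norm(np.roll(v, -1, axis=0) - v, axis=1)
    d = np.linalg.norm(v[:, None, :] - v[None, :, :], axis=-1)
    hE = d.max()
    return bool(edges.min() >= rho * hE)

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
sliver = [(0, 0), (1, 0), (1, 1), (0.999, 1)]  # one edge of length 0.001
ok_square = satisfies_G2(square, rho=0.1)
ok_sliver = satisfies_G2(sliver, rho=0.1)
```

The square passes for $\rho = 0.1$, while the element with a tiny top edge fails, anticipating the "small edge" pathologies studied in Section 4.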
Every polygonal cell $E \in \Omega_h$ is the union of a finite number $N$ of disjoint polygonal subcells $E_1, \ldots, E_N$ such that, for $j = 1, \ldots, N$,

(a) element $E_j$ is star-shaped with respect to a disc with radius $\rho h_{E_j}$;

(b) elements $E_j$ and $E_{j+1}$ share a common edge.

Assumption G1 (or G1-weak) is the polygonal extension of the classical conditions for triangular meshes, with $h_E$ indicating the elemental diameter instead of the longest edge. Under assumption G1-weak, and therefore also under G1, it can be proved [21] that the simplicial triangulation of $E$ determined by the star-centers (the centers of the discs in G1 and G1-weak) of $E_1, \ldots, E_N$ satisfies the shape regularity and the minimum angle conditions. Moreover, for $1 \leq j, k \leq N$ it holds that $h_{E_j}/h_{E_k} \leq \rho^{-|j-k|}$.

These assumptions are more restrictive than necessary, but at the same time they are not particularly demanding, since they allow the method to work on very general decompositions. This fact was already mentioned in the very first papers. For example, in [2, Ahmad et al.] the Authors say that:

"Actually, we could get away with even more general assumptions, but then it would be long and boring to make precise (among many possible crazy decompositions that nobody will ever use) the ones that are allowed and the ones that are not."

In [17] and [22] assumption G1 is preserved, but assumption G2 is substituted by the alternative version:

Assumption G3
There exists a positive integer $N$, independent of $h$, such that the number of edges of every polygonal cell $E \in \Omega_h$ is (uniformly) bounded by $N$.

Assumption G2 implies assumption G3. However, assumption G3 is weaker than assumption G2, as it allows for edges arbitrarily small with respect to the element diameter. Both assumption pairs G1+G2 and G1+G3 imply that the number of vertices of $E$ and the minimum angle of the simplicial triangulation of $E$ given by connecting the vertices of $E$ and its star-center are controlled by $\rho$.

Another step forward in the direction of refining the geometrical assumptions has been made in [16]. In addition to assumption G1, the Authors imagine to unwrap the boundary $\partial E$ of each polygon $E \in \Omega_h$ onto an interval $I_E$ of the real line, obtaining a one-dimensional mesh $\mathcal{I}_E$. The collection of the unwrapped boundaries of all elements in a mesh $\Omega_h$ is denoted by $\{\mathcal{I}_E\}_{E \in \Omega_h}$. Moreover, each one-dimensional mesh $\mathcal{I}_E$ can be subdivided into a number of disjoint sub-meshes $\mathcal{I}^1_E, \ldots, \mathcal{I}^N_E$, corresponding to the edges of $E$ (we consider each $\mathcal{I}^i_E$ as a mesh as it may contain more than one edge, see Fig. 1). Then, the following condition is assumed.

Assumption G4
For every polygonal cell $E \in \Omega_h$, the family $\{\mathcal{I}_E\}_{E \in \Omega_h}$ is piecewise quasi-uniform, that is:

(a) each mesh $\mathcal{I}_E$ can be subdivided into at most $N$ disjoint sub-meshes $\mathcal{I}^1_E, \ldots, \mathcal{I}^N_E$, for some $N \in \mathbb{N}$;

(b) each sub-mesh $\mathcal{I}^i_E$, $i = 1, \ldots, N$, is quasi-uniform: the ratio between the largest and the smallest element in $\mathcal{I}^i_E$ is bounded from above by some $c \in \mathbb{R}^+$ independent of $h$.

Each polygon $E$ is in a one-to-one correspondence with a one-dimensional mesh $\mathcal{I}_E$, but a sub-mesh $\mathcal{I}^i_E \subset \mathcal{I}_E$ might contain more than one edge of $E$. This implies that assumption G4 does not require a uniform bound on the number of edges in each element and does not exclude the presence of small edges, cf. Fig. 1. For instance, the mesh families created by agglomeration, cracking, gluing, etc., of existing meshes are admissible according to G4.

Fig. 1. Examples of admissible elements according to assumption G4. Red dots indicate the vertices of the element.

In summary, the geometrical assumptions adopted in the literature combine G1 (or, equivalently, G1-weak) with either G2 or G3 or G4.
4. Breaking the geometrical assumptions
In this section, we test the behaviour of the virtual element method on a number of mesh "datasets", to stress one or more of the geometrical assumptions discussed in Section 3. We call a dataset a collection $D := \{\Omega_n\}_{n=0,\ldots,N}$ of meshes $\Omega_n$ covering the domain $\Omega = (0,1)^2$ such that

- the mesh $\Omega_{n+1}$ has smaller mesh size than $\Omega_n$ for every $n = 0, \ldots, N-1$;

- the meshes $\Omega_n$ follow a common refinement pattern, so that they contain similar polygons organized in similar configurations.

Note that each mesh $\Omega_n$ is uniquely identified by its size as $\Omega_h$; therefore, we can consider a dataset $D$ as a subset of a mesh family: $D = \{\Omega_h\}_{h \in \mathcal{H}'} \subset \mathcal{T}$, where $\mathcal{H}'$ is a finite subset of $\mathcal{H}$.

In addition to the violation of the geometrical assumptions, we are also interested in the behaviour of the VEM when the measures of mesh elements and edges scale in a nonuniform way in the refinement process. To this end, for each mesh $\Omega_n \in D$ we define the following quantities and study their trend for $n \to N$:

$A_n = \dfrac{\max_{E \in \Omega_n} |E|}{\min_{E \in \Omega_n} |E|}$ and $e_n = \dfrac{\max_{e \in \Omega_n} |e|}{\min_{e \in \Omega_n} |e|}$.  (26)

We specifically designed six datasets in order to consider several possible combinations of the geometrical assumptions of the previous section and the scaling indicators $A_n$ and $e_n$, as shown in Table 1. Note that most of the considered datasets do not fulfill any set of geometrical assumptions required by the convergence analysis found in the literature (see Appendix A).

4.1. Datasets definition
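The scaling indicators (26) are simple ratios of extremal element areas and edge lengths over a mesh; the following sketch (ours, with an illustrative mesh representation as a list of vertex arrays) computes them directly from the definition.

```python
import numpy as np

def polygon_area(v):
    """Shoelace area of a polygon given by its vertices."""
    v = np.asarray(v, float)
    x, y = v[:, 0], v[:, 1]
    return 0.5 * abs(np.sum(x * np.roll(y, -1) - np.roll(x, -1) * y))

def scaling_indicators(mesh):
    """A_n and e_n of (26) for a mesh given as a list of polygons."""
    areas, edges = [], []
    for poly in mesh:
        areas.append(polygon_area(poly))
        v = np.asarray(poly, float)
        edges.extend(np.linalg.norm(np.roll(v, -1, axis=0) - v, axis=1))
    areas, edges = np.array(areas), np.array(edges)
    return areas.max() / areas.min(), edges.max() / edges.min()

# a toy mesh: a unit square next to a half-size square
mesh = [[(0, 0), (1, 0), (1, 1), (0, 1)],
        [(1, 0), (1.5, 0), (1.5, 0.5), (1, 0.5)]]
A_n, e_n = scaling_indicators(mesh)
```

For this toy mesh the area ratio is 4 and the edge ratio is 2; on a dataset one evaluates these quantities for every $\Omega_n$ and studies their growth.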
We now introduce the datasets, describing for each of them how they are built, which geometrical assumptions they fulfill or violate, and how the indicators $A_n$ and $e_n$ depend on $n$ in the limit for $n \to N$. Each dataset is built around (and often named after) a particular polygonal element contained in it, which is meant to stress one or more assumptions or indicators. The detailed construction algorithms, together with the explicit computations of $A_n$ and $e_n$ for all datasets, can be found in Appendix B.

Reference dataset.
The first dataset, $D_{Triangle}$, contains only triangular meshes that are built by inserting a number of vertices in the domain through the Poisson Disk Sampling algorithm [24], and connecting them in a Delaunay triangulation (see Appendix B.1). The refinement is obtained by increasing the number of vertices generated by the Poisson algorithm. The meshes in this dataset do not violate any of the geometrical assumptions, and the indicators $A_n$ and $e_n$ are almost constant. We use $D_{Triangle}$ as the reference dataset to evaluate the other datasets by comparing the performance of the VEM over them.
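The defining property of Poisson disk sampling is that every pair of accepted points keeps a minimum mutual distance, which is what makes the resulting Delaunay triangles well shaped. A naive dart-throwing sketch of this idea (ours; the reference [24] uses a more efficient algorithm) on $(0,1)^2$:

```python
import numpy as np

def poisson_disk(r, n_trials=2000, seed=0):
    """Naive rejection sampling: accept a random point only if it keeps
    distance at least r from every previously accepted point."""
    rng = np.random.default_rng(seed)
    pts = []
    for _ in range(n_trials):
        p = rng.random(2)
        if all(np.linalg.norm(p - q) >= r for q in pts):
            pts.append(p)
    return np.array(pts)

pts = poisson_disk(r=0.1)
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
min_gap = (d + np.eye(len(pts))).min()  # mask the zero self-distances
```

By construction the minimum pairwise distance never drops below $r$, so the vertex set is quasi-uniform and the indicators $A_n$ and $e_n$ of the resulting triangulation stay nearly constant.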
Hybrid datasets.
Next, we consider some hybrid datasets, characterized by a progressive insertion in $\Omega$ of one or more identical polygonal elements (called the initial polygons), the rest of the domain being tessellated by triangles. These triangles are created by the library Triangle [43], bounding the area of the triangular elements with the area of the initial polygons. Steiner points [43] can be added, and the edges of the initial polygons are split when necessary by the insertion of new vertices. The refinement is iterative, with parameters to indicate size, shape and number of the initial polygons; details on this process are provided in Appendix B.2.

The top and bottom panels of Fig. 2 respectively show the datasets $D_{Maze}$ and $D_{Star}$, which we selected as they violate different geometrical assumptions. Other choices for the initial polygons are possible, for instance considering the ones in Benchmark [5].

A "maze" is a polygonal element $E$ spiralling around an external point. Progressively, each mesh in $D_{Maze}$ contains an increasing number of mazes $E$ with decreasing thickness as $n \to N$. Every $E$ is obviously not star-shaped, challenging assumption G1. Moreover, the length of the shortest edge $e$ of $E$ decreases faster than the diameter $h_E$ of the polygon. This fact implies, on one side, that the ratio $|e|/h_E$ of assumption G2 cannot be bounded from below by a constant $\rho$ that is independent of $h$, and, on the other side, that assumption G1-weak also fails. Indeed, even splitting $E$ into a finite number of rectangles, it is not possible to define a global radius $\rho$, independent of $h$, with respect to which the union of these rectangles is star-shaped according to G1, if the shortest edge of $E$ is constantly decreasing. Concerning the scaling indicators, we have $A_n \sim a^n$ for some constant $a > 1$, and $e_n \sim n \log(n)$.

Dataset $D_{Star}$ is built by inserting star-like polygonal elements, still denoted by $E$.
As n → N, the number of spikes of each E increases and the inner vertices of the star move towards the barycenter of the element. In this case, assumption G3 is not satisfied because the number of spikes in each E increases from mesh to mesh. Therefore, the total number of vertices and edges in a single element cannot be bounded uniformly.
Last, each star E is star-shaped with respect to the maximum circle inscribed in it. However, as shown in Fig. 3, the radius r of such a circle decreases faster than the elemental diameter h_E; therefore it is not possible to define a global ρ > 0 able to uniformly bound from below the quantity r/h_E: this violates assumption G1. In order to satisfy assumption G1-weak, we should split each E into a number of sub-polygons that are star-shaped according to G1. Independently of the way we partition E, the number of sub-polygons would always be bigger than or equal to the number of spikes in E, which is constantly increasing. So, the number of sub-polygons would tend to infinity, violating condition G1-weak. Last, both A_n and e_n scale linearly.
Fig. 2. Meshes at four refinement levels from datasets D_Maze (top) and D_Star (bottom).
Fig. 3. Ratio r/h_E for datasets D_Star and D_Jenga.
Mirroring datasets.
Another possible strategy to build a sequence of meshes whose elements are progressively smaller is to adopt a mirroring technique. In practice, we start with the first base mesh Ω̂_0, which coincides with the first computational mesh Ω_0. At every step n ≥ 1, we build a new base mesh Ω̂_n from the previous base mesh Ω̂_{n−1}. The computational mesh Ω_n is then obtained by mirroring Ω̂_n n times and resizing everything to fit the domain Ω. This construction allows us to obtain a number of vertices and degrees of freedom in each mesh that is comparable to that of the meshes at the same refinement level in datasets D_Maze and D_Star. Examples of meshes from mirrored datasets are presented in Fig. 4; examples of non-mirrored base meshes are visible in Appendix B.3. Algorithms for the construction of the following datasets are reported in Appendix B.3, while the mirroring algorithm is detailed in Appendix B.5.
In the case of the dataset D_Jenga, we build the n-th base mesh Ω̂_n as follows. We start by drawing two horizontal edges that split the domain (0,1)² into three horizontal rectangles, and then split one of these rectangles vertically into two equally-sized rectangles. This provides us with the base mesh Ω̂_0, which coincides with mesh Ω_0. At each subsequent refinement step n ≥ 1, we split the left-most rectangle in the middle of the base mesh Ω̂_{n−1} by adding a new vertical edge, and apply the mirroring technique to obtain Ω_n. This process is shown in the top panels of Fig. 4.
This mesh family breaks all assumptions G1 (and G1-weak), G2, G3, and G4. In fact, the length of the radius r of the biggest possible disc inscribed in a rectangle is equal to half of its shortest edge e. As shown in Fig. 3, the ratio |e|/h_E decreases unboundedly in the left rectangle E every time we split it, and consequently r/h_E decreases at a similar rate.
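The decay of |e|/h_E under this repeated splitting can be sketched with a small toy model (the rectangle dimensions below are illustrative only, not taken from the paper):

```python
import math

def edge_diameter_ratio(width, height):
    """|e| / h_E for an axis-aligned rectangle: shortest edge over diagonal."""
    return min(width, height) / math.hypot(width, height)

# Toy model of the D_Jenga refinement: a rectangle of fixed height is
# split in half at every step, so its width halves.
width, height = 0.2, 0.25
ratios = []
for n in range(6):
    ratios.append(edge_diameter_ratio(width, height))
    width /= 2.0

# The ratio decays monotonically with the refinement level, so no
# uniform lower bound rho > 0 independent of h can exist, breaking G2.
assert all(a > b for a, b in zip(ratios, ratios[1:]))
```

Since the inscribed radius of a rectangle is half its shortest edge, r/h_E shrinks at the same rate, which is the violation of G1 described above.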
This implies that a lower bound with a uniform constant ρ independent of h cannot exist for these ratios, thus breaking assumptions G1, G1-weak and G2. In addition, the number of edges of the top and bottom rectangular elements also grows unboundedly, against assumption G3. Last, the one-dimensional mesh of assumption G4, which is built on the elemental boundary of the top and bottom rectangular elements, cannot be subdivided into a finite number of quasi-uniform sub-meshes: either we have infinitely many sub-meshes or an unbounded edge ratio. Finally, we note that both A_n and e_n scale like 2^n.
In the case of the dataset D_Slices (Fig. 4, middle), we build the n-th base mesh Ω̂_n as follows. First, we sample a collection of points along the diagonal of the reference square [0,1]² connecting the vertices with coordinates (0,1) and (1,0), and connect them to the vertices (0,0) and (1,1). In particular, at each step n ≥ 1, the base mesh Ω̂_n contains the vertices (0,1) and (1,0), plus the vertices with coordinates (2^{−i}, 1 − 2^{−i}) and (1 − 2^{−i}, 2^{−i}) for i = 1, ..., n + 2. Then, we apply the mirroring technique.
The dataset D_Slices violates assumptions G1 and G1-weak. In fact, up to a multiplicative scaling factor depending on h, the length of the radius of the biggest inscribed disc in every element E decreases faster than the diameter of the element, which is constantly equal to √2 times the same scaling factor, thus violating G1. Furthermore, the dataset also breaks assumption G1-weak, because any finite subdivision of its elements would suffer the same issue. Instead, the other geometrical assumptions are satisfied. Since no edge is split, we find that e_n ∼ c, while A_n ∼ 2^n.
In D_Ulike (Fig. 4, bottom), we build Ω̂_n at each step n ≥ 1 by inserting n equispaced U-shaped continuous polylines inside the domain, creating as many U-like polygons.
Then, we apply the mirroring technique.
For arguments similar to the ones brought for D_Maze, D_Ulike does not satisfy assumptions G1, G1-weak and G2. For connectivity reasons, the lower side of the outer U-shaped polygon of every base mesh must be split into smaller segments when we apply the mirroring technique. Therefore, the number of edges of such cells cannot be limited from above, contradicting assumption G3. Nonetheless, assumption G4 is satisfied because this subdivision is uniform. Last, edge lengths scale exponentially and areas scale uniformly, i.e., e_n ∼ 2^n, A_n ∼ c.
Fig. 4. Meshes at four refinement levels from datasets D_Jenga (top), D_Slices (middle) and D_Ulike (bottom).
Table 1. Summary of the geometrical conditions violated and the asymptotic trend of the indicators A_n and e_n for all datasets (a is a constant with a > 1). Assumption G1-weak is not explicitly reported because all the considered datasets that violate G1 also violate G1-weak.

dataset   D_Triangle   D_Maze     D_Star   D_Jenga   D_Slices   D_Ulike
G1                     ×          ×        ×         ×          ×
G2                     ×                   ×                    ×
G3                                ×        ×                    ×
G4                                         ×
A_n       c            a^n        n        2^n       2^n        c
e_n       c            n log(n)   n        2^n       c          2^n

Multiple mirroring datasets.
As a final test, we modified datasets D_Jenga, D_Slices and D_Ulike in order to stress the indicators A_n and e_n harder. This is easily obtained by inserting four new elements at each step instead of one, as explained in Appendix B.4. The resulting datasets, D_Jenga4, D_Slices4 and D_Ulike4, are qualitatively similar to the mirroring datasets above. These datasets fulfill the same assumptions as their respective original versions, but the number of elements at each refinement step now increases four times faster; A_n and e_n grow correspondingly faster, although A_n remains constant for D_Ulike4 and e_n remains constant for D_Slices4.

4.2. Performance analysis
We solved the discrete Poisson problem (3) with the VEM (6) described in Section 2 for k = 1, 2, 3, over each mesh of each of the datasets defined in Section 4.1, using as ground truth the function

u(x, y) = sin(πx) sin(πy) / (2π²),   (x, y) ∈ Ω = (0,1)².   (27)

This function has homogeneous Dirichlet boundary conditions, a choice made deliberately to prevent the boundary treatment from influencing the approximation error. In Fig. 5 and Fig. 6 we plot the relative L²-norm ||u − u_h||_{0,Ω} / ||u||_{0,Ω} and the relative H¹-seminorm |u − u_h|_{1,Ω} / |u|_{1,Ω} (also called discrete energy norm) of the approximation error as the number of DOFs increases (that is, as n → N).
We also consider the condition numbers of the matrices G and H (with the notation adopted in [7]) as numerical indicators of the good behaviour of the method, and the identities ||Π∇_k D − I|| = 0 and ||Π0_k D − I|| = 0 as an estimate of the approximation error produced by the projectors Π∇_k and Π0_k, represented by the matrices Π∇_k and Π0_k, respectively. The computation of the projectors is obviously affected by the condition numbers of G and H, but the two indicators are not necessarily related. All of these quantities are computed element-wise, and the maximum value among all elements of the mesh is selected. Condition numbers and identity values for k = 1, 2, 3 are reported in Table 2 (for k < 2 we have Π0_k = Π∇_k).
First, the reference dataset D_Triangle vouches for the correctness of the VEM, as it performs perfectly according to the theoretical results both in the L² and in the H¹ norms (the slopes being indicated by the triangles) for all k values, maintaining reasonable condition numbers and optimal errors on the projectors Π0_k and Π∇_k.
For the hybrid datasets D_Star and D_Maze, errors decrease at the correct rate for most of the meshes, and only start deflecting for very high numbers of DOFs and very complicated meshes.
These deflections are not due to numerical problems, as in both datasets cond(G) and cond(H) remain within reasonable bounds (cf. Table 2). The projectors seem to work properly: ||Π∇_k D − I|| remains of the order of 10^{−8} at most, and ||Π0_k D − I|| stays small as well (cf. Table 2). In a preliminary stage of this work, we obtained similar plots (not reported here) using other hybrid datasets built in the same way, with polygons surrounded by triangles. In particular, we did not see big differences when starting with the other initial polygons of Benchmark [5], cf. the construction discussed in “Hybrid datasets” in Section 4.1.
On the meshes from “Mirroring datasets”, A_n or e_n may scale non-uniformly, as reported in Table 1 (indeed, they can scale exponentially). This reflects on cond(G) and cond(H), which grow up to 10^{10} and 10^{14} for D_Jenga in the case k = 3. Nonetheless, the discrepancy of the projector identities remains at most of the order of 10^{−5}, which is not far from what happened with D_Maze and D_Star. Dataset D_Jenga exhibits an almost perfect convergence rate, even though the L² and H¹ errors are bigger in magnitude than the ones measured for the hybrid datasets; D_Slices shows even bigger errors and a non-optimal convergence rate, and D_Ulike is the dataset with the poorest performance, although it still converges at a decent rate for k > 1.
Fig. 5. L²-norm and H¹-seminorm of the approximation errors of the reference, hybrid and mirroring datasets for k = 1, 2, 3.
In the setting of “Multiple mirroring datasets”, all datasets diverge badly (see Fig. 6), and this is principally due to very poor conditioning of the matrices involved in the calculations (see Table 2). Datasets D_Jenga4 and D_Slices4 maintain a trend similar to the ones in Fig. 5 until numerical problems cause cond(G) and cond(H) to explode, up to over 10^{39} for D_Jenga4 and 10^{18} for D_Slices4. In these conditions, the projection matrices Π∇_k and Π0_k become meaningless and the method diverges.
The situation slightly improves for D_Ulike4: cond(H) still reaches 10^{16}, but the discrepancies of Π∇_k and Π0_k remain acceptable. As a result, D_Ulike4 does not properly explode, but the approximation error and the convergence rate are much worse than those seen in Fig. 5.
Table 2. Summary of the numerical performance for all datasets. We report the base-10 logarithm of the original values of the condition numbers of G and H and of the discrepancies of the projection matrices Π∇_k and Π0_k. Note that for k < 2 we have Π0_k = Π∇_k.

dataset                        D_Triangle  D_Maze      D_Star      D_Jenga    D_Slices    D_Ulike     D_Jenga4  D_Slices4  D_Ulike4
cond(G), k = 1, 2, 3           0  2  5     2  3  6     1  3  6     1  5  10   2  4  6     1  4  7     6 18 31   6  8 10    2  6 11
cond(H), k = 1, 2, 3           2  5  7     2  5  8     3  6  9     4  9  14   2  8 10     3  7 10    13 26 39   2 15 18    5 10 16
||Π∇_k D − I||, k = 1, 2, 3   -13 -11 -9  -12 -10 -8  -12 -10 -8  -12 -8 -5  -12 -10 -9  -13 -10 -8  -9  3 13  -8 -6 -5  -13 -8 -5
||Π0_k D − I||                -10         -8          -7          -5         -5          -7          20         8         -4

Fig. 6. L²-norm and H¹-seminorm of the approximation errors of the reference and multiple mirroring datasets for k = 1, 2, 3.
As a preliminary conclusion, by simply looking at the previous plots we observe that the relationship between the geometrical assumptions respected by a certain dataset and the performance of the VEM is not particularly strong. In fact, we obtained reasonable results with meshes violating several assumptions.
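As a sanity check on the test problem of this section, the ground truth (27) can be verified to satisfy −Δu = sin(πx) sin(πy) with homogeneous Dirichlet data. The snippet below is an independent finite-difference check, not part of any VEM code:

```python
import math

def u(x, y):
    """Ground truth of (27): u = sin(pi x) sin(pi y) / (2 pi^2)."""
    return math.sin(math.pi * x) * math.sin(math.pi * y) / (2.0 * math.pi ** 2)

def laplacian_fd(f, x, y, h=1e-4):
    """Second-order central-difference approximation of the Laplacian."""
    return (f(x + h, y) + f(x - h, y) + f(x, y + h) + f(x, y - h)
            - 4.0 * f(x, y)) / h ** 2

# -Delta u equals the Poisson right-hand side f = sin(pi x) sin(pi y)
x, y = 0.3, 0.7
assert abs(-laplacian_fd(u, x, y)
           - math.sin(math.pi * x) * math.sin(math.pi * y)) < 1e-6

# homogeneous Dirichlet boundary values on (0, 1)^2
assert abs(u(0.0, 0.5)) < 1e-15 and abs(u(0.5, 1.0)) < 1e-15
```

The same `u` can be sampled to evaluate the relative error norms once a discrete solution u_h is available.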
5. Mesh Quality Indicator
We now aim at defining a mesh quality indicator, that is, a scalar function capable of providing insights into the behaviour of the VEM over a particular sequence of meshes before actually computing the approximated solutions.

5.1. Definition
We start from the geometrical assumptions defined in Section 3. Even if we showed that they are not strictly necessary for the convergence of the method, they can still be good indicators of the general quality of a sequence of meshes. From each geometrical assumption Gi, i = 1, ..., 4, we derived a scalar function ϱ_i : {E ⊂ Ω_h} → [0, 1], defined element-wise, which measures how well a polygon E ∈ Ω_h meets the requirements of Gi, from 0 (E does not respect Gi) to 1 (E fully respects Gi):

ϱ₁(E) = k(E) / |E| = { 1 if E is convex;  ∈ (0, 1) if E is concave and star-shaped;  0 if E is not star-shaped }   (28)

ϱ₂(E) = min( √|E|, min_{e ∈ ∂E} |e| ) / max( √|E|, h_E )   (29)

ϱ₃(E) = 3 / #{e ∈ ∂E} = { 1 if E is a triangle;  ∈ (0, 1) otherwise }   (30)

ϱ₄(E) = min_i ( min_{e ∈ I_E^i} |e| / max_{e ∈ I_E^i} |e| )   (31)

The operator k(E) in ϱ₁ measures the area of the kernel of a polygon E, defined as the set of points in E from which the whole polygon is visible. Therefore, ϱ₁(E) can be interpreted as an estimate of the value of the constant ρ from assumption G1 on the polygon E. Similarly, the function ϱ₂ returns an estimate of the constant ρ introduced in G2, expressed through the ratio |e|/h_E, with the insertion of the quantity √|E| in order to avoid pathological situations. Function ϱ₃ is a simple counter of the number of edges of a polygon, which penalizes elements with numerous edges, as required by G3. Last, we recall from Section 3 that the boundary of a polygon E can be considered as a one-dimensional mesh I_E, which can be subdivided into a number of disjoint sub-meshes I_E^1, ..., I_E^N, each one possibly containing more than one edge of E. In practice, we consider as a sub-mesh the collection of all edges whose vertices lie on the same line. For example, as shown in Fig.
7, the boundary of the top bar E in the base mesh of D_Jenga is represented by a mesh I_E = {I_E^1, I_E^2, I_E^3, I_E^4}, where I_E^1, I_E^2 and I_E^3 contain, respectively, the left, top and right edge of E, while I_E^4 contains all the aligned edges at the bottom of E. Function ϱ₄ returns the minimum ratio between the smallest and the largest element in every I_E^i, which is a measure of the quasi-uniformity of I_E imposed by G4.
Fig. 7. One-dimensional mesh I_E = {I_E^1, I_E^2, I_E^3, I_E^4} for the top bar E of a D_Jenga base mesh.
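To make the definitions concrete, here is a minimal sketch of the edge-based indicators (29) and (30) and of their combination as in (32). The kernel computation needed for ϱ₁ requires polygon clipping and is omitted; ϱ₁ and ϱ₄ are therefore passed in as precomputed values, and all function names are ours.

```python
import math

def _edges(poly):
    """Consecutive vertex pairs of a polygon given as a CCW list [(x, y), ...]."""
    return list(zip(poly, poly[1:] + poly[:1]))

def rho2(poly):
    """Indicator (29): min(sqrt|E|, shortest edge) / max(sqrt|E|, diameter h_E)."""
    area = 0.5 * abs(sum(x0 * y1 - x1 * y0           # shoelace formula
                         for (x0, y0), (x1, y1) in _edges(poly)))
    s = math.sqrt(area)
    shortest = min(math.dist(a, b) for a, b in _edges(poly))
    diameter = max(math.dist(p, q) for p in poly for q in poly)
    return min(s, shortest) / max(s, diameter)

def rho3(poly):
    """Indicator (30): 3 / (number of edges), equal to 1 only for triangles."""
    return 3.0 / len(poly)

def mesh_quality(elements, rho1, rho4):
    """Combination (32): `elements` is a list of polygons, `rho1` and `rho4`
    are lists of precomputed per-element values of (28) and (31)."""
    total = sum(r1 * (rho2(E) + rho3(E) + r4) / 3.0
                for E, r1, r4 in zip(elements, rho1, rho4))
    return math.sqrt(total / len(elements))

unit_square = [(0, 0), (1, 0), (1, 1), (0, 1)]
assert rho3(unit_square) == 0.75
assert abs(rho2(unit_square) - 1.0 / math.sqrt(2.0)) < 1e-12
```

Note how ϱ₁ multiplies every term of the sum, so a mesh of non-star-shaped elements scores 0 regardless of the other indicators.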
Combining together ϱ₁, ϱ₂, ϱ₃ and ϱ₄, we define a global function ϱ : {Ω_h}_h → [0, 1] which measures the overall quality of a mesh Ω_h. Given a dataset D, we can study the behaviour of ϱ(Ω_h) for Ω_h ∈ D and determine the quality of the dataset through the refinement process. In particular, we chose the combination ϱ₁ϱ₂ + ϱ₁ϱ₃ + ϱ₁ϱ₄ as it reflects the way in which the relative assumptions are typically imposed: G1 and G2, G1 and G3, or G1 and G4 (but not, for instance, G2 and G3 simultaneously):

ϱ(Ω_h) = √( 1/#{E ∈ Ω_h} · Σ_{E ∈ Ω_h} [ ϱ₁(E)ϱ₂(E) + ϱ₁(E)ϱ₃(E) + ϱ₁(E)ϱ₄(E) ] / 3 ).   (32)

We have ϱ(Ω_h) = 1 if and only if Ω_h is made only of equilateral triangles, ϱ(Ω_h) = 0 if and only if Ω_h is made only of non-star-shaped polygons, and 0 < ϱ(Ω_h) < 1 otherwise. All indicators ϱ₁, ϱ₂, ϱ₃ and ϱ₄, and consequently ϱ, only depend on the geometrical properties of the mesh elements; therefore their values can be computed before applying the VEM, or any other numerical scheme.
We point out that this approach is easily upgradeable to future developments: whenever new assumptions on the features of a mesh should come up, one simply needs to introduce in our framework a new function ϱ_i that measures the violation of the new assumption and insert it into the formulation of the general indicator ϱ in equation (32).

5.2. Results
We evaluated the indicator ϱ over the datasets defined for this work; the results are shown in Fig. 8.
Fig. 8. Indicator ϱ for all datasets.
If we compare Fig. 8(a) and 8(b) with Fig. 5 and 6, respectively, we can look for a correspondence between the behaviour of ϱ on a dataset, computed before solving the problem, and the approximation error actually produced by that dataset. Clearly, as ϱ depends neither on the polynomial degree k nor on the type of norm used, we will compare it to an average of the plots for the different k values and for the different norms (L² and H¹).
We preliminarily observe that, for an ideal dataset made of meshes containing only equilateral triangles, ϱ would be constantly equal to 1. We assume this value as a reference for the other datasets: the closer ϱ is to the line y = 1 on a dataset, the smaller is the approximation error that we expect that dataset to produce. Similarly, the more negative the slope of ϱ, the worse is the convergence rate that we expect over that dataset.
For meshes belonging to D_Triangle, ϱ is almost constant and very close to 1, thus foreseeing the excellent convergence rates and the low errors seen in every sub-figure of Fig. 5. The plots for D_Maze and D_Star in Fig. 8(a) are close to D_Triangle, hence we expect them to behave similarly. This is confirmed by Fig. 5: D_Maze and D_Star are almost coincident and very close to D_Triangle until the very last meshes, especially in the L² plots.
The D_Jenga plot in Fig. 8(a) anticipates a perfect convergence rate but greater error values with respect to the previous three, and again this behaviour is respected in Fig. 5. The curve relative to D_Slices in Fig. 8(a) is quite distant from the ideal value of 1. Importantly, it keeps decreasing from mesh to mesh, even if the plot allows us to assume that it may flatten within a couple more meshes. Looking at Fig.
5, we notice that this dataset produces an error significantly higher than the previous ones (D_Triangle, D_Maze, D_Star, D_Jenga), and in some cases the H¹ error convergence rate is significantly lower than the theoretical estimate. Last, the ϱ values in Fig. 8(a) predict huge errors and a completely wrong convergence rate for D_Ulike. This dataset is actually the one with the worst performance in Fig. 5, where it does not even always converge (see the case k = 1, H¹ seminorm).
As far as the multiple mirroring datasets are concerned, we notice that, since it only depends on the geometry of the elements, ϱ is not affected by numerical errors. The ϱ plot for D_Jenga4 in Fig. 8(b) is very similar to the plot obtained for D_Jenga in Fig. 8(a); therefore we should expect D_Jenga4 in Fig. 6 to perform similarly to D_Jenga in Fig. 5. This is actually the case, at least until the last mesh for k = 3, when numerical problems appear which ϱ is not able to predict. Also dataset D_Slices4 has a trend similar to D_Slices but decreases faster, reaching a lower ϱ value within a smaller number of meshes. As above, D_Slices4 performs similarly to D_Slices until the condition numbers explode, in the last two meshes for every value of k. Last, the ϱ plot of D_Ulike4 is significantly worse than the one of D_Ulike (and than any other), both in terms of distance from y = 1 and of slope. In Fig. 6 we can observe how, even if D_Ulike4 does not properly explode (as it suffers less from numerical problems, cf. Table 2), the approximation error and the convergence rate are the worst among all the considered datasets.
Summing up these results, we conclude that the indicator ϱ is able, up to a certain accuracy, to predict the behaviour of the VEM over the considered datasets, both in terms of error magnitude and convergence rate.
The prediction may be inaccurate in the presence of very similar performances (the case of D_Maze and D_Star), or in extreme situations in which the numerical problems become so significant as to overcome any influence that the geometrical features of the mesh could have on the performance (the last meshes of D_Jenga4 and D_Slices4).
6. Conclusions
In this work, we collected the regularity assumptions that are used in the literature to guarantee the convergence and the error estimates in the L² and H¹ norms for the VEM. These conditions allow great flexibility in the type and variety of polygons to be used in a mesh, but they still seem overestimated. Experimentally, we verified that the VEM works, with a good convergence rate, also on meshes and datasets that strongly violate these assumptions. We also built examples of datasets for which, violating the regularity assumptions significantly, the VEM shows a suboptimal convergence rate or diverges. Finally, we introduced new indicators to represent how much the regularity hypotheses are violated by a tessellation, and combined these indicators into a single score aimed at estimating how a dataset can be expected to perform in the VEM solution. The results obtained are encouraging, showing a satisfactory correlation between the errors and this indicator. Consequently, our approach provides an experimental score that is able to predict whether a tessellation of a domain can be critical for the VEM.
As possible future developments, we are interested in refining the regularity indicator proposed here, for example, to deduce new decomposition rules for a domain, with possible applications to mesh generators or to adaptive coarsening/refinement algorithms. We are also experimenting with similar indicators to evaluate the properties of polyhedral meshes.

Appendix A. Main convergence results in the literature of the VEM
This appendix is a short overview of the main results on convergence analysis from the VEM literature. For each selected paper, we report (where available) results for the abstract energy error, the H¹ error estimate and the L² error estimate, highlighting the geometrical assumptions considered. We may have changed the notation in a few points and introduced some very minor modifications in the theorem statements for consistency with our paper.

A.1. “Basic Principles of Virtual Element Methods” [6]

This is the very first paper about the VEM, where the method was introduced. The original formulation adopts the regular conforming virtual element space, which we still denote by V_hk as in (7) and (8) with a small abuse of notation:

V_hk := { v_h ∈ H¹(Ω) : v_h|_E ∈ V_hk(E) ∀ E ∈ Ω_h },   (A.1a)

where

V_hk(E) := { v_h ∈ H¹(E) : v_h|_{∂E} ∈ C⁰(∂E), v_h|_e ∈ P_k(e) ∀ e ∈ ∂E, Δv_h ∈ P_{k−2}(E) },   (A.1b)

and the dofi-dofi formulation S^{E,dd}_h defined in (20) is introduced for the stabilization bilinear form.
Although not explicitly used to derive the following abstract result for the convergence in the energy norm, the Authors introduce the mesh regularity assumptions G1 and G2 and the concept of simple polygon, which is a connected polygonal element with a non-intersecting boundary made of straight edges. This setting is the general and widely adopted framework of the virtual element formulation in many successive papers. Moreover, a broken H¹-seminorm is introduced for functions v ∈ H¹(Ω_h):

|v|_{h,1} := ( Σ_{E ∈ Ω_h} |∇v|²_{0,E} )^{1/2}.   (A.2)

Theorem A.1 (abstract energy error)
Under the k-consistency and stability assumptions defined in Section 2, cf. (21) and (22), the discrete problem has a unique solution u_h. Moreover, for every approximation u_I ∈ V_hk of u and every approximation u_π of u that is piecewise in P_k(Ω_h), we have

|u − u_h|_{1,Ω} ≤ C ( |u − u_I|_{1,Ω} + |u − u_π|_{h,1} + F_h ),   (A.3)

where C is a constant depending only on α_* and α^* (the constants in (22)), and, for any h, F_h = ||f − f_h||_{V_hk'} is the smallest constant such that (f, v) − ⟨f_h, v⟩ ≤ F_h |v|_{1,Ω} for all v ∈ V_hk. An L² estimate of the convergence rate can be derived with the usual duality argument techniques.

A.2. “Equivalent projectors for virtual element methods” [2]

In this paper, V_hk(E) is replaced by the enhanced VEM space (8) adopted in our work (in the paper it is called the “modified VEM space”), and the dofi-dofi stabilization is adopted. Under the geometrical assumptions G1 and G2, H¹ and L² error estimates are provided, while for the abstract energy error Theorem A.1 is reported.

Theorem A.2 (H¹ error estimate) Assuming G1, G2, let the right-hand side f belong to H^{k−1}(Ω), and let the exact solution u belong to H^{k+1}(Ω). Then

||u − u_h||_{1,Ω} ≤ C h^k |u|_{k+1,Ω},   (A.4)

with C a positive constant independent of h.

Theorem A.3 (L² error estimate) Assuming G1, G2 and with Ω convex, let the right-hand side f belong to H^k(Ω), and let the exact solution u belong to H^{k+1}(Ω). Then

||u − u_h||_{0,Ω} + h ||u − u_h||_{1,Ω} ≤ C h^{k+1} |u|_{k+1,Ω},   (A.5)

with C a constant independent of h.

A.3. “Stability analysis for the virtual element method” [17]
This paper is based on the regular conforming VEM space (A.1) defined in [6]. A new abstract energy error estimate is deduced, and the H¹ error is studied considering two different stabilization techniques. The Authors also introduce new analytical assumptions on the bilinear form a_h(·,·), replacing (22):

a^E_h(v_h, v_h) ≤ C₁(E) |||v_h|||²_E   for all v_h ∈ V_hk(E);   (A.6a)
|||q|||²_E ≤ C₂(E) a^E_h(q, q)   for all q ∈ P_k(E),   (A.6b)

where |||·|||_E is a discrete semi-norm induced by the stability term and C₁(E), C₂(E) are positive constants which depend on the shape and possibly on the size of E. Differently from the standard analysis of [6], where a bound of the kind (A.6b) is assumed for every v_h ∈ V_hk(E), here the estimate is only required for the polynomials q ∈ P_k(E). Thus, even when C₁(E) and C₂(E) can be chosen independent of E, on V_hk(E) the semi-norm induced by the stabilization term may be stronger than the energy a^E_h(·,·)^{1/2}.
For the following theorem, from the constants in (A.6) the Authors derive the quantities:

C̃(h) = max_{E ∈ Ω_h} {1, C₁(E)},   C̄(h) = max_{E ∈ Ω_h} {C₂(E)},   C_*(h) = (1/2) max_{E ∈ Ω_h} { min{1, C₂(E)^{−1}} }.

Theorem A.4 (abstract energy error)
Under the stability assumptions (A.6), let the continuous solution u of the problem satisfy u|_E ∈ V_E for all E ∈ Ω_h, where V_E ⊆ V_hk(E) is a subspace of sufficiently regular functions. Then, for every u_I ∈ V_hk and for every u_π such that u_π|_E ∈ P_k(E), the discrete solution u_h satisfies

|u − u_h|_{1,Ω} ≲ C_err(h) ( F_h + |||u − u_I||| + |||u − u_π||| + |u − u_I|_{1,Ω} + |u − u_π|_{h,1} ),   (A.7)

where the constant C_err(h) is given by

C_err(h) = max { 1, C̃(h) C̄(h), C̃(h)² / ( C_*(h) C̄(h) )^{1/2} }.

The Authors consider the stability term S^E_h(·,·) as the sum of two contributions: the first, S^{∂E}_h, involving the boundary degrees of freedom; the second, S^{oE}_h, involving the internal degrees of freedom. It can be shown that, for the following results, we can restrict the analysis to S^{∂E}_h, which can be expressed in the dofi-dofi form S^{∂E,dd}_h already defined in (20), or in the trace form proposed in [44]:

S^{∂E,tr}_h(v_h, w_h) = h_E ∫_{∂E} ∂_s v_h ∂_s w_h ds,   (A.8)

where ∂_s v_h denotes the tangential derivative of v_h along ∂E.

Theorem A.5 (H¹ error estimate with dofi-dofi stabilization) Assuming
G1, G3, let u ∈ H^s(Ω), s > 1, be the solution of the problem with S^E_h = S^{∂E,dd}_h. Let u_h be the solution of the discrete problem; then it holds

||u − u_h||_{1,Ω} ≲ C(h) h^{s−1} |u|_{s,Ω},   1 < s ≤ k + 1,   (A.9)

with C(h) = max_{E ∈ Ω_h} (log(1 + h_E / h_m(E))), where h_m(E) denotes the length of the smallest edge of E.

Corollary A.1
Assuming G1 and G2 instead, then C(h) ≲ 1 and therefore ||u − u_h||_{1,Ω} ≲ h^{s−1} |u|_{s,Ω}, 1 < s ≤ k + 1.

Theorem A.6 (H¹ error estimate with trace stabilization) Under Assumption G1, let u ∈ H^s(Ω), s > 3/2, be the solution of the problem with S^E_h = S^{∂E,tr}_h. Let u_h be the solution of the discrete problem; then it holds

||u − u_h||_{1,Ω} ≲ h^{s−1} |u|_{s,Ω},   3/2 < s ≤ k + 1.   (A.10)

A.4. “Some Estimates for Virtual Element Methods” [21]

In this paper, the enhanced VEM space is defined in a slightly different (but still equivalent) formulation with respect to (8):

V_hk(E) := { v_h ∈ H¹(E) : v_h|_{∂E} ∈ P_k(∂E), ∃ q_{v_h} (= −Δv_h) ∈ P_k(E) such that ∫_E ∇v_h · ∇w_h dx = ∫_E q_{v_h} w_h dx ∀ w_h ∈ H¹₀(E), and Π^{0,E}_k v_h − Π^{∇,E}_k v_h ∈ P_{k−2}(E) }.   (A.11)

Different stabilization types are considered, but the convergence results in this case do not depend on the choice of S^E_h. The geometrical assumptions required throughout the article are G1 and G2.

Theorem A.7 (abstract energy error)
Assuming
G1, G2, if f ∈ H^{s−1}(Ω) for 1 ≤ s ≤ k, then there exists a positive constant C depending only on k and ρ from G1 such that

|u − u_h|_{1,Ω} ≤ C ( inf_{v ∈ V_hk} |u − v|_{1,Ω} + inf_{w ∈ P_k(Ω_h)} |u − w|_{h,1} + h^s |f|_{s−1,Ω} ).   (A.12)

Theorem A.8 (H¹ error estimate) Assuming
G1, G2, if u ∈ H^{s+1}(Ω) for 1 ≤ s ≤ k, then there exist positive constants C₁, C₂ depending only on k and ρ from G1 such that

|u − u_h|_{1,Ω} + |u − Π∇_k u_h|_{h,1} ≤ C₁ h^s |u|_{s+1,Ω}.   (A.13)

Theorem A.9 (L² error estimate) Assuming
G1, G2, with Ω convex, if u ∈ H^{s+1}(Ω) for 1 ≤ s ≤ k, then there exists a positive constant C depending only on Ω, k and ρ from G1 such that

||u − u_h||_{0,Ω} ≤ C h^{s+1} |u|_{s+1,Ω}.   (A.14)

A.5. “Virtual element methods on meshes with small edges or faces” [22]

The Authors establish error estimates for virtual element methods on polygonal or polyhedral meshes that can contain small edges (d = 2) or small faces (d = 3). The VEM space is the enhanced space formulated as in (A.11), and the local stabilizing bilinear form is considered in the dofi-dofi formulation S^{E,dd}_h and in the trace formulation S^{E,tr}_h of (A.8). Also, the following constants are defined:

H := sup_{E ∈ Ω_h} ( max_{e ∈ ∂E} h_e / min_{e ∈ ∂E} h_e ),   α_h := { ln(1 + H) with S^{E,dd}_h;  1 with S^{E,tr}_h }.   (A.15)

The geometrical assumptions required throughout the article are G1 and G3. The Authors introduce a mesh-dependent energy norm ||·||_h := ( a_h(·,·) )^{1/2} and a functional Ξ_h : V_hk → P_k(Ω_h) given by

Ξ_h = { Π⁰₁ if k = 1;  Π⁰_{k−2} if k ≥ 2 }.   (A.16)

Theorem A.10 (abstract energy error) Assuming
G1, G3, let u and u_h be the solutions of the continuous and discrete problems. We have:

||u − u_h||_h ≲ inf_{w ∈ V_hk} ||u − w||_h + ||u − Π∇_k u||_h + √α_h ( ||u − Π∇_k u||_{h,1} + sup_{w ∈ V_hk} (f, w − Ξ_h w) / |w|_{1,Ω} ).   (A.17)

Theorem A.11 (H¹ error estimate) Assuming
G1, G3, if the solution u belongs to H^{s+1}(Ω) for some 1 ≤ s ≤ k, we have:

||u − u_h||_h ≲ √α_h h^s |u|_{s+1,Ω},   (A.18)

and

|u − u_h|_{1,Ω} + √α_h [ |u − Π∇_k u_h|_{h,1} + |u − Π⁰_k u|_{h,1} ] ≲ α_h h^s |u|_{s+1,Ω}.   (A.19)

Theorem A.12 (L² error estimate) Assuming
G1, G3, if the solution u belongs to H^{s+1}(Ω) for some 1 ≤ s ≤ k, we have:

||u − u_h||_{0,Ω} ≤ C α_h h^{s+1} |u|_{s+1,Ω}.   (A.20)

The notation A ≲ B indicates that A ≤ C B, with a positive constant C depending on the mesh regularity parameter ρ of G1 and on the degree k in the case of S^{E,tr}_h, and also on the maximum number of edges N of G3 in the case of S^{E,dd}_h.

A.6. “Sharper error estimates for Virtual Elements and a bubble-enriched version” [16]

In this paper, it is shown that the H¹ interpolation error |u − u_I|_{1,E} on each element E can be split into a boundary contribution and a bulk contribution. The idea is to decouple the polynomial order on the boundary and in the bulk of the element. Let k_o and k_∂ be two positive integers with k_o ≥ k_∂, and let k = (k_o, k_∂). For any E ∈ Ω_h the Authors define the generalized virtual element space:

V_hk := { v_h ∈ H¹(Ω) : v_h|_E ∈ V_hk(E) ∀ E ∈ Ω_h },   (A.21a)

where

V_hk(E) := { v_h ∈ H¹(E) : v_h|_{∂E} ∈ C⁰(∂E), v_h|_e ∈ P_{k_∂}(e) ∀ e ∈ ∂E, Δv_h ∈ P_{k_o−2}(E) }.   (A.21b)

For k_o = k_∂, the space V_hk(E) coincides with the regular virtual element space in (A.1). Moreover, given a function v ∈ H¹ ∩ H^s(Ω_h), on each element E ∈ Ω_h the Authors define the interpolant function I_h v as the solution of the following elliptic problem:

Δ I_h v = Π^{0,E}_{k_o−2} Δv in E,   I_h v = v_b on ∂E,

where v_b is the standard 1D piecewise polynomial interpolation of v|_{∂E}.

Theorem A.13 (abstract energy error)
Under Assumption G1, let u ∈ H^1(Ω) ∩ H^s(Ω_h) with s > 1 be the solution of the continuous problem and u_h ∈ V_h^{k} be the solution of the discrete problem. Consider the functions
\[
e_h = u_h - I_h u, \qquad e_I = u - I_h u, \qquad e_\pi = u - u_\pi, \qquad e_u = u_\pi - I_h u,
\]
where u_π ∈ \mathbb{P}_{k_o}(Ω_h) is the piecewise polynomial approximation of u defined in the Bramble-Hilbert Lemma. Then it holds that
\[
| u - u_h |_{1,\Omega} + \alpha\, a_h(e_h, e_h) \lesssim \alpha \sum_{E \in \Omega_h} h_E \| f - f_h \|_{0,E} + \alpha\, | e_\pi |_{1,\Omega_h} + \alpha\, | e_I |_{1,\Omega} + \alpha \sum_{E \in \Omega_h} \sigma_E, \tag{A.22}
\]
where α is the coercivity constant and
\[
\sigma_E := S^E_h\big( (I - \Pi^{\nabla,E}_k) e_u,\ (I - \Pi^{\nabla,E}_k) e_u \big).
\]

Theorem A.14 (H^1 error estimate with dofi-dofi stabilization) Assuming
G1, G4, let u ∈ H^1(Ω) be the solution of the continuous problem and u_h ∈ V_h^{k} be the solution of the discrete problem obtained with the dofi-dofi stabilization. Assume moreover that u ∈ H^{\bar{k}}(Ω_h) with \bar{k} = max{k_o + 1, k_∂ + 2} and f ∈ H^{k_o-1}(Ω_h). Then it holds that
\[
| u - u_h |_{1,\Omega} \lesssim \alpha \sum_{E \in \Omega_h} \left( (\alpha + N_E)^{1/2}\, h_E^{k_o} + h_{\partial E}^{k_\partial} \right), \tag{A.23}
\]
where h_{∂E} denotes the maximum edge length, α is the constant defined in (A.15), and N_E is the number of edges of E.

Theorem A.15 (H^1 error estimate with trace stabilization) Under Assumption G1, let u ∈ H^1(Ω) be the solution of the continuous problem and u_h ∈ V_h^{k} be the solution of the discrete problem obtained with the trace stabilization. Assume moreover that u ∈ H^{\bar{k}}(Ω_h) with \bar{k} = max{k_o + 1, k_∂ + 2} and f ∈ H^{k_o-1}(Ω_h). Then it holds that
\[
| u - u_h |_{1,\Omega} \lesssim \sum_{E \in \Omega_h} \left( h_E^{k_o} + h_{\partial E}^{k_\partial} \right). \tag{A.24}
\]

Appendix B. Dataset generation
In this appendix, we take a closer look at how the datasets presented in Section 4.1 are built. All algorithms have been written using CinoLib [35]. We recall that a dataset is a finite mesh sequence D = {Ω_n}_{n=0,…,N}, ordered decreasingly with respect to the mesh size. We also recall the definition of the quantities:
\[
A_n = \frac{\max_{E \in \Omega_n} |E|}{\min_{E \in \Omega_n} |E|} \qquad \text{and} \qquad e_n = \frac{\max_{e \in \Omega_n} |e|}{\min_{e \in \Omega_n} |e|}, \qquad \text{for } n = 0, \dots, N.
\]

B.1. Reference dataset
The first dataset, D_Triangle, contains only triangular meshes that are built by inserting a number of points in the domain and connecting them in a Delaunay triangulation. The point set is defined through the Poisson Disk Sampling algorithm proposed in [24], empirically adjusting the distance between points (called radius in the original paper) in order to generate meshes with the desired number of vertices. Points are then connected in a Delaunay triangulation using the well-known Triangle library [43], with the default parameter configuration. In D_Triangle, A_n and e_n are almost constant, as no constraints are imposed on the triangulation process.

B.2. Hybrid datasets
The construction of hybrid datasets is characterized by the insertion in Ω of one or more polygonal elements, and by a tessellation algorithm. Each hybrid dataset is built around (and named after) an initial polygon E = E(t_n), depending on a deformation parameter t_n ∈ [0, 1] which is used to deform E. This parameter directly depends on the mesh number (i.e., t_n → 1 as n → N), and it can be adjusted to improve or worsen the quality of the polygon E (the higher, the worse). At refinement step n, mesh Ω_n is created by inserting a number of identical copies of the deformed polygon E(t_n) (suitably resized) in the domain Ω, and tessellating the rest of Ω using the Triangle library. This procedure is detailed in Algorithm 1. Note that a whole family of other datasets may be generated by simply defining a new initial polygon. More examples can be found in [5].

The initial polygon E(t_n) for dataset D_Maze is the 10-sided element shown in Fig. B.1, whose vertex coordinates depend on t_n. As t_n → 1, the length of the shortest edge of E(t_n) goes to zero, and so does its area.

Fig. B.1. Initial polygons E(t_0), E(t_1), E(t_2) and E(t_3) from dataset D_Maze.

For the initial polygon E(t_n) of dataset D_Star (Fig. B.2), we first build an ī-sided regular polygon, with ī = 8(1 + ⌊10 t_n⌋) and vertices
\[
v_0 = (1, 0), \qquad v_i = \sigma(v_{i-1}), \quad \text{for } i = 1, \dots, \bar{i},
\]
being σ(v) the rotation centered at (0, 0) of vertex v by an angle of 2π/ī. Then we project every odd-indexed vertex towards the barycenter of E(t_n):
\[
v'_{2j+1} = s\, v_{2j+1}, \quad \text{for } j = 0, \dots, \bar{i}/2 - 1,
\]
where the projection factor s ∈ (0, 1) is gradually decreased until the angles at the even-indexed vertices become smaller than (1 − t_n)π/2. As t_n → 1 we have an increasing number of edges (from 8 to almost 90), and the minimum angle and the area decrease to zero, while the length of every edge increases.

Fig. B.2. Initial polygons E(t_0), E(t_1), E(t_2) and E(t_3) from dataset D_Star.

Once the initial polygon E(t_n) is defined, we can build the corresponding dataset through Algorithm 1. Some initial parameters are set a priori and remain untouched: the number of meshes in the dataset N, the area d_0 of the initial polygon at the first step, and the deformation range T = [t_min, t_max]. In this work we set N = 10, d_0 equal to a small fixed percentage of the domain area, and T = [0, t_max] with t_max < 1. Then we have three main parameters, e_n ∈ ℕ, t_n ∈ T and d_n ∈ (0, d_0), which respectively regulate the number of initial polygons inserted, the deformation of these polygons and their area. In particular, e_n increases inversely to d_n (Ω_{n+1} has twice as many polygons as Ω_n, with halved areas), so that the percentage of the domain covered by polygons (not triangles) is preserved all across the dataset.
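The D_Star construction can be sketched in a few lines of Python. This is an illustrative sketch, not the reference implementation: the vertex-count formula ī = 8(1 + ⌊10 t⌋) follows our reading of the text, and the projection factor s is fixed here, whereas the paper decreases s iteratively until the angle condition at the even-indexed vertices is met.

```python
import math

def star_polygon(t):
    """Sketch of the D_Star initial polygon E(t), for t in [0, 1).

    Builds a regular polygon with i_bar = 8 * (1 + floor(10 * t))
    vertices (v_0 = (1, 0), successive vertices rotated by 2*pi/i_bar
    about the origin), then pulls every odd-indexed vertex towards the
    barycenter by a factor s < 1.
    """
    i_bar = 8 * (1 + math.floor(10 * t))
    theta = 2 * math.pi / i_bar
    verts = [(math.cos(i * theta), math.sin(i * theta)) for i in range(i_bar)]
    s = 1.0 - t  # illustrative fixed factor, not the paper's adaptive rule
    return [(s * x, s * y) if i % 2 == 1 else (x, y)
            for i, (x, y) in enumerate(verts)]

star = star_polygon(0.5)
assert len(star) == 48  # 8 * (1 + 5) vertices at t = 0.5
```

As t grows, the polygon gains edges while the inward projection of the odd-indexed vertices sharpens the star's spikes, which is exactly the quality-degradation mechanism the dataset is designed to exercise.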
Due to the complicated shapes of some initial polygons, it may be hard to obtain exactly |E(t_n)| = d_n; therefore we only impose |E(t_n)| ≤ d_n. Several options are possible for setting e_n, t_n and d_n, and the speed at which these quantities vary strongly affects the geometrical quality of the meshes in the dataset. In our datasets, e_n increases exponentially, t_n increases linearly inside T, and d_n decreases exponentially. The exponential increase of the number of initial polygons inserted in the domain may lead to intersections between them, or with the domain boundary. To avoid this phenomenon, we inserted a while loop in Algorithm 1 which decreases d_n until no intersections occur: this ensures the stability of the algorithm, but in practice it activates only for very dense meshes and typically runs only a few iterations. Last, when all polygons have been inserted in Ω, the Triangle algorithm is used to generate a Delaunay triangulation. The already inserted polygons are treated as holes in the domain, and we set no limitation on the number of Steiner points that may appear in the triangulation process. We adopt the following parameter configuration, cf. [43]:
– q: no angles smaller than 20 degrees;
– c: enclose the convex hull with segments;
– l: use only vertical cuts in the divide-and-conquer algorithm (this switch is primarily of theoretical interest);
– a: maximum triangle area constraint, set equal to d_n.
Due to the freedom left to the Triangle algorithm, it is not possible to estimate A_n and e_n precisely for hybrid datasets; hence, the corresponding values reported in Table 1 have been measured a posteriori.

B.3. Mirroring datasets
The construction of D_Jenga, D_Slices and D_Ulike, at every step n ≥ 1, consists in a first algorithm for iteratively generating a base mesh Ω̂_n from the previous base mesh Ω̂_{n−1}, followed by a mirroring technique which returns the computational mesh Ω_n. The base mesh generation algorithm is different for each dataset (Algorithms 2, 3 and 4), while the mirroring algorithm (Algorithm 5) is common to all three datasets. Algorithms 2, 3 and 4 depend on two initial parameters: N indicates the number of meshes in the dataset and N_el indicates the number of elements to insert at each step. For mirroring datasets we set N_el = 1, while for multiple mirroring datasets (described in the next section) we set N_el = 4.

Algorithm 1 hybrid datasets generation
  define the initial polygon
  set the initial parameters N, d_0 and T
  for n = 0, …, N do
    set the main parameters: e_n = 2^n, t_n = n (t_max − t_min)/N, d_n = d_0/2^n
    use Poisson Disk Sampling with radius r = 1/√e_n to find a set of e_n points {p_i}_{i=1,…,e_n} in Ω
    generate polygon E(t_n) with |E(t_n)| ≤ d_n
    insert a copy of E(t_n) centered around every p_i
    while polygon E(t_n) intersects with other polygons or with the boundary of Ω do
      d_n ← d_n − ε
      generate a polygon E(t_n) with |E(t_n)| ≤ d_n
      insert a copy of E(t_n) centered around every p_i
    end while
    use Triangle to generate the Delaunay triangulation Ω_n of Ω, considering polygons E(t_n) as holes
    add Ω_n to D
  end for

In the D_Jenga base mesh shown in Fig. B.3 we have a top bar, a bottom bar and a right square, which are fixed independently of n, and n + 1 rectangles in the left part of the domain. At each refinement step n ≥ 1, a new rectangular element is created by splitting in two equal parts the leftmost rectangular element in the previous base mesh, and consequently updating the top and the bottom bars with new vertices and edges. Therefore, all elements in Ω̂_n, except for the top and the bottom bars, are rectangles with height equal to 1/2 and basis ranging from 1/4 to 2^{−(n+1)}. Once the base mesh Ω̂_n is generated, the mirroring algorithm is recursively applied n times to generate the computational mesh Ω_n, as described in Algorithm 2. When computing A_n and e_n, we can restrict our calculations to the base mesh, because these ratios are not affected by the mirroring algorithm.

Fig. B.3. Non-mirrored base meshes Ω̂_0, Ω̂_1, Ω̂_2, Ω̂_3 from dataset D_Jenga.

In particular, the longest edge in the base mesh is the upper edge of the top bar, which is never split, while the shortest edge is the basis of the leftmost rectangle, which halves at each step: this causes e_n ∼ 2^n. The top bar is also the element with the greatest area (together with the bottom bar and the right square), which is constantly equal to 1/4, while the leftmost rectangle has area (1/2) · 2^{−(n+1)} = 2^{−(n+2)}; therefore A_n ∼ 2^n.

Algorithm 2 D_Jenga dataset generation
  set the number of meshes N and the number of elements N_el
  for n = 0, …, N do
    top bar = {(0, 0.75), (1, 0.75), (1, 1), (0, 1)}
    bottom bar = {(0, 0), (1, 0), (1, 0.25), (0, 0.25)}
    right square = {(0.5, 0.25), (1, 0.25), (1, 0.75), (0.5, 0.75)}
    vector b = sample n · N_el equally spaced points inside the interval (0, 0.5)
    for i = 1, …, size(b) do
      rectangles[i] = {(b[i−1], 0.25), (b[i], 0.25), (b[i], 0.75), (b[i−1], 0.75)}
      insert vertex (b[i], 0.75) in top bar
      insert vertex (b[i], 0.25) in bottom bar
    end for
    generate mesh Ω̂_n = {top bar, bottom bar, right square, rectangles}
    for i = 1, …, n do
      Ω̂_n = mirror mesh(Ω̂_n)
    end for
    add the newly generated Ω_n = Ω̂_n to D_Jenga
  end for

In the D_Slices base meshes shown in Fig. B.4, at each step n ≥ 1 we add the vertices with coordinates (2^{−i}, 1 − 2^{−i}) and (1 − 2^{−i}, 2^{−i}) for i = 1, …, n + 2, and we connect them to the vertices (0, 1) and (1, 1) (respectively (0, 0) and (1, 0)). As a result, at each iteration we create a couple of new polygons, called upper slice and lower slice, symmetrical with respect to the diagonal, and we add them to the base mesh. The area of the two inner triangles (the biggest polygons in the base mesh) is constantly equal to 1/4.

Fig. B.4. Non-mirrored base meshes Ω̂_0, Ω̂_1, Ω̂_2, Ω̂_3 from dataset D_Slices.

For evaluating the area of the two most external polygons, we consider them as the union of the two identical triangles obtained by splitting the polygons along the diagonal (the one connecting the vertices with coordinates (0, 1) and (1, 0)). Then the smallest area in the base mesh is the sum of the areas of two equal triangles with basis √2/2 and height 2^{−n}/√2, and simple calculations lead to A_n ∼ 2^n. Last, we notice that all the edges in the base mesh have lengths between 1 and √2, because no edge is ever split, hence e_n ∼ c.

Algorithm 3 D_Slices dataset generation
  set the number of meshes N and the number of elements N_el
  for n = 0, …, N do
    vector b = [2^{−1}, 2^{−2}, …, 2^{−n·N_el}]
    for i = 1, …, size(b) do
      upper slices[i] = {(0, 1), (b[i], 1 − b[i]), (1, 1), (b[i+1], 1 − b[i+1])}
      lower slices[i] = {(0, 0), (1 − b[i], b[i]), (1, 0), (1 − b[i+1], b[i+1])}
    end for
    generate mesh Ω̂_n = {upper slices, lower slices}
    for i = 1, …, n do
      Ω̂_n = mirror mesh(Ω̂_n)
    end for
    add the newly generated Ω_n = Ω̂_n to D_Slices
  end for
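The base-mesh loop of Algorithm 3 can be sketched in Python as follows. Note that the corner coordinates (0, 1), (1, 1), (0, 0), (1, 0) are our reconstruction of the damaged source, so the sketch is illustrative rather than authoritative; it does show the key property that all the inserted vertices lie on the diagonal x + y = 1.

```python
def slices_base_mesh(n, n_el=1):
    """Sketch of the D_Slices base mesh at step n.

    Returns a list of quadrilaterals, each a list of (x, y) vertices.
    The vertices (b_i, 1 - b_i) and (1 - b_i, b_i) accumulate towards
    the corners along the diagonal x + y = 1 as i grows.
    """
    b = [2.0 ** (-i) for i in range(1, n * n_el + 1)]
    upper, lower = [], []
    for i in range(len(b) - 1):
        upper.append([(0, 1), (b[i], 1 - b[i]), (1, 1),
                      (b[i + 1], 1 - b[i + 1])])
        lower.append([(0, 0), (1 - b[i], b[i]), (1, 0),
                      (1 - b[i + 1], b[i + 1])])
    return upper + lower

mesh = slices_base_mesh(4)
assert len(mesh) == 6  # 3 upper + 3 lower slices at n = 4, N_el = 1
```

Since the dyadic abscissae b_i halve at each step while the slice edges incident to the corners stay O(1) long, this construction reproduces the A_n ∼ 2^n, e_n ∼ c behaviour discussed above.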
In the D_Ulike base meshes shown in Fig. B.5, at each step n ≥ 1 we insert n U-shaped continuous polylines inside the domain. We have an internal rectangle and a sequence of concentric equispaced U-like polygons culminating with the external U. This last element is not different from the other U-like polygons, but it is created separately, because we need to split its lower edge in order to match the base mesh that will appear below it during the mirroring algorithm. In every base mesh, the shortest edge is the one corresponding to the width of each U-like polygon, which measures 2^{−(n+1)}, and the longest edges are the left and right boundaries of the domain. This causes e_n ∼ 2^n.

Fig. B.5. Non-mirrored base meshes Ω̂_0, Ω̂_1, Ω̂_2, Ω̂_3 from dataset D_Ulike.

Denoting by e the shortest edge, the smallest area is the one of the internal rectangle, equal to e(1 + 2e), and the biggest area is the one relative to the external U, equal to (3 − 2e)e. We have
\[
A_n = \frac{(3 - 2e)e}{(1 + 2e)e} = \frac{3 - 2(2^{-(n+1)})}{1 + 2(2^{-(n+1)})} = \frac{3 - 2^{-n}}{1 + 2^{-n}} \sim c.
\]

Algorithm 4 D_Ulike dataset generation
  set the number of meshes N and the number of elements N_el
  for n = 0, …, N do
    vector b = sample n · N_el equally spaced points inside the interval (0, 0.5)
    for i = 1, …, size(b) do
      U-like polygons[i] = {(b[i], 1), (b[i], b[i]), (1 − b[i], b[i]), (1 − b[i], 1), (1 − b[i+1], 1), (1 − b[i+1], b[i+1]), (b[i+1], b[i+1]), (b[i+1], 1)}
    end for
    internal rectangle = {(b[end], 1), (b[end], b[end]), (1 − b[end], b[end]), (1 − b[end], 1)}
    external U = {(0, 1), (0, 0), (1, 0), (1, 1), (1 − b[0], 1), (1 − b[0], b[0]), (b[0], b[0]), (b[0], 1)}
    for b ∈ b do
      insert vertices (b, 0) and (1 − b, 0) in external U
    end for
    generate mesh Ω̂_n = {external U, U-like polygons, internal rectangle}
    for i = 1, …, n do
      Ω̂_n = mirror mesh(Ω̂_n)
    end for
    add the newly generated Ω_n = Ω̂_n to D_Ulike
  end for
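The mirroring step shared by Algorithms 2, 3 and 4 is easy to sketch in Python. The snippet below is a simplified stand-in for Algorithm 5 (which operates on CinoLib meshes): here a mesh is just a vertex list plus per-polygon index lists, and duplicated vertices along the shared boundaries are merged by exact coordinate matching, which suffices for the dyadic coordinates used in these datasets.

```python
def mirror_mesh(verts, polys):
    """Mirror a polygonal mesh on [0,1]^2 into four adjacent copies.

    verts: list of (x, y) tuples; polys: list of vertex-index lists.
    Each copy is translated by (0,0), (1,0), (1,1) or (0,1), then every
    coordinate is halved so the result lives in [0,1]^2 again.
    """
    new_verts, new_polys, index = [], [], {}
    for dx, dy in [(0, 0), (1, 0), (1, 1), (0, 1)]:
        copy_ids = []
        for (x, y) in verts:
            p = ((x + dx) / 2.0, (y + dy) / 2.0)
            if p not in index:          # merge duplicated vertices
                index[p] = len(new_verts)
                new_verts.append(p)
            copy_ids.append(index[p])
        for poly in polys:
            new_polys.append([copy_ids[i] for i in poly])
    return new_verts, new_polys

# one unit square -> four quarter squares on a 3x3 vertex grid
v, p = mirror_mesh([(0, 0), (1, 0), (1, 1), (0, 1)], [[0, 1, 2, 3]])
assert len(p) == 4 and len(v) == 9
```

A production implementation would merge duplicates with a geometric tolerance rather than exact equality; exact matching works here only because all coordinates are exactly representable dyadic fractions.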
B.4.
Multiple mirroring datasets
Multiple mirroring datasets are built with exactly the same algorithms as the mirroring datasets, changing the parameter N_el, which regulates the number of elements generated in each base mesh of the dataset. In particular, datasets D_Jenga4, D_Slices4 and D_Ulike4 are defined by setting N_el = 4. An example of a multiple mirroring dataset with N_el = 4 is shown in Fig. B.6, where the first three base meshes of D_Ulike4 are presented. The N_el value influences the ratios A_n and e_n: if A_n, e_n ∼ 2^n for N_el = 1, these quantities become asymptotic to 2^{4n} when N_el = 4, except for the cases in which the ratios were constant (see Table 1).

Fig. B.6. Non-mirrored base meshes Ω̂_0, Ω̂_1 and Ω̂_2 from dataset D_Ulike4.

B.5.
The mirroring algorithm
The mirroring algorithm (Algorithm 5) generates four adjacent copies of any polygonal mesh M defined over the domain Ω = [0, 1]². In CinoLib [35], a polygonal mesh can be defined by a vector verts containing all its vertices and a vector polys containing all its polygons. The result of the algorithm is therefore a polygonal mesh M′, generated by two vectors new-verts and new-polys, containing four times the number of vertices and polygons of M. When iterated a sufficient number of times, this construction allows us to obtain, in each mesh of the mirroring datasets, a number of vertices and degrees of freedom comparable to that of the meshes at the same refinement level in the hybrid datasets. Vector new-verts contains all vertices v ∈ verts copied four times and translated by the vectors (0, 0), (1, 0), (1, 1) and (0, 1), respectively. The coordinates of all vertices in new-verts are divided by 2, so that all new points lie in the same domain as the input mesh. Vector new-polys is simply vector polys repeated four times. A final cleaning step is required to remove duplicated vertices and edges that may arise in the mirroring process, for example if the initial mesh M has vertices along its boundary.

Algorithm 5 mesh mirroring
  input: base mesh M
  verts = verts(M), polys = polys(M)
  new-verts = verts
  for vertex v ∈ verts do
    insert vertex v + (1, 0) in new-verts
  end for
  for vertex v ∈ verts do
    insert vertex v + (1, 1) in new-verts
  end for
  for vertex v ∈ verts do
    insert vertex v + (0, 1) in new-verts
  end for
  for vertex v ∈ new-verts do
    v ← v/2
  end for
  new-polys = [polys, polys, polys, polys]
  M′ = mesh{new-verts, new-polys}
  remove duplicated vertices and edges from M′

Acknowledgements
This paper has been realised in the framework of ERC Project CHANGE, which has received funding from theEuropean Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program (grantagreement no. 694515). We are very grateful to Dr. L. Mascotto, University of Vienna, for useful suggestions.
References

[1] R. A. Adams and J. J. F. Fournier. Sobolev spaces. Pure and Applied Mathematics. Academic Press, 2nd edition, 2003.
[2] B. Ahmad, A. Alsaedi, F. Brezzi, L. D. Marini, and A. Russo. Equivalent projectors for virtual element methods. Computers & Mathematics with Applications, 66:376–391, September 2013.
[3] P. F. Antonietti, L. Beirão da Veiga, S. Scacchi, and M. Verani. A C¹ virtual element method for the Cahn-Hilliard equation with polygonal meshes. SIAM Journal on Numerical Analysis, 54(1):34–56, 2016.
[4] P. F. Antonietti, G. Manzini, and M. Verani. The conforming virtual element method for polyharmonic problems. Computers & Mathematics with Applications, 2019. Published online: 4 October 2019.
[5] M. Attene, S. Biasotti, S. Bertoluzza, D. Cabiddu, M. Livesu, G. Patanè, M. Pennacchio, D. Prada, and M. Spagnuolo. Benchmark of polygon quality metrics for polytopal element methods, 2019.
[6] L. Beirão da Veiga, F. Brezzi, A. Cangiani, G. Manzini, L. D. Marini, and A. Russo. Basic principles of virtual element methods. Mathematical Models & Methods in Applied Sciences, 23:119–214, 2013.
[7] L. Beirão da Veiga, F. Brezzi, L. D. Marini, and A. Russo. The Hitchhiker's guide to the virtual element method. Mathematical Models and Methods in Applied Sciences, 24(8):1541–1573, 2014.
[8] L. Beirão da Veiga, F. Brezzi, L. D. Marini, and A. Russo. H(div) and H(curl)-conforming VEM. Numerische Mathematik, 133(2):303–332, 2016.
[9] L. Beirão da Veiga, F. Brezzi, L. D. Marini, and A. Russo. Virtual element methods for general second order elliptic problems on polygonal meshes. Mathematical Models & Methods in Applied Sciences, 26(4):729–750, 2016.
[10] L. Beirão da Veiga, A. Chernov, L. Mascotto, and A. Russo. Basic principles of hp virtual elements on quasiuniform meshes. Mathematical Models & Methods in Applied Sciences, 26(8):1567–1598, 2016.
[11] L. Beirão da Veiga, K. Lipnikov, and G. Manzini. Arbitrary order nodal mimetic discretizations of elliptic problems on polygonal meshes. SIAM Journal on Numerical Analysis, 49(5):1737–1760, 2011.
[12] L. Beirão da Veiga, K. Lipnikov, and G. Manzini. The Mimetic Finite Difference Method, volume 11 of MS&A. Modeling, Simulations and Applications. Springer, 1st edition, 2014.
[13] L. Beirão da Veiga and G. Manzini. A virtual element method with arbitrary regularity. IMA Journal on Numerical Analysis, 34(2):782–799, 2014. DOI: 10.1093/imanum/drt018 (first published online 2013).
[14] L. Beirão da Veiga and G. Manzini. Residual a posteriori error estimation for the virtual element method for elliptic problems. ESAIM: Mathematical Modelling and Numerical Analysis, 49:577–599, 2015.
[15] L. Beirão da Veiga, G. Manzini, and M. Putti. Post-processing of solution and flux for the nodal mimetic finite difference method. Numerical Methods for Partial Differential Equations, 31(1):336–363, 2015.
[16] L. Beirão da Veiga and G. Vacca. Sharper error estimates for virtual elements and a bubble-enriched version. arXiv preprint arXiv:2005.12009, 2020.
[17] L. Beirão da Veiga, C. Lovadina, and A. Russo. Stability analysis for the virtual element method. Mathematical Models and Methods in Applied Sciences, 27(13):2557–2594, 2017.
[18] M. F. Benedetto, S. Berrone, S. Pieraccini, and S. Scialò. The virtual element method for discrete fracture network simulations. Computer Methods in Applied Mechanics and Engineering, 280:135–156, 2014.
[19] E. Benvenuti, A. Chiozzi, G. Manzini, and N. Sukumar. Extended virtual element method for the Laplace problem with singularities and discontinuities. Computer Methods in Applied Mechanics and Engineering, 356:571–597, 2019.
[20] S. Berrone, S. Pieraccini, S. Scialò, and F. Vicini. A parallel solver for large scale DFN flow simulations. SIAM Journal on Scientific Computing, 37(3):C285–C306, 2015.
[21] S. C. Brenner, Q. Guan, and L.-Y. Sung. Some estimates for virtual element methods. Computational Methods in Applied Mathematics, 17(4):553–574, 2017.
[22] S. C. Brenner and L.-Y. Sung. Virtual element methods on meshes with small edges or faces. Mathematical Models and Methods in Applied Sciences, 28(7):1291–1336, 2018.
[23] F. Brezzi, A. Buffa, and K. Lipnikov. Mimetic finite differences for elliptic problems. M2AN Math. Model. Numer. Anal., 43:277–295, 2009.
[24] R. Bridson. Fast Poisson disk sampling in arbitrary dimensions. SIGGRAPH Sketches, 10:1, 2007.
[25] A. Cangiani, E. H. Georgoulis, T. Pryer, and O. J. Sutton. A posteriori error estimates for the virtual element method. Numerische Mathematik, pages 1–37, 2017.
[26] A. Cangiani, V. Gyrya, G. Manzini, and O. Sutton. Chapter 14: Virtual element methods for elliptic problems on polygonal meshes. In K. Hormann and N. Sukumar, editors, Generalized Barycentric Coordinates in Computer Graphics and Computational Mechanics, pages 1–20. CRC Press, Taylor & Francis Group, 2017.
[27] A. Cangiani, G. Manzini, A. Russo, and N. Sukumar. Hourglass stabilization of the virtual element method. International Journal on Numerical Methods in Engineering, 102(3-4):404–436, 2015.
[28] O. Certik, F. Gardini, G. Manzini, L. Mascotto, and G. Vacca. The p- and hp-versions of the virtual element method for elliptic eigenvalue problems, 2020.
[29] O. Certik, F. Gardini, G. Manzini, and G. Vacca. The virtual element method for eigenvalue problems with potential terms on polytopic meshes. Applications of Mathematics, 63(3):333–365, 2018.
[30] C. Chinosi and L. D. Marini. Virtual element method for fourth order problems: L2-estimates. Computers & Mathematics with Applications, 72(8):1959–1967, 2016. Finite Elements in Flow Problems 2015.
[31] P. G. Ciarlet. The finite element method for elliptic problems. SIAM, 2002.
[32] F. Dassi and L. Mascotto. Exploring high-order three dimensional virtual elements: bases and stabilizations. Comput. Math. Appl., 75(9):3379–3401, 2018.
[33] D. A. Di Pietro, J. Droniou, and G. Manzini. Discontinuous skeletal gradient discretisation methods on polytopal meshes. Journal of Computational Physics, 355:397–425, 2018.
[34] K. Lipnikov, G. Manzini, and M. Shashkov. Mimetic finite difference method. Journal of Computational Physics, 257(Part B):1163–1227, 2014. Review paper.
[35] M. Livesu. cinolib: a generic programming header only C++ library for processing polygonal and polyhedral meshes. In Transactions on Computational Science XXXIV, pages 64–76. Springer, 2019.
[36] G. Manzini, K. Lipnikov, J. D. Moulton, and M. Shashkov. Convergence analysis of the mimetic finite difference method for elliptic problems with staggered discretizations of diffusion coefficients. SIAM Journal on Numerical Analysis, 55(6):2956–2981, 2017.
[37] G. Manzini, A. Russo, and N. Sukumar. New perspectives on polygonal and polyhedral finite element methods. Mathematical Models & Methods in Applied Sciences, 24(8):1621–1663, 2014.
[38] L. Mascotto. Ill-conditioning in the virtual element method: stabilizations and bases. Numer. Methods Partial Differential Equations, 34(4):1258–1281, 2018.
[39] D. Mora, G. Rivera, and R. Rodríguez. A virtual element method for the Steklov eigenvalue problem. Mathematical Models and Methods in Applied Sciences, 25(8):1421–1445, 2015.
[40] G. H. Paulino and A. L. Gain. Bridging art and engineering using Escher-based virtual elements. Structures and Multidisciplinary Optimization, 51(4):867–883, 2015.
[41] I. Perugia, P. Pietra, and A. Russo. A plane wave virtual element method for the Helmholtz problem. ESAIM: Mathematical Modelling and Numerical Analysis, 50(3):783–808, 2016.
[42] L. Ridgway Scott and S. C. Brenner. The mathematical theory of finite element methods. Texts in Applied Mathematics 15. Springer-Verlag New York, 3rd edition, 2008.
[43] J. R. Shewchuk. Triangle library, 2005.
[44] P. Wriggers, W. T. Rust, and B. D. Reddy. A virtual element method for contact. Computational Mechanics, 58(6):1039–1050, 2016.
[45] M. Zlámal. On the finite element method. Numerische Mathematik, 12(5):394–409, 1968.