Partition of Unity Extension of Functions on Complex Domains
Fredrik Fryklund*, Erik Lehto, Anna-Karin Tornberg
KTH Mathematics, Swedish e-Science Research Centre, 100 44 Stockholm, Sweden
Abstract
We introduce an efficient algorithm, called partition of unity extension or PUX, to construct an extension of desired regularity of a function given on a complex multiply connected domain in 2D. Function extension plays a fundamental role in extending the applicability of boundary integral methods to inhomogeneous partial differential equations with embedded domain techniques. Overlapping partitions are placed along the boundaries, and a local extension of the function is computed on each patch using smooth radial basis functions; a trivially parallel process. A partition of unity method blends the local extrapolations into a global one, where weight functions impose compact support. The regularity of the extended function can be controlled by the construction of the partition of unity function. We evaluate the performance of the PUX method in the context of solving the Poisson equation on multiply connected domains using a boundary integral method and a spectral solver. With a suitable choice of parameters the error converges as a tenth order method down to 10^{-14}.

Keywords:
Function extension, Embedded Domain, Boundary Integral Method, Radial Basis Function, Partition of Unity, Linear Elliptic Partial Differential Equation
1. Introduction
This paper addresses the issue of how to numerically construct an extension of a function defined on a complex domain. Without prior acquaintance with the topic it could appear simple. However, adding requirements on the global regularity of the extended function makes it a non-trivial task. Furthermore, it is often desirable that the extended function has compact support and that it can be efficiently constructed. One important application for function extension is to extend the applicability of integral equation methods for solving partial differential equations (PDEs). Integral equation methods have been shown to be both highly accurate and efficient when solving homogeneous constant coefficient elliptic partial differential equations in complex geometry. Function extension is a key component in a framework for solving non-homogeneous elliptic PDEs, and furthermore for solving time-dependent equations such as the heat equation, and for extending from the solution of Stokes equations to Navier-Stokes equations [1, 2, 3].

The idea is to avoid solving the full linear elliptic inhomogeneous PDE, given on a complex domain, by splitting the problem into two. The right-hand side is extended to a geometrically simpler domain, such as a box, and as part of the full solution, a particular solution is computed on this simpler domain. The homogeneous problem is solved on the original domain with modified boundary conditions, such that the total solution is the sum of the two. This general idea has been used also for other numerical methods, and this group of methods is often referred to as embedded boundary techniques. For simple geometries an arsenal of powerful solution methods is available, but their accuracy is often limited by the global regularity of the extended function. Several different function extension methods, used in this context, have been suggested in recent years [4, 5, 6, 7, 8, 9].

* Corresponding author
Email addresses: [email protected] (Fredrik Fryklund), [email protected] (Erik Lehto), [email protected] (Anna-Karin Tornberg)

The approaches to function extension suggested by Askham et al. [4] and Stein et al. [6] are both based on the same idea, but quite different in implementation. Given f on a domain Ω̄, use the values of f on the boundary of Ω as the Dirichlet data for the exterior Laplace problem

∆w = 0 in R² \ Ω̄, (1)
w = f on ∂Ω. (2)

Then a globally continuous extension f^e of f is given by

f^e(y) = f(y) for y ∈ Ω, and f^e(y) = w(y) for y ∈ R² \ Ω. (3)

In [4], this problem is solved by an integral equation based method, whereas in [6] it is coupled with a so called immersed boundary system of equations. This extension will be in C⁰, but will not have compact support. To obtain higher regularity the biharmonic equation can be solved instead of (1)-(2), or even a polyharmonic equation. Their respective works show the complexity of the problem and what price is considered reasonable to pay to obtain an extension. Moreover, both observe that the accuracy of the solution to the associated PDE relies heavily on the regularity of said extension f^e.

Another alternative, given by [9], is to extend the solution to the PDE, instead of the right hand side, and use an active penalty method. A function extension of global regularity k is created by matching normal derivatives up to degree k of the given boundary data. The extension is expressed in a basis that decays rapidly with the distance to the boundary. Function extension can also be achieved by Fourier continuation methods: in 1D the domain of interest is embedded into a larger one and a smooth periodic extension is constructed, which yields an appropriate setting for spectral methods. Dimensional splitting is used for higher dimensional problems. See [10, 11, 12, 13] and the references therein.
The methods and the associated references included above are by no means a complete list of methods for function extension. In all cases mentioned above, no higher than a fourth order method is obtained for solving the Poisson equation.

In this paper, we present a new method, Partition of Unity Extension, or PUX, to compute a compactly supported extension of a function. We assume that the values of a function f are known at all points of a regular grid that fall inside a domain Ω, and we want to compute the values of the extended function on this regular grid outside of Ω. The domain Ω can be multiply connected. In the PUX method, overlapping circular partitions or patches are placed along the boundaries such that each is intersected by the boundary ∂Ω, and a local extension is defined on each patch. A second layer of patches is placed outside of the first, on which the local values are defined to be zero. These zero patches enter the definition of the partition of unity function that is used to blend the local extensions into a global one, imposing compact support and regular decay to zero. The choice of functions used to build up the partition of unity function determines the regularity of the extended function.

The local extensions on the patches intersected by ∂Ω are determined using radial basis functions (RBFs). RBF centres are placed irregularly with the same distribution for each circular patch, and an RBF interpolant is determined via a least squares problem, using the values of f on the regular points inside Ω. The values of the local extension are then computed on the regular points inside the patch that fall outside of Ω. By always centring the patches at grid points of the regular grid, a matrix A can be precomputed once and be used for all patches.
For each patch, an identification is made of which points are inside and outside Ω, and a local least squares problem with the relevant rows of A is solved with the inside data, a trivially parallel task.

To assess the quality of a function extension it must be considered in its context of use, as there is no unique extension over the boundary of a domain. In this paper, we use it to solve the Poisson equation, using an integral equation approach. Thus the results by Askham et al. in [4] are suitable for comparison.

The paper is organised as follows: in section 2 we detail how we solve the Poisson equation assuming an extension of the right hand side f is known. In section 3 we introduce the concepts and techniques from RBF interpolation and the partition of unity method that we need to introduce our method. The PUX method is presented in section 4, where a function extension is constructed. Thereafter follows section 5 with a discussion of sources of errors associated with function extension and solving the Poisson equation. Section 6 is a summary, combined with implementation details, for solving the Poisson equation with the techniques described in this paper. In section 7 we perform numerical experiments and carefully discuss parameter choices. Finally, our conclusions and an outlook are presented in section 8.
2. The Poisson equation on irregular domains in a boundary integral method environment
To understand why a function extension is useful for solving linear elliptic PDEs, and why its construction is worth pursuing, we sketch the solution procedure. Consider the Poisson equation with Dirichlet boundary conditions, stated as

∆u = f in Ω, (4)
u = g on ∂Ω, (5)

where Ω is a simply or multiply connected compact domain in R² with a Lipschitz continuous boundary, meaning corners but not cusps are allowed. We will refer to the Poisson equation as the full problem. Introduce homogeneous and particular solutions u_H and u_P such that u = u_H + u_P. Let B be a simple box domain of size [-L, L]² which embeds Ω, and let E be the complement of Ω̄ relative to B. We refer to E as the extension domain. If we know an extension f^e of f from Ω to B, such that f^e = f in Ω and supp(f^e) ⊂ B, the particular solution u_P can be computed from

∆u_P = f^e in R², (6)

together with a condition at infinity that selects the free-space solution, (7)

assuming f^e can be constructed numerically, which is the focus of this paper. The homogeneous solution u_H is obtained by solving the Laplace equation with modified Dirichlet boundary conditions, i.e.

∆u_H = 0 in Ω, (8)
u_H = g - u_P|_∂Ω on ∂Ω. (9)

The numerical treatment of the Laplace equation on Ω is discussed in subsection 2.1. It can be solved for the homogeneous solution to high precision very effectively with a boundary integral method. In our specific setting, the Dirichlet boundary data will be modified according to the solution of the free-space Poisson equation (6), as given in (9). If the extension f^e of f is known and has compact support, one can truncate the free-space Poisson equation (6)-(7) to a box; then a numerical solution can efficiently be obtained by fast spectral methods, as described in subsection 2.2. Therefore it is imperative that f^e has high global regularity. Note that both (6)-(7) and (8)-(9) are relatively simple problems to solve numerically to high accuracy at a low computational cost, if said extension f^e is simple to construct.
The solution to the full problem (4)-(5) is given by u = u_H + u_P.

2.1. Homogeneous solution

Consider the Dirichlet problem for the Laplace equation,

∆u_H = 0 in Ω, (10)
u_H = g̃ on ∂Ω, (11)

with Ω assumed to be simply connected for now. The numerical solution to a linear elliptic homogeneous PDE is attractive to compute with a boundary integral method, as it can be both accurate and efficient. The boundary integral formulation becomes more compact if expressed with a complex representation, which is natural to adopt since we consider the plane R². The use of complex notation will be isolated to the treatment of (10)-(11), and all expressions have an equivalent counterpart in R². We will move freely between the two representations as we see fit. In C points will be denoted τ and z, and the real and imaginary parts of any τ ∈ C are denoted Re{τ} and Im{τ}.

By introducing an unknown density μ: ∂Ω → R, the solution u_H of (10)-(11) can be represented with a double layer potential

u_H(z) = (1/2π) ∫_∂Ω μ(τ) Im{dτ/(τ - z)}, ∀z ∈ Ω. (12)

Clearly it is sufficient to know μ on the boundary ∂Ω to have an expression for u_H anywhere in Ω. Thus only ∂Ω needs to be discretised, and the dimensionality of (10)-(11) has been reduced by one. To obtain the unknown density function μ we treat (12) in the sense of a Cauchy principal value as we pass to the limit z → z₀ for z₀ ∈ ∂Ω and apply the boundary condition. This yields

g̃(z₀) = μ(z₀)/2 + (1/2π) ∫_∂Ω μ(τ) Im{dτ/(τ - z₀)}, z₀ ∈ ∂Ω. (13)

The integrand has a well-defined limit as τ → z₀. This is a Fredholm integral equation of the second kind. If ∂Ω is Lipschitz continuous the integral operator is compact, thus the Fredholm alternative states that (13) is uniquely solvable for μ [14]. This property is inherited by the discretised system, which is obtained by dividing the boundary ∂Ω into panels of equal length and applying on each the 16-point Gauss-Legendre quadrature rule.
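To make the discretisation concrete, the following minimal sketch (not the authors' code) solves (12)-(13) with Nyström's method on a smooth starfish-shaped boundary. For simplicity it uses the global periodic trapezoidal rule instead of panel-based 16-point Gauss-Legendre quadrature, and a direct solve instead of GMRES; the boundary curve, test function and resolution are illustrative choices.

```python
import numpy as np

# Starfish boundary tau(t) = (1 + 0.3 cos 5t) e^{it}, traversed counterclockwise.
n = 400
t = 2 * np.pi * np.arange(n) / n
h = 2 * np.pi / n
r, rp, rpp = 1 + 0.3 * np.cos(5 * t), -1.5 * np.sin(5 * t), -7.5 * np.cos(5 * t)
tau = r * np.exp(1j * t)
taup = (rp + 1j * r) * np.exp(1j * t)          # tau'(t)
taupp = (rpp + 2j * rp - r) * np.exp(1j * t)   # tau''(t)

# Dirichlet data from a known harmonic function u(z) = Re{1/(z - z0)}, z0 outside.
z0 = 2.0 + 0.5j
g = (1.0 / (tau - z0)).real

# Nystrom discretisation of (13): g = mu/2 + (1/2pi) PV int mu(tau) Im{dtau/(tau - z)}.
# The diagonal uses the limit Im{tau''/(2 tau')} of the kernel.
K = (taup[None, :] / (tau[None, :] - tau[:, None] + np.eye(n))).imag
np.fill_diagonal(K, (taupp / (2 * taup)).imag)
A = 0.5 * np.eye(n) + h / (2 * np.pi) * K
mu = np.linalg.solve(A, g)                     # the paper uses GMRES here

# Evaluate the double layer potential (12) at an interior point away from the boundary.
z = 0.1 + 0.2j
u = h / (2 * np.pi) * np.sum(mu * (taup / (tau - z)).imag)
```

For targets well separated from ∂Ω the computed value agrees with Re{1/(z - z0)} to high accuracy, reflecting the spectral accuracy of the trapezoidal rule on smooth closed curves; close to ∂Ω the special quadrature of [16] would be needed instead.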
The total number of Gauss-Legendre panels is denoted N_∂Ω. We use Nyström's method, meaning the collocation points are set to coincide with the quadrature points z_j, j = 1, ..., 16 N_∂Ω. The resulting linear system is

g̃(z_i) = (1/2π) Σ_{j=1, j≠i}^{16 N_∂Ω} μ(z_j) ω_j Im{τ'_j/(τ_j - z_i)} + μ(z_i) ((ω_i/2π) Im{τ''_i/(2τ'_i)} + 1/2), i = 1, ..., 16 N_∂Ω, (14)

with ω_j denoting quadrature weights. Here τ'_i and τ''_i mean that the boundary ∂Ω has been parametrised as τ = τ(t) and differentiated at t_i once and twice, respectively. The corresponding matrix-vector representation of (14) yields a dense system matrix. A property of the discretisation of a second kind integral equation is that the condition number does not increase with finer resolution of ∂Ω. We solve for μ at the quadrature points with GMRES.

Once μ is known, u_H can be computed at the points z of interest in Ω by evaluating a numerical approximation of the integral in (12). The integrand, however, becomes nearly singular for z close to ∂Ω, which causes a loss of accuracy if the regular panel based 16-point Gauss-Legendre rule is used. This can to some extent be remedied by increasing the number of panels along ∂Ω, as in Figure 1, where the centre image has twice as many panels as the left. Since the error increases exponentially fast as the boundary is approached [15], this cannot provide accurate solutions arbitrarily close to the boundary. Excellent results can however be obtained by applying special quadrature techniques. We use one such technique, namely an interpolatory quadrature method introduced by Ojala and Helsing [16]. This method is based on expanding the complex density μ as a polynomial in the complex variable τ over one panel and invoking recursive formulas to analytically evaluate all integrals that are needed.
The coefficients of the polynomial are the solution of a Vandermonde system. It is however possible to solve the transposed problem instead, for which the right hand side depends only on the panel geometry and the location of the evaluation point, and not on the discrete values of μ on the panel. Together with a scaling and rotation of the panel, this controls the ill-conditioning of the problem.

If an evaluation point is within a panel length from a panel's midpoint, we check if the special quadrature method needs to be invoked. Here, we use one of the above mentioned analytically known integrals from expanding μ and check the accuracy of the numerically obtained value (with the Gauss-Legendre quadrature rule) for this integral. It contains the same near singularity as (12) and is a good indicator of the accuracy that will be obtained by the regular quadrature.

Figure 1: Example of pointwise relative error, in log₁₀ scale, in part of the computational domain when solving the Laplace equation using: normal quadrature with 35 panels (left), normal quadrature with 70 panels (centre) and special quadrature with 35 panels (right).

So far only simply connected domains have been considered. To fix notation for the Laplace equation on multiply connected domains, let Ω be a finitely connected region with boundary ∂Ω consisting of (κ +
1) closed curves. These are denoted ∂Ω₀, ∂Ω₁, ..., ∂Ω_κ, where ∂Ω₀ forms the outer boundary of the region Ω. Let the cavity Ω_k be the set enclosed by ∂Ω_k, disjoint from Ω, for k = 1, 2, ..., κ. For such a setting the integral equation associated with (12) has a nontrivial nullspace; consequently the Fredholm alternative is not applicable. Thus modifications to the method described above are in order. We apply the remedy suggested in [17] and the references therein. Instead of (12), the solution to the Laplace equation on multiply connected domains is

u_H(z) = (1/2π) ∫_∂Ω μ(τ) Im{dτ/(τ - z)} + Σ_{k=1}^{κ} A_k log|z - z_k|, ∀z ∈ Ω, (15)

where z_k is an interior point of Ω_k, and the coefficients A_k and the density μ are unknown. By imposing

∫_{∂Ω_k} μ(τ) d|τ| = 0, for k = 1, 2, ..., κ, (16)

where the integration is with respect to arc length, the resulting boundary integral equation

g̃(z₀) = μ(z₀)/2 + (1/2π) ∫_∂Ω μ(τ) Im{dτ/(τ - z₀)} + Σ_{k=1}^{κ} A_k log|z₀ - z_k|, z₀ ∈ ∂Ω,
∫_{∂Ω_k} μ(τ) d|τ| = 0, for k = 1, 2, ..., κ, (17)

satisfies the Fredholm alternative. Hence the coefficients A_k and the density μ can be solved for simultaneously via (17). Observe that the special quadrature techniques for evaluating (15) are the same as for the simply connected case, as the points z_k are chosen so that log|z - z_k| poses no numerical difficulties to evaluate.

While not necessary for solving the Laplace equation, a useful aspect of (12) is that for μ ≡ 1,

(1/2π) ∫_∂Ω Im{dτ/(τ - z)} = 1 if z ∈ Ω, 1/2 if z ∈ ∂Ω, 0 if z ∉ Ω̄. (18)

Thus the framework for solving the Laplace equation can be used to identify points as in Ω, on the boundary of Ω, or outside Ω̄.
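Identity (18) gives a cheap numerical inside-outside test, which is also needed later when PUX classifies grid points. A minimal sketch (starfish boundary and trapezoidal rule chosen purely for illustration):

```python
import numpy as np

# Discretise a starfish boundary tau(t) = (1 + 0.3 cos 5t) e^{it}.
n = 400
t = 2 * np.pi * np.arange(n) / n
h = 2 * np.pi / n
tau = (1 + 0.3 * np.cos(5 * t)) * np.exp(1j * t)
taup = (-1.5 * np.sin(5 * t) + 1j * (1 + 0.3 * np.cos(5 * t))) * np.exp(1j * t)

def indicator(z):
    """Trapezoidal rule for (1/2pi) int Im{dtau/(tau - z)}, cf. (18)."""
    return h / (2 * np.pi) * np.sum((taup / (tau - z)).imag)

inside = indicator(0.1 + 0.2j)    # approximately 1 for a point in Omega
outside = indicator(2.0 + 2.0j)   # approximately 0 for a point outside
```

For points that are not very close to ∂Ω the computed indicator is accurate essentially to rounding, so thresholding it is a robust classification.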
2.2. Particular solution

The particular solution to the full problem (4)-(5) is acquired by solving the free-space Poisson equation (6)-(7). A key component is an extension f^e of f; its construction with PUX is described in section 4. If f^e is known, such that f^e = f in Ω and supp(f^e) ⊂ B, the solution to (6)-(7) is known to be

u_P(x) = ∫_{R²} K(‖x - y‖) f^e(y) dy = ∫_B K(‖x - y‖) f^e(y) dy, (19)

where K is the Green's function. In R² one has

K(r) = (1/2π) log(r). (20)

For a derivation see any basic textbook on the subject, e.g. [18]. To handle (19) numerically we apply the approach suggested by Vico et al. in [19]: since f^e has compact support in B, the Green's function can be truncated without changing the value of u_P(x) in (19) for x ∈ B. Replace K with

K̃(r) = (1/2π) log(r) for r < R = 3L, and K̃(r) = 0 otherwise, (21)

where [-L, L]² is the box B containing supp(f^e); 3L is slightly larger than the greatest possible distance between two points in B. Denote the Fourier transform of K̃ as K̂̃, which has a closed analytical expression, namely

K̂̃(k) = R log(R) J₁(Rk)/k - (1 - J₀(Rk))/k², (22)

where J₀ and J₁ are the Bessel functions of the first kind of order 0 and 1, respectively. Here k = ‖k‖ with k ∈ R², thus K̂̃ is radial. Furthermore, the limit as k goes to 0 is well-defined:

lim_{k→0} K̂̃(k) = (R²/4)(2 log(R) - 1). (23)

The Fourier coefficients for u_P are

û_P(k) = K̂̃(k) f̂^e(k), k ∈ R², (24)

and the solution u_P(x) is given by the inverse Fourier transform

u_P(x) = (1/(2π)²) ∫_{R²} û_P(k) e^{i k·x} dk. (25)

Clearly it is beneficial to have K̂̃(k) instead of K̂(k) = -1/k², which has a singularity at k =
0. The integral in (25) can be discretised with the trapezoidal rule, which allows the usage of an inverse FFT to compute it. It will be sufficiently resolved in k-space if the upsampling of the FFT is done with a factor greater than or equal to 5/2 = (2L + R)/(2L) [20]. Hence, to numerically obtain u_P on a uniform grid in B, we apply FFTs to transform the values of f^e and K̃ given on a uniform grid. The FFTs are zero padded to obtain the required upsampling, and an inverse FFT is applied after multiplication of the coefficients as in (24). This zero padding avoids pollution from periodic copies of the truncated kernel, and can be reduced to a factor of two after a precomputation step for K̂̃, see [19]. This is the standard minimum oversampling to compute an aperiodic convolution. This yields an FFT based method for the free-space Poisson equation that converges spectrally as the uniform grid over B is refined, if f^e is smooth.

To solve the full problem (4)-(5), the solution u_P is needed on ∂Ω to define the modified boundary conditions for the Laplace equation (8)-(9). A non-uniform FFT with the Fourier coefficients û_P obtained above allows us to evaluate u_P on ∂Ω [21].

3. Background for interpolation with RBFs and partition of unity methods

The PUX method is based on the structure of interpolation with radial basis functions (RBFs) and partition of unity methods. This section aims to present a comprehensive background for these techniques, before the function extension is discussed in section 4. An informal description of RBFs and partition of unity methods for the interpolation problem can be summarised as follows. The interpolation domain Ω is covered with overlapping partitions. On each partition a local interpolant, based on f, is constructed with RBFs. These are combined into a global interpolant via a weighted sum, where the weights sum to 1 everywhere in Ω.

First we cover interpolation through collocation with RBFs for some general data set.
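Before turning to the RBF machinery, the free-space solver of subsection 2.2 can be made concrete. The sketch below is not the authors' implementation: it applies (21)-(25) under the sign convention K(r) = (1/2π) log(r) for ∆u_P = f^e as in (6), a rapidly decaying Gaussian stands in for a PUX-extended right-hand side, and the grid size, truncation radius R = 3L and padding factor 3 ≥ 5/2 are illustrative.

```python
import numpy as np
from scipy.special import j0, j1

# Box B = [-L, L]^2; choose u_exact = exp(-a r^2), so f = Laplacian(u_exact) is
# known analytically and is numerically compactly supported in B for large a.
L, N, a = 1.0, 200, 40.0
h = 2 * L / N
x = -L + h * np.arange(N)
X, Y = np.meshgrid(x, x, indexing="ij")
r2 = X**2 + Y**2
u_exact = np.exp(-a * r2)
f = (4 * a**2 * r2 - 4 * a) * np.exp(-a * r2)

R = 3 * L        # truncation radius, slightly larger than the diameter of B
M = 3 * N        # zero padding by a factor 3 >= 5/2
f_pad = np.zeros((M, M))
f_pad[:N, :N] = f

# Radial frequencies on the padded grid and the truncated kernel (22)-(23).
k1 = 2 * np.pi * np.fft.fftfreq(M, d=h)
KX, KY = np.meshgrid(k1, k1, indexing="ij")
k = np.hypot(KX, KY)
ks = np.where(k == 0.0, 1.0, k)                    # dummy value to avoid 0/0
Khat = R * np.log(R) * j1(R * ks) / ks - (1.0 - j0(R * ks)) / ks**2
Khat[0, 0] = (R**2 / 4) * (2 * np.log(R) - 1)      # k -> 0 limit (23)

# Multiply in Fourier space, (24)-(25), and restrict to the original grid.
u = np.fft.ifft2(Khat * np.fft.fft2(f_pad)).real[:N, :N]
err = np.max(np.abs(u - u_exact))
```

With a smooth, effectively compactly supported right-hand side the measured error decreases spectrally as N grows, as stated above.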
Then we discuss the somewhat more involved least squares approach, which is the one we will apply. To interpolate over a set of scattered data with a collocation approach, we set each data point to be the centre for an RBF and enforce known function values at these points. This results in a linear system which we solve to obtain interpolation weights. Once it is solved, we can evaluate the interpolant at other desired locations.

To be more precise: let s_{f,Ω} be the interpolant of f ∈ C^k(Ω) on the domain Ω ⊂ R², such that

s_{f,Ω}(y) = Σ_{j=1}^{M} λ_j φ_j(y), (26)

where φ_j(y) = φ(‖y - x_j‖) is the RBF centred at x_j, and y is some point in R². RBFs are univariate functions from R₊ to R, where R₊ = {x ∈ R | x ≥ 0}, that disregard geometry. Thus they are mesh free by construction. A given RBF forms a basis for the local approximation space N_φ(Ω), referred to in the literature as the native Hilbert space. Its nature depends on the RBF of choice; for more details see [22].

One option to obtain the unknown interpolation coefficients λ_j ∈ R is by collocation at the RBF centres, i.e. for x_i ∈ {x_j}_{j=1}^{M} we require f(x_i) = Σ_{j=1}^{M} λ_j φ_j(x_i). This is equivalent to solving

ΦΛ = F (27)

for Λ = {λ_j}_{j=1}^{M}, where Φ = {φ_i(x_j)}_{i,j=1}^{M} and F = {f_j}_{j=1}^{M} with f_j = f(x_j). The method of obtaining the coefficients by solving (27) with Gaussian elimination will be referred to as RBF-Direct.

An RBF φ is said to be positive definite if the resulting interpolation matrix Φ is positive definite. The matrix Φ is symmetric due to the nature of RBFs. There is a plethora of positive definite RBFs, and we consider only such functions.

Associated with certain RBFs is the shape parameter ε ∈ R₊, which sets the flatness of the RBF. Let r = ‖y - x_j‖ and consider for example the Gaussian

φ(r) = e^{-(εr)²}, (28)

for which N_φ(Ω) is a subset of the Sobolev space W^m(Ω) for any m.
A function f is an element of N_φ(Ω) for a given ε if the square root of its Fourier transform decays faster than the Fourier transform of (28). For further reading see [22]. As ε goes to zero, the RBF becomes increasingly flat and consequently the columns in Φ become increasingly linearly dependent. Thus the condition number of Φ grows and the weights {λ_j}_{j=1}^{M} become large and oscillatory. However, these are purely numerical effects, since the RBFs do indeed form the basis for an approximation space with nice properties. In fact, the accuracy of the approximation increases as ε goes to zero [23], but this effect may not be discernible due to high condition numbers, see Figure 2. The appropriate range for ε is problem dependent and no general optimal value can be chosen, although according to theory, the smaller ε the smaller the error in infinite precision.

Table 1: Wu functions with compact support in r ∈ [0,
1) in C^{k,k̃}: k is the regularity at the origin and k̃ is the regularity at the edge of the support. They are positive definite in R^d for d ≤ 3. Furthermore, (·)₊ = max(0, ·).

Apart from Gaussians, other common choices are various compactly supported RBFs, such as the so called Wu functions. In the literature, Wu functions are tabulated after their order of regularity at the origin, see for example [24] and Table 1. In our application the regularity at the boundary of the support is of larger interest, where it tends to be greater. This will become apparent in subsequent sections. Thus we denote the space of continuous functions of regularity k at the origin, but k̃ at the edge of the support, as C^{k,k̃}.

Given a compactly supported RBF of regularity 2k at the origin, the corresponding native Hilbert space is the Sobolev space W^{d/2+k+1/2}(Ω), with Ω ⊂ R² [25]. If the compact support is small enough they yield a sparse structure for Φ. However, this is a trade-off against accuracy, which increases with support size [24]. We will return to the choice between global and local RBFs.
Figure 2: Illustration of a typical interpolation problem, for RBF-Direct with ε ∈ (0, 5] and for RBF-QR: interpolation error (left) and condition number of Φ (right) as functions of the shape parameter ε.

As mentioned above, the ill-conditioning of Φ as the shape parameter ε goes to zero is a purely numerical effect, and it can be partially circumvented by replacing RBF-Direct with stable methods, such as RBF-QR. This paper does not aim to fully explain the RBF-QR algorithm; instead we refer to [23] for details. The main idea is that the approximation space spanned by positive definite RBFs contains good solutions to the interpolation problem (27), therefore a change of basis can remove the ill-conditioning of Φ. This change of basis is accomplished by the RBF-QR algorithm, and its positive effects are clear from Figure 2. Note that for large values of ε and large distances between RBF centres, RBF-QR is neither needed nor worthwhile, and should hence not be used.

As we proceed to the function extension problem, we will have values of f defined on points from a uniform FFT grid. It is a well-known fact in the RBF community that a uniform point distribution for RBF centres yields a larger condition number. The efficiency of RBF-QR is hampered as well, as it would be limited to collocating at about 100 uniform grid points.

To obtain a more robust and efficient method, we decouple the centres and the data points, and find an interpolant in the least squares sense, as is done in [26]. Consider a non-uniform distribution of M RBF centres at locations {x_j}_{j=1}^{M}, but the values of f given at N uniform point locations {x̃_i}_{i=1}^{N}. See Figure 3 for a union of uniform data locations and non-uniform RBF centres. We seek an approximation of f at the non-uniform points, that is F = {f(x_j)}_{j=1}^{M}, in order to create an interpolant with non-uniform RBF centres and data as in (27). Introduce Φ̃ = {φ_j(x̃_i)}_{i,j=1}^{N,M} and F̃ = {f(x̃_i)}_{i=1}^{N}, where Φ̃ is an N × M matrix. Recall that Φ and Λ are of size M × M and M ×
1, respectively. Then, replacing Λ by Φ⁻¹F from (27), we have

Φ̃Λ = F̃ ⇔ Φ̃Φ⁻¹F = F̃, (29)

and let A = Φ̃Φ⁻¹. Obtaining the unknown F from (29) is a least squares problem, and numerically we require that N is sufficiently larger than M to yield good solutions. This approach allows us to use non-uniform RBF centres, which significantly improves the stability, but still lets the data be represented on the uniform grid. Furthermore, the ill-conditioning of Φ, associated with the shape parameter ε, is reduced by the use of RBF-QR. The RBF-QR algorithm is intended for a formulation like (29), since it computes A, rather than Φ⁻¹; A acts as a mapping of data from non-uniformly to uniformly distributed locations. With F known we can interpolate to obtain values F_y at some locations {y_k}_{k=1}^{K} by creating A_y = Φ_y Φ⁻¹, where Φ_y = {φ_j(y_k)}_{k,j=1}^{K,M}; then F_y = A_y F, which we set to be s_{f,Ω}(y_k) for k = 1, ..., K.

Figure 3: Schematic figure of RBF centres and uniformly distributed data points.
The distribution of non–uniform RBF centres is the same as in [26] and is a
Vogel node distribution, defined as

x_j = √(j/M) ( cos(jπ(3 - √5)), sin(jπ(3 - √5)) ), j = 1, ..., M, (30)

and is quasi-uniform. In Figure 3 the 28 mint green dots are Vogel nodes. Such a distribution of RBF centres is near-optimal, and the RBF-QR algorithm performs well up to about 400 centres. Recall that in order to obtain good approximations by solving the least squares system, N must be sufficiently larger than M.

A drawback when using Gaussians is their global nature; consequently the resulting interpolation matrix Φ is large and dense. An alternative would be to use compactly supported RBFs, but to obtain good approximations their support needs to be large, resulting again in a non-sparse matrix structure. This encourages the implementation of a partition of unity method, which is often used in combination with RBFs. It decouples the size of Φ and the choice of the RBF used for interpolation.

The idea of a partition of unity approach is to combine local approximations s_{f,Ω_i} of the function f on partitions Ω_i, i = 1, ..., N_p. These partitions form the set {Ω_i}_{i=1}^{N_p}. They are overlapping and constitute a covering of Ω, meaning

Ω̄ ⊂ ∪_{i=1}^{N_p} Ω̄_i. (31)

Associated with this covering we construct a family of compactly supported and continuous functions {w_i}_{i=1}^{N_p} such that supp(w_i) = Ω_i for every i = 1, ..., N_p and

Σ_{i=1}^{N_p} w_i(y) ≡ 1, ∀y ∈ ∪_{i=1}^{N_p} Ω̄_i. (32)

Each weight function w_i corresponds to a partition Ω_i in the covering. They are constructed as a weighted average of compactly supported RBFs, which we denote ψ to distinguish them from the Gaussians φ. This is often referred to in the literature as Shepard's method [27]:

w_i(y) = ψ_i(y) / Σ_{j=1}^{N_p} ψ_j(y), (33)

where i ∈ 1, 2, ..., N_p. An RBF ψ_i is centred at p_i ∈ R² and its support defines the associated partition Ω_i.
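A small sketch of the Shepard weights (33). A Wendland function is used as a stand-in for the Wu functions of Table 1 (an assumption for illustration; any sufficiently regular, compactly supported ψ works), and the partition layout is arbitrary:

```python
import numpy as np

def psi(r):
    # Wendland's C2 function, compactly supported on [0, 1) and positive
    # definite in R^3; here a stand-in for a Wu function.
    return np.where(r < 1.0, (1.0 - r) ** 4 * (4.0 * r + 1.0), 0.0)

# Partition centres p_i on a line, with radius Rp chosen so neighbours overlap.
P = np.array([[0.0, 0.0], [0.6, 0.0], [1.2, 0.0], [1.8, 0.0]])
Rp = 0.5

def weights(y):
    """Shepard weights w_i(y) = psi_i(y) / sum_j psi_j(y), eq. (33)."""
    v = psi(np.linalg.norm(P - y, axis=1) / Rp)
    return v / v.sum()

w = weights(np.array([0.3, 0.1]))   # a point in the overlap of two partitions
```

By construction the weights are nonzero only for the partitions covering the point and they sum to one, which is (32); blending local approximations with these weights gives (34) below.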
The set {p_i}_{i=1}^{N_p} contains all partition centres. We will use the same distribution of non-uniform RBF centres for {φ_j}_{j=1}^{M} for every partition. Thus the centres {x_j}_{j=1}^{M} can be selected for near-optimal approximation properties, of which the Vogel node distribution is one example, see (30). The approximation s_f of f on Ω is a weighted sum of the local approximations s_{f,Ω_i}, i.e.

s_f(y) = Σ_{i=1}^{N_p} w_i(y) s_{f,Ω_i}(y), (34)

where the local approximations are obtained through interpolation with RBFs:

s_{f,Ω_i}(y) = Σ_{j=1}^{M} λ_j φ_j(y), (35)

see subsection 3.1. From now on we will always use Gaussians as RBFs for interpolation, and all partitions are discs. As we proceed with function extension, the local approximations are only required for partitions partially in Ω. For partitions entirely in Ω constructing a local extension is redundant, since f is known there.
4. Function extension by PUX
In this section we describe how to construct a compactly supported function extension of high global regularity. To make the description of the PUX method accessible, we will first extend a function with extrapolation, using tools presented in the previous section. For this extension we do not control how f^e approaches zero. Thereafter we apply the full PUX method to construct a compactly supported function extension with a chosen regularity.

4.1. Extrapolation

For extrapolation of f defined on a bounded and closed domain Ω̄ ⊂ R², the following procedure is used. Partition centres {p_i}_{i=1}^{N_p}, i.e. centres for the compactly supported RBFs we denote ψ_i, are distributed uniformly with respect to arc length along the boundary ∂Ω, as in the left image in Figure 4. Each partition centre is associated with a partition for which a local extension is constructed. These are referred to as extension partitions. The partition centres are moved to the closest uniform grid point in Ω, such that if all extension partitions have the same radius R_p, a single matrix A from expression (29) can be precomputed and reused for every partition Ω_i to approximate the local interpolants s_{f,Ω_i}, i = 1, ..., N_p. Doing so is vital for the efficiency of PUX, as RBF-QR is required for accuracy but is 10 to 15 times more computationally expensive than RBF-Direct.

We introduce some useful notation: let y_E = {y ∈ E} and y_Ω = {y ∈ Ω}, where E is the complement of Ω̄ relative to B. Any combination of subindices simply means points in the associated intersection, e.g. y_{E,i} = {y_E ∈ Ω_i}. The point distributions of different partitions differ only with regard to which points belong to Ω or E. See Figure 4 and compare with Figure 3; observe that we have RBF centres outside of Ω. For some partition centre p_i we precompute a single matrix A based on all uniform points and Vogel nodes within distance R_p of the centre.
For a given partition $\Omega_i$ we identify the uniform points $y_{\Omega_i}$ as belonging to $\Omega$ or to $E$. By simple row manipulation we rewrite (29) as

$$\begin{bmatrix} A_\Omega \\ A_E \end{bmatrix} F = \begin{bmatrix} \tilde F_\Omega \\ \tilde F_E \end{bmatrix}, \qquad (36)$$

where the subscripts denote which set the points $y_{\Omega_i}$ belong to. Since $\tilde F_\Omega$ is known, we can solve $A_\Omega F = \tilde F_\Omega$ for $F$ in the least squares sense. Thus the number of defined interior values of $f$, i.e. the size of $\tilde F_\Omega$, must be greater than the number of RBF centres, i.e. Vogel nodes, in order to have an overdetermined least squares problem. The local extension is computed as

$$s_{f,\Omega_i}(y_{\Omega_i,E}) = \tilde F_E = A_E F. \qquad (37)$$

To ease notation we refer to $s_{f,\Omega_i}(y_{\Omega_i,E})$ as $f^e_i$ when no ambiguity can arise. The global extension $f^e$ is the weighted combination of all local extensions evaluated for $\cup_i y_{\Omega_i,E}$, but for $\cup_i y_{\Omega_i,\Omega}$ we set $f^e = f$.

Figure 4: Left: Schematic figure of the distribution of extension partitions along $\partial\Omega$ for a complex domain. The green markers correspond to RBF centres and the distribution is repeated for every partition. Right: Schematic figure of identifying uniformly distributed points as inside or outside $\Omega$.

Figure 5: Illustration of various steps of function extension with PUX. (a) Function $f$ from (46) on a starfish shaped domain $\Omega$. (b) Extrapolation of (46) as in subsection 4.1. (c) Blended with a layer of zero partitions, see subsection 4.2. (d) High regularity function extension $f^e$ with compact support, see (38).

Figure 6: Schematic image for function extension from a star shaped domain $\Omega$ given by the black border. Observe that in this figure the partitions are not centred at uniform grid points. The red overlapping circles are the partitions.
The yellow section corresponds to the uniform data points used for creating the local extension $f^e_i$, the blue section to points where $f^e = 0$.

4.2. Compactly supported function extension

To construct a compactly supported function extension $f^e$ of $f$, which is continuous or of higher regularity as it is extended by zero outside of its support, we modify the extension obtained in the previous subsection. Yet another layer of partitions is added such that it overlaps the extension partitions. These partitions do not intersect $\bar\Omega$ and no interpolation is performed for them. They are referred to as zero partitions and the corresponding set is denoted $\{\Omega_i\}_{i=N_p+1}^{N_p+N_p^0}$. The associated $s_{f,\Omega_i}$ is set to be identically zero for $i = N_p+1,\dots,N_p+N_p^0$.

The weight function in a partition, as defined in (33), is zero at the boundary of the partition. Hence, as the local extensions in the first layer of partitions are blended with the zero values in the zero partitions, the global extension is forced to zero over the overlapping region. See Figure 5c, where the local extensions are suppressed to zero in the overlaps. Hence zero partitions should be placed such that $f^e$ has a controlled decay to zero and such that the sizes of the overlaps with extension partitions are about the same. Thus the global extension will in these parts have the same regularity as $w$, i.e. as the compactly supported RBF $\psi$. Observe that there is always a jump over $\partial\Omega$, since an analytical expression is used for points inside $\Omega$ and an approximation outside. The jump is not numerically discernible if the approximation is good enough. However, if the approximation is poor, then $f^e$ is an extension of a poor approximation of $f$, i.e. of another function. This occurs if, e.g., the number of RBF centres per partition is too small. Consequently, the jump over $\partial\Omega$ will be of such magnitude that $f^e$ behaves as a discontinuous function.

Once $f^e$ is obtained it is extended by zero to a box $B = [-L, L]^2$ that embeds the support of $f^e$.
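A one-dimensional caricature of this construction — $f$ kept inside $\Omega$, a Shepard-blended decay across the band of extension and zero partitions, and identically zero outside — can make the mechanism concrete. The interval, radii and bump function below are illustrative choices, not the paper's.

```python
import numpy as np

# 1D caricature: Omega = [-1, 0], one extension partition centred at the
# boundary point 0, one zero partition centred at 0.5. The local extension is
# taken to be exact (cos itself) so only the blending mechanism is shown.
psi = lambda r, R: np.where(r < R, (1 - r / R) ** 2 * (1 + 2 * r / R), 0.0)

f = lambda x: np.cos(x)
centres = np.array([0.0, 0.5])                    # extension, then zero partition
R = 0.4
s_local = [lambda x: np.cos(x),                   # "exact" local extension
           lambda x: np.zeros_like(x)]            # zero partition: identically 0

def f_e(x):
    w = np.stack([psi(np.abs(x - c), R) for c in centres])
    tot = w.sum(axis=0)
    blend = sum(wi * s(x) for wi, s in zip(w, s_local)) / np.where(tot > 0, tot, 1.0)
    inside = x <= 0.0
    band = (~inside) & (tot > 0.0)                # covered by some partition
    return np.where(inside, f(x), np.where(band, blend, 0.0))

x = np.linspace(-1.0, 1.0, 201)
fe = f_e(x)
assert np.allclose(fe[x <= 0], np.cos(x[x <= 0]))   # f_e = f on Omega
assert np.all(fe[x >= 0.95] == 0.0)                  # compact support
assert np.max(np.abs(np.diff(fe))) < 0.1             # controlled decay, no jump
```

The last assertion is the point of the zero partitions: without them the extension would be cut off abruptly at the edge of the extension partitions.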
The expression for a function extension obtained by PUX is

$$f^e(y) = \begin{cases} f(y), & y \in \Omega, \\[4pt] \displaystyle\sum_{i=1}^{N_p+N_p^0} w_i(y)\, s_{f,\Omega_i}(y), & y \in \displaystyle\bigcup_{i=1}^{N_p} y_{\Omega_i,E}, \\[4pt] 0, & \text{otherwise}. \end{cases} \qquad (38)$$

The sum also includes the zero partitions to emphasise that the weight functions in the extension partitions are affected by them, even though the local values in a zero partition are identically zero. The function $f^e$ evaluated on a uniform grid in $B$ can be used to solve (6)–(7) for the particular solution to the full problem.

5. Sources of numerical errors

Given a function $f$ defined on a bounded domain $\Omega$, there is no unique compactly supported extension to $\mathbb{R}^2$. To measure the quality of the function extension obtained by PUX for a set of parameters, we study the numerical solution to the full problem (4)–(5). Consequently, the errors directly associated with PUX cannot be isolated and analysed separately, but we can give an account of the sources of numerical errors. In this section we discuss the local interpolation error, the error originating from approximating local interpolants with least squares, errors dependent on the choice of the compactly supported RBF used to construct the weight function, errors from using an FFT solver for the free-space Poisson equation (6) with the truncated Green's function, and errors associated with the boundary integral method for solving the Laplace equation (8)–(9).

First we briefly discuss the error from solving the Laplace equation. Assume $\partial\Omega$ is sufficiently resolved with 16-point Gauss–Legendre panels. Then the error associated with the boundary integral method for solving the Laplace equation converges rapidly as the number of Gauss–Legendre panels increases. The resolution of $\partial\Omega$ is chosen a priori based on $g$; therefore it is important that the modified boundary conditions $g - u_P$ are not much harder to resolve than $g$.
If this is true, then the error associated with the boundary integral method can efficiently and without difficulty be controlled such that it does not dominate.

Now we turn our attention to the errors connected to function extension and to solving the free-space Poisson equation (6). We start with the local interpolation error in some partition $\Omega_i$; to initially simplify the analysis, assume $\{\lambda_j\}$ in (26) is obtained by collocation, i.e. not with least squares. If the RBFs $\{\varphi_j\}$ are Gaussians, then according to [28] the following estimate for the interpolation error holds:

$$\|f - s_{f,\Omega_i}\|_{L^\infty(\bar\Omega_i)} \le e^{C \log(h)/\sqrt{h}}\, \|f\|_{\mathcal{N}_\varphi}, \qquad (39)$$

where $C$ is a constant depending on the space dimension, the norm and $\varphi$, but not on $f$ or $h$. Here $h$ is the fill distance: the diameter of the largest ball that can be fitted between the RBF centres in a partition, and thus a measure of the density of the RBF centres. Further, $\|\cdot\|_{\mathcal{N}_\varphi}$ is the norm associated with the native space $\mathcal{N}_\varphi$. Although technically only true for $f \in \mathcal{N}_\varphi$, the estimate works well in practice for smooth functions, and especially for bandlimited functions. For further discussion see [29]. From (39) we conclude that the interpolation error in each partition for collocation converges spectrally as we increase the number of RBF centres within a partition. We note that the estimate (39) still holds when $s_{f,\Omega_i}$ is obtained with least squares, since the same RBFs are used and therefore the approximation space is the same as for collocation [26]. However, this will not be true for the extension.

To understand what happens with the interpolation error when the local interpolants are weighted together with a partition of unity method, we need the concept of a regular covering: assume that each $y \in \Omega$ belongs to only a finite number of $\Omega_i$, that each partition contains enough data points to allow a unique interpolant, and that each partition $\Omega_i$ satisfies an interior cone condition [30].
A covering $\{\Omega_i\}_{i=1}^{N_p}$ satisfying these requirements is called a regular covering. In our applications these conditions are easily met, and we will assume that our coverings are regular.

A remarkable property of partition of unity methods is that the global approximation order of $s_f$ inherits the local one of $s_{f,\Omega_i}$, under the assumption that the covering $\{\Omega_i\}_{i=1}^{N_p}$ is regular [30]. This property can be understood intuitively through the following example. Let $y \in \Omega_1 \cap \Omega_2$ and assume each local approximant is exact up to some tolerance $\epsilon_i$, i.e. $\|f - s_{f,\Omega_i}\|_{L^\infty(\bar\Omega_i)} \le \epsilon_i$ for $i = 1, 2$. Then

$$|f(y) - s_f(y)| \le \sum_{i=1}^{2} w_i(y)\, |f(y) - s_{f,\Omega_i}(y)| \le \underbrace{(w_1(y) + w_2(y))}_{=1}\, \max(\epsilon_1, \epsilon_2) = \max(\epsilon_1, \epsilon_2), \qquad (40)$$

and the potential error from low order weight functions is not noticeable, as they form a partition of unity and thus sum to one at every point belonging to a partition. Instead, the global interpolant inherits the local approximation order.

Now we consider the function extension and discuss its regularity and its influence on the error, still assuming that the local interpolants are obtained through collocation rather than approximated with least squares. The speed at which the extension goes to zero is controlled by the choice of RBF used as $\psi$ and by the size of the overlapping region. A smaller region means a more rapid decay, and thus a high grid resolution might be required to resolve the extension. Therefore it is preferable, from this point of view, to have as much overlap between extension partitions and zero partitions as possible, and to have a large partition radius. On the other hand, the extension may not be well behaved far from the boundary, and a greater partition radius requires more RBF centres to resolve $f$. The RBF–QR algorithm is not capable of removing the ill-conditioning, associated with small values of $\varepsilon$, for $A$ with more than about 400 RBF centres. Furthermore, the larger the partition radius $R_p$, the more locality we lose.

Increasing the number of RBF centres does not necessarily increase the accuracy; instead it can make the situation worse. Compare with polynomial interpolation, where a higher order polynomial basis means more oscillations. A similar effect is present for RBF interpolation.
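The bound (40) is easy to check numerically: for any weights that sum to one, the blended error cannot exceed the worst local error. The tolerances and random data below are arbitrary illustrative choices.

```python
import numpy as np

# Numerical check of the bound (40): with w1 + w2 = 1, the blended error at a
# point in the overlap is at most max(eps1, eps2), regardless of the weights.
rng = np.random.default_rng(1)
f_val = 1.0
eps1, eps2 = 1e-3, 1e-5
s1 = f_val + eps1 * (2 * rng.random(100) - 1)   # local approximant 1, |error| <= eps1
s2 = f_val + eps2 * (2 * rng.random(100) - 1)   # local approximant 2, |error| <= eps2
w1 = rng.random(100)                             # arbitrary weights in [0, 1]
w2 = 1.0 - w1                                    # ... summing to one
s = w1 * s1 + w2 * s2                            # the blended approximation
err = np.abs(s - f_val)
assert np.all(err <= max(eps1, eps2))            # global error <= worst local error
```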
A remedy is to make the partitions smaller instead of increasing the number of RBF centres. However, this requires a finer resolution of the uniform grid, as smaller partitions imply a shorter span over which the extension goes to zero.

Theoretically, things get more complicated when we consider the function extension obtained by least squares. With collocation the interpolant and $f$ would agree at all uniform data locations in $\Omega$, whereas for the least squares interpolant there can be a discrepancy at these points. Defining $f^e$ according to (38), where the original values of $f$ are used inside $\Omega$, there will hence be a discontinuity across $\partial\Omega$ of the size of the error in the least squares interpolant. However, this discontinuity is a technicality, since it can be made arbitrarily small by controlling (39). If no such measures are taken, the error for solving the full problem decays with second order as the uniform grid is refined. Since $A$ can be precomputed once and used for all partitions, we can set the number of RBF centres $M$ sufficiently large and make the partitions appropriately small in order not to suffer from a poor approximation of the interpolant due to least squares.

We now discuss the weight functions' influence on the accuracy of numerically solving (4)–(5). The intersection between a zero partition and an extension partition is the distance over which the weight functions suppress the extension to zero. This implies that a high grid resolution may be required not to resolve $f$ on $\Omega$, but to resolve the suppression to zero over the aforementioned intersection. Further, the extension will inherit the regularity of the compactly supported RBF used to create the weights $w$. Thus we may expect an asymptotic convergence order of $4 + \tilde k$ for an RBF of regularity $C^{k,\tilde k}$: 2 orders for solving the free-space Poisson equation plus $2 + \tilde k$ since the $\tilde k$:th derivative has bounded variation [31].
The greater the regularity of the Wu RBF, the harder it is to resolve, thus requiring a finer uniform grid. Therefore we do not consider infinitely smooth compactly supported RBFs as weight functions, such as the construction by Ying et al. [32]. In section 7 we show how to optimally choose the regularity $\tilde k$ of the compactly supported RBF, given a resolution of the uniform grid on $B$ and a partition radius $R_p$.

Finally, we briefly mention the errors associated with evaluating the solution to the free-space Poisson equation with a truncated kernel by FFTs, and with obtaining $u_P$ on $\partial\Omega$ with the non-uniform FFT. Replacing the Green's function $K$ with the truncated Green's function $\tilde K$ in (19) involves no approximation. The main source of error is resolving the Fourier transform of $\tilde K$ and $f^e$ with FFTs. Given a smooth right hand side $f^e$, the error for solving (6) decays spectrally as the uniform grid is refined, assuming method parameters are set appropriately [19]. The non-uniform FFT takes a given tolerance as an input argument, and assuming sufficiently many Fourier coefficients are available, no error greater than the set tolerance will be introduced.
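The FFT step can be illustrated with a minimal spectral Poisson solve. Note this sketch is periodic, whereas the method of [19] used in the paper employs a truncated free-space Green's function; the grid size and test function are illustrative choices.

```python
import numpy as np

# Minimal FFT-based Poisson solve, to illustrate the spectral step of the solver.
# NOTE: this sketch is PERIODIC; the paper instead solves the free-space problem
# with a truncated Green's function [19], which removes the periodicity artefact.

N, L = 64, 1.0
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)          # Fourier wavenumbers
kx, ky = np.meshgrid(k, k, indexing="ij")
k2 = kx**2 + ky**2

x = np.linspace(0.0, L, N, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
u_exact = np.sin(2 * np.pi * X) * np.cos(4 * np.pi * Y)
f = -20.0 * np.pi**2 * u_exact                      # f = Laplacian of u_exact

f_hat = np.fft.fft2(f)
k2[0, 0] = 1.0                                      # avoid 0/0 for the mean mode
u_hat = -f_hat / k2                                 # Delta u = f  =>  u_hat = -f_hat/k^2
u_hat[0, 0] = 0.0                                   # pin the (arbitrary) mean to zero
u = np.fft.ifft2(u_hat).real

err = np.max(np.abs(u - u_exact))                   # spectral accuracy for smooth f
assert err < 1e-10
```

For a bandlimited right hand side the error is at the level of rounding, which is the behaviour the smooth, compactly supported $f^e$ is designed to exploit.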
6. Summary and implementation of entire solution procedure
Guidelines for picking appropriate parameter values are given in the next section. In this section the computational procedure for solving the full problem (4)–(5) is summarised, alongside implementation details:
• Discretise $\partial\Omega$ into $N_{\partial\Omega}$ Gauss–Legendre panels, each with 16 Gauss–Legendre points. Set $R_p$ as the radius for all extension partitions and let them overlap by slightly more than a radius. This yields a number $N_p$ of extension partitions $\{\Omega_i\}_{i=1}^{N_p}$, with corresponding partition centres $\{p_i\}_{i=1}^{N_p}$, distributed uniformly with respect to arc length along $\partial\Omega$.

• Let $\Omega \subset B = [-L, L]^2$, which must contain all $y \in \cup_i \bar\Omega_i$, and let $E$ be the complement of $\bar\Omega$ relative to $B$. At the locations $E \setminus \cup_i \bar\Omega_i$ the extended function will be zero and these points need not be included, see (38) or Figure 7. For some $N_u$, construct a uniform grid on $B$ with resolution $2L/N_u$ to be used by the FFT solver. Sort the $N_u^2$ uniform grid points as inside or outside $\bar\Omega$. To identify a point $y$ as belonging to $\Omega$ or $E$, one can evaluate (18) for it.

• Relocate each partition centre $p_i$ to the closest uniform grid point inside $\Omega$, to allow for precomputation of $A$ and to avoid reducing the regularity of $f^e$ by evaluating the RBFs at their origin.

• Pick a compactly supported RBF for $\psi$ to construct the weight functions $\{w_i\}_{i=1}^{N_p+N_p^0}$ with Shepard's method (33). We use Wu functions, see Table 1, and choose among them based on $\tilde k$, which denotes the compactly supported RBF's regularity with a neighbourhood around the origin excluded. This value sets the regularity of the global extension $f^e$.

• The shape parameter $\varepsilon$ for the Gaussians used as the RBFs for interpolation (26) needs to be set. The number of RBF centres $M$, i.e. Gaussians, inside each partition, and their distribution, must be specified too. We use a Vogel node distribution (30), and for a given mesh size $N_u$ and partition radius $R_p$ the number $M$ must be set such that (29) is an overdetermined system. If the partition radius $R_p$ is appropriately small in relation to the curvature of $\partial\Omega$, each partition will contain roughly the same number of uniform data locations where $f$ is known.
This not only gives the same interpolation quality on each patch, but also ensures that for each patch the least squares problem (29) is sufficiently overdetermined. In the next section we give a guide to choosing all the aforementioned parameters.

• To precompute the matrix $A$, pick some point on the uniform grid. Within distance $R_p$ of this grid point, find the uniform grid points $\{\tilde x_i\}_{i=1}^{N}$ and distribute a set of Vogel nodes $\{x_j\}_{j=1}^{M}$, see Figure 3. For these points we compute $A = \tilde\Phi \Phi^{-1}$ with RBF–Direct, where $\tilde\Phi = \{\varphi_j(\tilde x_i)\}_{i,j=1}^{N,M}$ and $\Phi = \{\varphi_j(x_i)\}_{i,j=1}^{M,M}$, see the paragraph above expression (29). If the condition number of $\Phi$ is not of moderate size, then $A$ is recomputed with RBF–QR. An open-source implementation in Matlab for computing $A$ with RBF–QR can be found at [33]. This is done only once, and $A$ is reused for all extension partitions.

• For each partition $\Omega_i$ the precomputed matrix $A$ needs to be separated into $A_\Omega$ and $A_E$, as in (36). The patch-specific separation depends on which uniform points $y_{\Omega_i}$ belong to $E$ or $\Omega$. Standard solvers are used to solve the least squares systems, see the paragraph below expression (36), in order to evaluate the local extension $f^e_i$ by (37).

• It is also possible to precompute the local component $\psi$ used for constructing the weight functions (33). As for precomputing $A$, simply evaluate $\psi$ for all uniform points within distance $R_p$ of some partition centre. These values can be used for all partitions to construct the partition of unity weights.

• The $N_p^0$ zero partitions must at least overlap the boundary of the union of $\Omega$ and the extension partitions in order to have a controlled decay of $f^e$ to zero. One simple way of achieving this is to distribute $N_p^0 = N_p$ points uniformly, with respect to arc length, on the boundary, and place the zero partitions in the normal direction at a distance of $R_p$ from the boundary $\partial\Omega$.
Initially their radius is set to $R_p$, but as they should not intersect $\bar\Omega$, rescaling may be required. Thus the zero partitions can have varying radii.

• Combine the local extensions $\{f^e_i\}$ with the weight functions to obtain the global extension $f^e$ of $f$, given by (38), for each point of the uniform grid in $B$.

• Since $f^e$ has compact support in a box $B$ embedding $\Omega$, we can apply the method in [19], as explained in subsection 2.1. Hence we use the precomputed truncated spectral representation of the Green's function (23) to evaluate the solution to (6)–(7) with an FFT, upsampled by a factor of 2. This yields the particular solution $u_P$ on a uniform grid in $B$, and also the Fourier coefficients of $u_P$. Evaluate $u_P$ at the Gauss–Legendre points given by the Gauss–Legendre panels along $\partial\Omega$ with a non-uniform FFT. The non-uniform FFT we apply is described in [21] and we use their open source library. As input it takes an error tolerance, which we set to $10^{-}$ . These values are used to obtain the modified boundary conditions $g - u_P$ for (8)–(9).

• With $u_P$ known on the boundary, we can solve (14) numerically with GMRES for $\mu$ at the Gauss–Legendre points, as described in subsection 2.1. The required modifications when $\Omega$ is a multiply connected domain are given in [17]. Obtaining $u_H$ in $\Omega$ is just a matter of post-processing, where the special quadrature is applied to points in $\Omega$ close to $\partial\Omega$. The final solution to (4)–(5) on a uniform grid in $\Omega$ is $u = u_H + u_P$. Note that we are not restricted to evaluating $u$ at these locations: since the Fourier coefficients $\hat u_P$ and the density $\mu$ are known, we can evaluate $u$ at any point in $\Omega$.
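Several of the steps above — precomputing $A = \tilde\Phi\Phi^{-1}$, splitting its rows into $A_\Omega$ and $A_E$, and solving the overdetermined least squares system (36) before evaluating (37) — can be sketched together in Python. The geometry, shape parameter and node counts are illustrative assumptions; a coarse grid of centres stands in for Vogel nodes and RBF–Direct stands in for RBF–QR.

```python
import numpy as np

# Sketch of the local least-squares extension, Eqs. (36)-(37). A coarse grid of
# Gaussian RBF centres stands in for Vogel nodes; RBF-Direct for RBF-QR.
eps = 2.0
gauss = lambda d: np.exp(-((eps * d) ** 2))
dist = lambda a, b: np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)

# RBF centres: a 5x5 grid over the patch (M = 25, illustrative).
g = np.linspace(-0.8, 0.8, 5)
xc = np.stack(np.meshgrid(g, g), -1).reshape(-1, 2)

# Uniform data points in the unit disc; "Omega" is the inner disc r < 0.8.
u = np.linspace(-1, 1, 20)
xu = np.stack(np.meshgrid(u, u), -1).reshape(-1, 2)
xu = xu[np.linalg.norm(xu, axis=1) <= 1.0]
inside = np.linalg.norm(xu, axis=1) < 0.8

Phi = gauss(dist(xc, xc))                                      # M x M
A = gauss(dist(xu, xc)) @ np.linalg.inv(Phi)                   # cf. A in (29)

f = lambda x: np.sin(2 * x[:, 0]) * np.cos(2 * x[:, 1])
F, *_ = np.linalg.lstsq(A[inside], f(xu[inside]), rcond=None)  # A_Omega F = F_Omega
f_ext = A[~inside] @ F                                         # Eq. (37)

# Diagnostics in the spirit of section 7: small residual, extension in range of f.
res = np.linalg.norm(A[inside] @ F - f(xu[inside])) / np.linalg.norm(f(xu[inside]))
assert res < 0.1
assert np.max(np.abs(f_ext)) < 5 * np.max(np.abs(f(xu[inside])))
```

The two assertions mirror the diagnostics suggested in section 7: a large least squares residual, or an extension far outside the range of $f$, indicates that $\varepsilon$, $R_p$ or $M$ should be changed.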
7. Numerical Results
This section is organised as follows. We start with a general discussion of how to set the various parameters. Then a strategy for finding appropriate values is presented, based on experiments in a simple numerical setting. Thereafter this strategy is shown to work also for choosing parameters in more advanced settings. Finally we present some timings to give an idea of the complexity of function extension in relation to solving the free-space Poisson equation.

By the error we refer to the relative discrete $\ell^2$ error for solving the full problem (4)–(5), measured as

$$\frac{\|u_{exact} - u_{numerical}\|_{\ell^2}}{\|u_{exact}\|_{\ell^2}}, \qquad (41)$$

where

$$\|u\|_{\ell^2} = \sqrt{\sum_{i=1}^{N_{eval}} |u_i|^2 / N_{eval}} \qquad (42)$$

for a vector of length $N_{eval}$. We measure the error on an evaluation grid, which is a problem dependent uniform grid with resolution $2L/N_{eval}$. Each domain boundary is parametrised in the complex plane as

$$z(\theta) = e^{in\theta}\Big(c_0 + \sum_j \big(c_j \cos(j\theta) + d_j \sin(j\theta)\big)\Big) + a + ib, \qquad (43)$$

where $\theta \in [0, 2\pi)$ and $n$ gives the orientation. The non-zero coefficients are stated for each complex multiply connected domain.

To obtain a numerical solution to the full problem (4)–(5) the following parameters need to be set:

• $\varepsilon$: the shape parameter for the Gaussians used as the basis for interpolation.

• $R_p$: the partition radius.

• $N_u$: the number of uniform grid points in one spatial direction.

• $M$: the number of Vogel nodes (30), i.e. the number of RBF centres per extension partition.

• $N_{\partial\Omega}$: the number of Gauss–Legendre panels. Each panel has 16 Gauss–Legendre points.

• $L$: the length of the side of the box shaped computational domain $B$.

• $\tilde k$: the regularity of the compactly supported RBF used to compute the weights (33).

The amount of overlap between partitions also has to be set. We always let them overlap by slightly more than a radius $R_p$.
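The measure (41)–(42) reduces to a ratio of vector 2-norms, since the $1/N_{eval}$ factor cancels between numerator and denominator; a minimal sketch:

```python
import numpy as np

# Relative discrete l2 error, Eqs. (41)-(42). The 1/N_eval factor in (42)
# appears in both numerator and denominator, so it cancels in the ratio.
def rel_l2(u_exact, u_numerical):
    return np.linalg.norm(u_exact - u_numerical) / np.linalg.norm(u_exact)

u_ex = np.array([1.0, 2.0, 3.0])
u_num = np.array([1.0, 2.0, 3.003])
assert abs(rel_l2(u_ex, u_num) - 0.003 / np.sqrt(14.0)) < 1e-12
```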
A large overlap gives the extension $f^e$ a more uniform band around $\Omega$ over which it decays to zero, which grants more predictable behaviour. Thus this parameter does not vary between different numerical settings.

In general, the solution is not sensitive to the shape parameter $\varepsilon$, as long as $\varepsilon$ is within an appropriate range of values. We use a single fixed value of $\varepsilon$, which has also worked well for different domains and functions than those presented in this paper. Observe that the right hand side (46) for example 1 is an element of the native space $\mathcal{N}_\varphi$ for $\varepsilon = 2$, which is not the case for the right hand sides (49) and (50), corresponding to examples 2 and 3.

Another problem dependent parameter is the partition radius $R_p$. It is strongly related to the grid resolution $N_u$ and the number of RBF centres $M$, i.e. the number of Vogel nodes (30). Recall that the RBF–QR algorithm has limited capacity for $M$, at around 400. Thus the greater the variation of $f$, the smaller the partitions required to obtain good approximations of the local interpolants. The choice of $M$ depends on the given $f$, and will thus be different for each numerical experiment.

There is a simple way to find an appropriate value for $R_p$ and to obtain a range of good choices for $\varepsilon$: measure the residual of the least squares problem (29) on some partition. A large residual implies that either $\varepsilon$ or $R_p$ needs to be changed. Another indication that smaller partitions are needed is if $\max_{y \in y_{\Omega_i,E}} |s_{f,\Omega_i}(y)|$ from (37) is significantly larger than $\max_{y \in y_{\Omega_i,\Omega}} |f(y)|$ for any partition $\Omega_i$; the extension should take on values in the same range as $f$.

By construction $f^e$ decays to zero over a distance of approximately $R_p$, meaning smaller partitions may require a larger value of $N_u$ to resolve $f^e$. Thus one may wish to set $R_p$ as large as possible, in order to avoid constructing an $f^e$ much harder to resolve than $f$. However, in most cases $N_u$ is chosen to resolve $f^e$ close to the boundary of its support, not $f$ on $\Omega$. For difficult right hand sides, such as (49) from example 2 and (50) from example 3, the scales are about the same. For simpler functions, resolving $f^e$ is clearly harder; see for example (46).

Previously we have said that (29) needs to be overdetermined, but no quantitative measure has been given of how much the solution is improved if more data points $y_{\Omega_i}$, relative to $M$, are used. Therefore we introduce the measure

$$\beta_i = \frac{\text{number of data points in partition } \Omega_i}{M}, \qquad (44)$$

and $\beta_{\min} = \min_i \beta_i$.
It is a useful tool for analysing the relationship, in terms of error, between the number of RBF centres $M$ and the available data in an extension partition, which is based on $N_u$.

The larger $\tilde k$ is, the higher the regularity of $f^e$, and thus the faster the decay of its Fourier coefficients. But we also need to take into account that a larger $\tilde k$ implies that a finer grid is required to resolve the compactly supported RBF. In other words, $\tilde k$ is strongly related to $R_p$ and $N_u$, just as $M$ is. In the following subsection we devise a scheme, via a heuristic approach, to choose $\tilde k$ and $M$. It is based on the number of uniform grid points per partition radius, which we denote $P$ and which is given by

$$P = \frac{N_u R_p}{2L}. \qquad (45)$$

The length $L$ of the sides of the box $B$ is set such that all $y \in \cup_i \bar\Omega_i$ are in $B$. The number of Gauss–Legendre panels $N_{\partial\Omega}$ is chosen large enough to resolve $g$, which in most cases is sufficient to resolve $g - u_P$ as well. This is verified numerically in subsections 7.2 and 7.3.

7.1. Example 1: Parameter selection

As the right hand side for (4)–(5), consider the smooth function

$$f(x, y) = -\sin(2\pi x)\sin(2\pi y), \qquad (46)$$

defined in a disc. In this subsection we investigate different errors and discuss how to choose parameters. Following the approach given above we fix $L$ and the partition radius $R_p$; $N_u$ varies from 40 to 500, and for each value the parameters $M$ and $\tilde k$ need to be set.

Figure 7: Regular covering of the disc $\Omega$ used in subsection 7.1. Red circles are extension partitions, black are zero partitions. Red and black stars are centres for the extension partitions and zero partitions, respectively.

We begin by investigating the influence of the regularity of the weight function on the error convergence. To isolate this error, we want to remove the error from the local extension. Hence, instead of creating an extension of $f$ as in (37), we pick a smooth $f$ defined in all of $B$ and set $s_{f,\Omega_i} = f$ for each interpolation patch. Only the choice of weight function, i.e.
$\tilde k$, and the resolution of the uniform grid will change; the rest is fixed.
Figure 8: Error in the numerical solution of the Poisson equation with right hand side given by (46), but with local extensions given by an analytic expression. Left: relative $\ell^2$ error as a function of $N_u$ in loglog scale for the various compactly supported RBFs given in Table 1. Right: relative $\ell^2$ error as a function of $N_u$ in loglog scale with $\tilde k$ set by (47).
Figure 9: Error in the numerical solution of the Poisson equation with $f$ given by (46) and $\tilde k$ set by (47); effect of parameter choices for computing the local extension on each patch. Left: relative $\ell^2$ error as a function of $\beta_{\min}$, defined below (44), in semilog scale. Right: relative $\ell^2$ error as a function of the number of Vogel nodes $M$, for different values of $P$ (45).
20n the left plot of Figure 8 the convergence of the relative (cid:96) error is plotted in loglog–scale as a function of N u for various compactly supported RBF used to construct w . Recall that an RBF in C k , ˜ k is of regularity k at origin and˜ k at the boundary of its support. Initially the error for C , converges super algebraically and is followed by a tailwith algebraic convergence, as expected. The order of convergence for the algebraic tail is inherited by the the RBFsregularity at the edge of its support. However, this is only true if we never evaluate any RBF in a neighbourhoodaround its origin. Compare the curves corresponding to Wu C , and Wu C , in the left plot in Figure 8. The formerhas an algebraic tail with slope −
5, while the latter has −
6. For Wu C , the error is dominated by the algebraic tail oforder 5 for the entire spectrum of N u . This indicates that the error associated with resolving the weights w is largest.We observe that the error for Wu function C , converges super algebraically down to a relative error of 10 − , butrequires a fine grid resolution to do so. First at approximately N u =
260 is Wu function C , the better choice; thiscorresponds to P ≈
35 . These results suggest that a higher rate of convergence can be obtained by picking an optimalRBF to construct the weight function for each N u , compared to using the same for all N u . Heuristically we have foundthat with ˜ k = min (cid:16)(cid:106) √ P − . (cid:107) , (cid:17) , (47)where (cid:98) x (cid:99) gives the greatest integer less than or equal to x , we essentially obtain algebraic convergence correspondingto O ( N − u ). This is evident from the right plot in Figure 8. Various numerical simulations confirm that the optimalchoice for a given N u is roughly the same even for di ff erent length scales and other functions f . This will be shown insubsections 7.2 and 7.3. Unless stated otherwise, in subsequent numerical experiments ˜ k is given by (47).From now on we the create local extension by (38), meaning M needs to be set. The left plot in Figure 9 showsthe error (41) as function of β min . Typically β min = ffi cient, and little is gained by increasing the number of datalocations further. Given a fine uniform grid one can thus downsample for the interpolation problem. Solving a leastsquares problem is executed e ffi ciently in M atlab , for reference: computing A \ Y for A ∈ R × , Y ∈ R withrandomised elements takes approximately 0 . . (cid:96) error is plotted as a function of M for di ff erent P . Clearly the error decreases exponen-tially initially, but then level out around M ≈ P . As safety measure we use 4 P , but note that for small N u the resulting β min may be less than three. In such case, we adjust M accordingly: the number of uniform grid points inside Ω in agiven partition can be approximated by π P /
2. To obtain a least–squares system overdetermined by at least a factorof two, M should be less than half of that, e.g. (0 . / π P /
2. Thus the scheme we apply is M = min (0 . π P / , P ) , (48)although a majority of numerical settings yield M = P , since 0 . π P / = P for P ≈ .
4. Note that neither (47) nor(48) are optimal, but meant as a guide to select the parameters.21 − − − − − − N ∂ Ω R e l a t i v e ‘ – e rr o r Figure 10: Relative (cid:96) error as a function of the number of Gauss–Legendre panels N ∂ Ω in semilog-scale. Results are for solving the Poissonequation with f given by (46), with local extensions given by analytic expression. In Figure 10 the relative (cid:96) error is plotted against the number of Gauss–Legendre panels. Thus we let N ∂ Ω varyfor N u =
400 and other parameters chosen as before. Since the amount of panels sets the discrete representation forthe boundary, which is used to sort points as in Ω or E , the evaluation grid is pruned. Only points considered in Ω for the all the investigated resolutions are kept. We see the expected rapid convergence as the number of panels isincreased. : The Poisson equation on a multiply connected domain We replicate the setting for the most di ffi cult numerical experiment performed in [4], but give the details here forconvenience. For this experiment we study only the error (41) as a function of N u . We consider the right hand side f ( x , y ) = −
200 sin (10( x + y )) + + x − e − x (49)for the Poisson equation (4)–(5) on a complex multiply connected domain: for the outer boundary the non–zero coef-ficients for (43) are c = . d = c = c = c = . c = . R = n =
1. For the inner boundary we set c = . c = d = c = c = . R = n = − y –axis and is visualised in Figure 11. The cov-ering consists of 38 non–zero partitions on the outer boundary and 9 partitions on the inner boundary. Di ff erent radiusfor the extension partitions along the outer boundary and inner boundary can be used, which requires the precomputa-tion two di ff erent matrices A . For simplicity we let all extension matrices have the same radius, therefore they overlapmore along the boundary of the cavity. The regular covering of Ω is seen in Figure 11.For the numerical setting we let L = .
4, use 64 and 44 Gauss–Legendre panels for the outer boundary and forthe inner boundary, respectively. The radius for the extension partitions R p is 0 . N u ranging from 10 to 10 . We apply (47) and (48) to pick ˜ k and M . An extension f e of (49) can beseen in Figure 11, corresponding to N u = Ω and almost looks discontinuous. The extension of f into the cavity seems to mimic f remarkably well. However, afew remarks concerning the extension f e in the cavity of Ω are in order: It is not obvious that no zero partitions arerequired in the cavity to obtain a well–behaved extension. Recall that far away from the boundary the extension can22 igure 11: Left: Regular covering of Ω from subsection 7.2. Red circles are extension partitions, black are zero–partitions. Red and purple blackare centres for the extension partitions and zero partitions, respectively. Right: Extension f e by PUX of f , given by (49). grow dramatically. This is usually an indication that smaller partitions are needed. Here we ended up with a settingwhere every point in the cavity belongs to an extension partition and the extension is well–behaved. Thus there is noneed for zero partitions, whereas they will indeed be needed in example 3.For the resolution N u =
700, the numerical solution u and the corresponding pointwise relative error are plotted in Figure 12. In Figure 13 a convergence plot as a function of N_u is shown. As for the simpler setting we obtain an order of convergence corresponding to O(N_u^{−10}). This is due to the shift to a Wu function of higher regularity at suitable points as N_u increases. Thus the estimate (47) appears to be useful for more difficult numerical settings as well. For comparison, Askham et al. [4] obtained convergence of order 3 on an adaptive grid; in their embedded-boundary setting a C^0 extension is constructed by solving the Laplace equation with f as boundary condition, as briefly explained in the introduction.

In Figure 13 the boundary condition g and the modified boundary condition g − u_P are plotted, where the latter is passed as input to the integral equation solver. We observed in several simulations that g − u_P is not harder to resolve than g, meaning one can choose the discretisation of ∂Ω a priori, based on the originally stated boundary condition g.

Figure 12: Left: Numerical solution u to the Poisson equation with f from (49) for N_u = 700. Right: Pointwise relative error for N_u = 700.
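The observed convergence order can be checked numerically from pairs of grid resolutions and errors. The sketch below (not part of the paper's code) fits the slope of the log-error against log N_u, which is how a plot like Figure 13 translates into an order estimate:

```python
import numpy as np

def observed_order(N, err):
    """Fit err ~ C * N**(-p) in a least-squares sense and return p,
    i.e. minus the slope of log(err) versus log(N)."""
    slope, _ = np.polyfit(np.log(np.asarray(N, dtype=float)),
                          np.log(np.asarray(err, dtype=float)), 1)
    return -slope

# Sanity check on synthetic tenth-order data: err = 5 * N**(-10).
N = [100, 200, 400, 800]
err = [5.0 * n**-10 for n in N]
print(observed_order(N, err))  # close to 10
```

The least-squares fit over all resolutions is more robust than the two-point formula p = log(e_1/e_2)/log(N_2/N_1) when the data carry noise near round-off level.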
Figure 13: Results for solving the Poisson equation with (49). Left: Relative error as a function of the uniform grid resolution N_u, with k̃ and M set by (47) and (48). Right: The boundary values g and the modified boundary values g − u_P on the exterior and interior boundary segments.

7.3. Example 3: The Poisson equation on a domain with a larger cavity and a more oscillatory right-hand side

The third test features the right-hand side

f(x, y) = −∑_i i e^{−√i} (cos(2^i x) + cos(2^i y))    (50)

for the Poisson equation (4)–(5). It oscillates with high frequency around the boundaries and is thus overall harder to resolve than (49). The domain is again multiply connected; the outer boundary is given by (43) with a set of non-zero coefficients c and d, a radius R and n = 1, and for the inner boundary a corresponding set of coefficients, radius and n are prescribed. For the numerical setting we let L = 0.54 and use 84 Gauss–Legendre panels and 82 partitions for the outer boundary, and 40 Gauss–Legendre panels and 23 partitions for the inner boundary. Once again N_u ranges from 10^2 to 10^3, k̃ and M are set by (47) and (48), and all extension partitions have the same radius, R_p = 0.
12. Consequently the partitions along the inner boundary do not cover the cavity, and thus f_e must be suppressed to zero inside it. See Figure 14 for f_e and the covering, which should be compared with the covering from the previous example in Figure 11.

The numerical solution and the pointwise relative error are plotted in Figure 15. The right-hand side of Figure 16 features a plot of the error (41) as a function of N_u. We observe the same trend for the convergence as in the two previous examples, but require a finer grid in this setting to reach the same error levels. This is due to (50) being harder to resolve: smaller partitions are required to obtain good approximations of the interpolants, so f_e has a shorter distance over which it goes to zero, and hence a finer resolution is required to resolve it.

Figure 14: Left: Regular covering of Ω from subsection 7.3. Red circles are extension partitions, black are zero-partitions. Right: Extension f_e by PUX of f, given by (50).

Figure 15: Left: Numerical solution u to the Poisson equation with f from (50). Right: Pointwise relative error.
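The role of the zero partitions can be illustrated with a one-dimensional cartoon of the Shepard-type blending f_e = ∑_i w_i f_e^i / ∑_i w_i. The weight below is Wendland's φ_{3,1} bump, used only as a stand-in for the Wu functions of the paper, and the setup (single radius, hand-picked patches) is illustrative rather than the paper's implementation:

```python
import numpy as np

def bump(r):
    """Compactly supported C^2 weight (Wendland's phi_{3,1}),
    standing in for the Wu functions used in the paper."""
    w = np.zeros_like(r)
    m = r < 1.0
    w[m] = (1.0 - r[m])**4 * (4.0 * r[m] + 1.0)
    return w

def blend(x, centers, radius, local_funcs):
    """Shepard partition of unity: f_e(x) = sum_i w_i f_i / sum_i w_i.
    A 'zero partition' is simply a patch whose local function is 0,
    which pulls the blended extension down to zero on its support."""
    num = np.zeros_like(x)
    den = np.zeros_like(x)
    for c, f in zip(centers, local_funcs):
        w = bump(np.abs(x - c) / radius)
        num += w * f(x)
        den += w
    out = np.zeros_like(x)
    covered = den > 0
    out[covered] = num[covered] / den[covered]
    return out

x = np.array([0.0, 0.5, 1.0])
fe = blend(x, centers=[0.0, 1.0], radius=1.0,
           local_funcs=[lambda x: np.ones_like(x),    # extension patch, local value 1
                        lambda x: np.zeros_like(x)])  # zero partition
print(fe)  # [1.0, 0.5, 0.0]: tapers smoothly into the zero partition
```

Because the weights have compact support, points covered only by zero partitions receive exactly zero, which is what suppresses f_e in an uncovered cavity.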
Figure 16: Results for solving the Poisson equation with right-hand side given by (50). Left: Relative error as a function of the uniform grid resolution N_u, with k̃ and M set by (47) and (48). Right: The boundary values g and modified boundary values g − u_P on the exterior and interior boundary segments.

7.4. Performance and efficiency

To give an idea of the computational cost of function extension by PUX, some timings are provided in Table 2, where each time is the mean of ten runs for the numerical setting presented in subsection 7.1. Thus the number of partitions is constant, but the grid is refined, with k̃ and M set accordingly. For reference we have included solving the free-space Poisson equation with the method presented in subsection 2.2. We timed:

• Building A. This is done once and the same A is reused for all extension partitions. Note that RBF-QR is used whether needed or not.
• The construction of the local extensions {f_e^i}, i = 1, …, N_p. This includes: separating A into A_Ω and A_E as in (36), solving the least-squares problem A_Ω F = F̃_Ω, and evaluating A_E F to obtain F̃_E in (37).
• The numerical evaluation of f_e as in (38). For the extension partitions the precomputed values for w are reused. For the zero-partitions with a radius other than R_p, new evaluations are required.
• Solving for u_P.
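The timing protocol, the mean of ten runs per task, can be reproduced with a simple helper; this is a generic sketch, not the authors' harness:

```python
import time

def mean_runtime(task, runs=10):
    """Mean wall-clock time of `task` over `runs` calls,
    mirroring the mean-of-ten-runs measurements of Table 2."""
    total = 0.0
    for _ in range(runs):
        t0 = time.perf_counter()
        task()
        total += time.perf_counter() - t0
    return total / runs

# Example: time a cheap stand-in task.
t = mean_runtime(lambda: sum(i * i for i in range(10_000)))
print(f"{t:.2e} s")
```

`time.perf_counter` is a monotonic, high-resolution clock, which makes it the appropriate choice for short per-step timings of this kind.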
All required steps are included in the timings, even the precomputation of K̃, see (22). When excluding this precomputation the time is about one tenth of the tabulated result. When timing the processes above we excluded: identifying points on the uniform grid within R_p of a partition centre, identifying points as in Ω or E, and precomputing the weights w.

From Table 2 we read that being able to compute A once and reuse it is vital for the efficiency of PUX. Solving the least-squares problem A_Ω F = F̃_Ω, see equation (36), on each partition is the most time-consuming part of constructing {f_e^i}. This is done with a QR decomposition of A_Ω and backward substitution, thus scaling as 2 N_{u,i} M^2 − (2/3) M^3 + N_{u,i} M, where N_{u,i} is the number of data points in partition Ω_i. However, for N_u less than 400 the local systems are not large enough for this scaling to be dominant. With downsampling to reduce β_min, i.e. N_{u,i}, the cost of constructing {f_e^i} is comparable to the cost of solving for u_P using an efficient FFT-based solver. Observe that in our implementation {f_e^i} and the weights are not computed in parallel, but doing so is trivial. The cost of the RBF-QR algorithm increases with ε and the spacing between centres, both of which are comparatively large in this example. Thus these timings represent a "worst case" scenario.

Table 2: Timings in [s] of the steps of PUX and of solving the free-space Poisson equation for example 7.1, for three grid resolutions N_u. Measurements are based on the mean of ten runs. The tasks are: building A, constructing {f_e^i}, evaluating f_e, and solving for u_P; the final rows give the corresponding times after downsampling to a smaller β_min.

8. Conclusions

In this paper, we have introduced the novel method Partition of Unity Extension, or PUX, for numerically extending a function f outside of the complex multiply connected domain on which it is given. The main strength of the PUX method is that global regularity is obtained by solving local problems. This is achieved by blending local extensions on circular patches with a partition of unity function. Its regularity can be chosen and determines the global regularity. Moreover, by introducing what we call zero partitions when defining this function, a compactly supported function extension is obtained.

The performance of PUX has been thoroughly investigated by solving the Poisson equation on multiply connected complex domains. The particular solution comes from solving the free-space Poisson equation for the extended f given by PUX. Thereafter the Laplace equation is solved on the given domain using a boundary integral method, with boundary conditions modified according to the particular solution. The final solution is the sum of the two. We have demonstrated how the various parameters are related and how to set them, thereby significantly reducing the parameter space. By using these guidelines, the error in the solution to the Poisson equation converges with an order of about O(N_u^{−10}), where N_u × N_u is the total number of uniform grid points. This shows that the method described in the paper, without any additional numerical treatment, can provide precision down to round-off.

The PUX method is simple to implement, with the exception of the RBF-QR method used to compute the matrix A as defined in section 4. For this, an open-source implementation is available online [33]. The RBF-QR algorithm is computationally costly as compared to the least squares solve that follows.
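The per-patch solve-then-evaluate step, A_Ω F = F̃_Ω followed by F̃_E = A_E F, can be sketched with a plain Gaussian RBF basis. Note that the paper forms A with RBF-QR, which remains stable for small shape parameters; the direct Gaussian basis and all names below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def gauss_matrix(pts, centers, eps):
    """A[j, k] = exp(-eps^2 * |pts_j - centers_k|^2): plain Gaussian RBFs."""
    d = pts[:, None, :] - centers[None, :, :]
    return np.exp(-eps**2 * np.sum(d * d, axis=2))

def local_extension(x_omega, f_omega, x_ext, centers, eps=3.0):
    """One patch of a PUX-style extension: least-squares fit to the
    interior data (A_Omega F = F_Omega), then evaluation of the same
    RBF expansion at the exterior points (A_E F)."""
    A_omega = gauss_matrix(x_omega, centers, eps)
    F, *_ = np.linalg.lstsq(A_omega, f_omega, rcond=None)
    return gauss_matrix(x_ext, centers, eps) @ F

# A 3x3 patch of data for f(x, y) = x + y, extended a short step outside.
g = np.linspace(0.0, 1.0, 3)
x_in = np.array([[x, y] for x in g for y in g])
f_in = x_in[:, 0] + x_in[:, 1]
x_out = np.array([[1.2, 0.5]])
print(local_extension(x_in, f_in, x_out, centers=x_in))
```

For moderate eps the local system is well conditioned and the fit reproduces the data at the interior nodes; it is precisely in the small-eps (flat) regime that the direct basis breaks down and RBF-QR is needed.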
By centring all circular patches on uniform grid points, however, the matrix A needs to be computed only once, which yields a great reduction in the total computational cost. The local least-squares problems on each patch can be solved accurately with standard methods and are typically quite small, with 100–400 unknowns.

The PUX algorithm in this paper has been designed for the case where the data of f is defined on a uniform grid. Generally a finer grid is required to resolve the extension as it goes to zero than to resolve f on Ω. Since the grid is uniform, the resolution is often set by the extension. It would be possible to define the PUX method also for an adaptive grid. The implementation can be changed to allow for patches of different sizes, since this is no restriction for the method. The local extension with radial basis functions, as well as the partition of unity blending, extends naturally to three dimensions. Circular patches become spherical patches, but nothing changes conceptually. A paper describing the implementation of PUX in three dimensions is forthcoming.
9. Acknowledgements
This work has been supported by the Swedish Research Council under Grant No. 2015-04998 and by the Göran Gustafsson Foundation for Research in Natural Sciences and Medicine. This support is gratefully acknowledged.
References

[1] M. C. A. Kropinski, B. D. Quaife, Fast integral equation methods for the modified Helmholtz equation, Journal of Computational Physics 230 (2011) 425–434.
[2] M. C. A. Kropinski, B. D. Quaife, Fast integral equation methods for Rothe's method applied to the isotropic heat equation, Computers and Mathematics with Applications 61 (2011) 2436–2446.
[3] L. Greengard, M. C. Kropinski, An integral equation approach to the incompressible Navier–Stokes equations in two dimensions, SIAM Journal on Scientific Computing 20 (1998) 318–336.
[4] T. Askham, A. Cerfon, An adaptive fast multipole accelerated Poisson solver for complex geometries, Journal of Computational Physics 344 (2017) 1–22.
[5] N. Albin, O. P. Bruno, A spectral FC solver for the compressible Navier–Stokes equations in general domains I: Explicit time-stepping, Journal of Computational Physics 230 (2011) 6248–6270.
[6] D. B. Stein, R. D. Guy, B. Thomases, Immersed boundary smooth extension (IBSE): A high-order method for solving incompressible flows in arbitrary smooth domains, Journal of Computational Physics 335 (2017) 155–178.
[7] O. P. Bruno, M. Lyon, High-order unconditionally stable FC-AD solvers for general smooth domains I. Basic elements, Journal of Computational Physics 229 (2010) 2009–2033.
[8] M. Lyon, O. P. Bruno, High-order unconditionally stable FC-AD solvers for general smooth domains II. Elliptic, parabolic and hyperbolic PDEs; theoretical considerations, Journal of Computational Physics 229 (2010) 3358–3381.
[9] D. Shirokoff, J. C. Nave, A sharp-interface active penalty method for the incompressible Navier–Stokes equations, Journal of Scientific Computing 62 (2015) 53–77.
[10] S. H. Lui, Spectral domain embedding for elliptic PDEs in complex domains, Journal of Computational and Applied Mathematics 225 (2009) 541–557.
[11] O. P. Bruno, M. Lyon, High-order unconditionally stable FC-AD solvers for general smooth domains I. Basic elements, Journal of Computational Physics 229 (2010) 2009–2033.
[12] M. Lyon, O. P. Bruno, High-order unconditionally stable FC-AD solvers for general smooth domains II. Elliptic, parabolic and hyperbolic PDEs; theoretical considerations, Journal of Computational Physics 229 (2010) 3358–3381.
[13] N. Albin, O. P. Bruno, A spectral FC solver for the compressible Navier–Stokes equations in general domains I: Explicit time-stepping, Journal of Computational Physics 230 (2011) 6248–6270.
[14] K. Atkinson, The Numerical Solution of Integral Equations of the Second Kind, Cambridge Monographs on Applied and Computational Mathematics (Book 4), Cambridge University Press, 1997.
[15] L. af Klinteberg, A.-K. Tornberg, Error estimation for quadrature by expansion in layer potential evaluation, Advances in Computational Mathematics 43 (2017) 195–234.
[16] J. Helsing, R. Ojala, On the evaluation of layer potentials close to their sources, Journal of Computational Physics 227 (2008) 2899–2921.
[17] A. Greenbaum, L. Greengard, G. B. McFadden, Laplace's equation and the Dirichlet–Neumann map in multiply connected domains, Journal of Computational Physics 105 (1993) 267–278.
[18] L. Evans, Partial Differential Equations, Graduate Studies in Mathematics, American Mathematical Society, 2010.
[19] F. Vico, L. Greengard, M. Ferrando, Fast convolution with free-space Green's functions, Journal of Computational Physics 323 (2016) 191–203.
[20] L. af Klinteberg, D. S. Shamshirgar, A.-K. Tornberg, Fast Ewald summation for free-space Stokes potentials, Research in the Mathematical Sciences 4 (2017) 1.
[21] L. Greengard, J.-Y. Lee, Accelerating the nonuniform fast Fourier transform, SIAM Review 46 (2004) 443–454.
[22] R. Schaback, Native Hilbert spaces for radial basis functions I, in: New Developments in Approximation Theory, number 132 in International Series of Numerical Mathematics, Birkhäuser Verlag, 1997, pp. 255–282.
[23] B. Fornberg, E. Larsson, N. Flyer, Stable computations with Gaussian radial basis functions, SIAM Journal on Scientific Computing 33 (2011) 869–892.
[24] G. F. Fasshauer, Meshfree Approximation Methods with MATLAB, World Scientific Publishing Co., Inc., River Edge, NJ, USA, 2007.
[25] H. Wendland, Error estimates for interpolation by compactly supported radial basis functions of minimal degree, Journal of Approximation Theory 93 (1998) 258–272.
[26] E. Larsson, V. Shcherbakov, A. Heryudono, A least squares radial basis function partition of unity method for solving PDEs, SIAM Journal on Scientific Computing (2017).
[27] D. Shepard, A two-dimensional interpolation function for irregularly-spaced data, in: Proceedings of the 23rd ACM National Conference, 1968, pp. 517–524.
[28] C. Rieger, B. Zwicknagl, Sampling inequalities for infinitely smooth functions, with applications to interpolation and machine learning, Advances in Computational Mathematics 32 (2009) 103–129.
[29] E. Larsson, B. Fornberg, Theoretical and computational aspects of multivariate interpolation with increasingly flat radial basis functions, Computers & Mathematics with Applications 49 (2005) 103–130.
[30] H. Wendland, Fast evaluation of radial basis functions: Methods based on partition of unity, in: Approximation Theory X: Wavelets, Splines, and Applications, Vanderbilt University Press, 2002, pp. 473–483.
[31] L. Trefethen, Spectral Methods in MATLAB, Society for Industrial and Applied Mathematics, 2000.
[32] L. Ying, G. Biros, D. Zorin, A high-order 3D boundary integral equation solver for elliptic PDEs in smooth domains, Journal of Computational Physics 219 (2006) 247–275.
[33] E. Larsson, Radial basis function interpolation – RBF-QR, 2017. [Online; accessed 2-May-2017].