Corrected approximation strategy for piecewise smooth bivariate functions
Sergio Amat,∗ David Levin,† Juan Ruiz-Álvarez‡

Abstract
Given values of a piecewise smooth function f on a square grid within a domain Ω, we look for a piecewise adaptive approximation to f. Standard approximation techniques achieve reduced approximation orders near the boundary of the domain and near curves of jump singularities of the function or its derivatives. The idea used here is that the behavior near the boundaries, or near a singularity curve, is fully characterized and identified by the values of certain differences of the data across the boundary and across the singularity curve. We refer to these values as the signature of f. In this paper we aim at using these values in order to define the approximation; that is, we look for an approximation whose signature matches the signature of f. Given function data on a grid, and assuming the function is piecewise smooth, the singularity structure of the function is identified first. For example, in the 2-D case we find an approximation to the curves separating the smooth segments of f. Secondly, and simultaneously, we find the approximations to the different segments of f. A system of equations, derived from the principle of matching the signatures of the approximation and of the function with respect to the given grid, defines a first stage approximation. A second stage improved approximation is constructed using a global approximation to the error of the first stage approximation.

∗ Departamento de Matemática Aplicada y Estadística, Universidad Politécnica de Cartagena (Spain). e-mail: [email protected]
† School of Mathematical Sciences, Tel-Aviv University, Tel-Aviv (Israel). e-mail: [email protected]
‡ Departamento de Matemática Aplicada y Estadística, Universidad Politécnica de Cartagena (Spain). e-mail: [email protected]

This work was funded by project 20928/PI/18 (a project funded by the Comunidad Autónoma de la Región de Murcia through the call for grants for projects for the development of scientific and technical research by competitive groups, included in the Regional Program for the Promotion of Scientific and Technical Research (Action Plan 2018) of the Fundación Séneca-Agencia de Ciencia y Tecnología de la Región de Murcia) and by the Spanish national research project PID2019-108336GB-I00.

1 Introduction

Given data values of a piecewise smooth function on a square grid within a domain Ω, one looks for a high order approximation to f. Standard approximation techniques achieve reduced approximation orders near the boundary of the domain and near curves of jump singularities of f. In the univariate case, a high approximation order can be restored by adapting the approximation across the singular point s. This approach is not easily transferred to the multivariate case, where the singularities of the function may occur along curves in the 2D case, or across surfaces in the 3D case. We propose here an alternative way of approaching the 1D case, a way which is naturally transferable to the multivariate case. Within this approach we apply a signature operator which is used both for identifying the singularity location and for deriving smooth data to be used in the second approximation stage.

2 The univariate case

In this section we present the main idea for univariate function approximation. We have chosen to work with spline approximation, but the same idea can be used for approximation with other basis functions, with similar performance. We describe the fitting strategy using B-spline basis functions and develop the computation algorithm.
Let f be a piecewise smooth function on [0, 1], with f_1 ∈ C^m[0, s] and f_2 ∈ C^m(s, 1],

    f(x) = { f_1(x),  x < s,
           { f_2(x),  x ≥ s.        (2.1)

We are given function values {f_i ≡ f(ih)}_{i=0}^N, h = 1/N, and the data is extended beyond [0, 1] with zero values, {f_i = 0}_{i=−k}^{−1}, {f_i = 0}_{i=N+1}^{N+k}. An example is shown in blue in Figure 1. The process of approximation suggested here requires finding the position s of the discontinuity of the function. As described in [4], in case of a discontinuity at s, the interval (jh, (j+1)h) containing s can be detected if h ≤ h_c,

    h_c := |[f]| / sup_{t ∈ R\{s}} |f′(t)|,        (2.2)

where [f] is the jump in the function f. In the following we assume that this interval has been identified.

We suggest that significant information about the function, and especially its behavior near the boundaries and near the singularity point s, is encoded in the sequence of differences of f̄ = {f_i}_{i=−k}^{N+k}. Consider the vector d̄^k(f̄) of forward k-th order differences of f̄, with the elements

    d_i^k = Δ^k f_i,  i = −k, ..., N.        (2.3)

These differences are typically O(1) near x = 0 and x = 1 and also near s, and for k ≤ m they are O(h^k) away from the end points and the singular point. Fifth-order differences of the data in Figure 1 are shown in Figure 2. Away from the singularity and the end points the values are of magnitude O(h^5). Clearly, the values of the fifth-order differences show us the critical points for the approximation of f. Furthermore, they include important information about the function near the critical points.

Definition 2.1 (The signature of a function, σ_k(g)). Let g be a function on [0, 1]. We denote by ḡ the vector of values of g at the points {ih}_{i=0}^N, padded by k zero values on each side as above. We refer to the vector of the forward k-th order differences of the data vector ḡ as the signature of g, and we denote it by σ_k(g) = d̄^k(ḡ), σ_k(g) ∈ R^{N+k+1}.
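The signature of Definition 2.1 is simply a k-th order forward difference of the zero-padded samples. The following is a minimal numerical sketch; the test function, grid size and all names are illustrative choices, not taken from the text:

```python
import numpy as np

def signature(samples, k):
    """k-th order forward differences of the zero-padded sample vector."""
    padded = np.concatenate([np.zeros(k), samples, np.zeros(k)])
    return np.diff(padded, n=k)  # length: len(samples) + k

N, k, s = 100, 5, 0.5
x = np.linspace(0.0, 1.0, N + 1)
# Hypothetical piecewise smooth test function with a jump at s:
f = np.where(x < s, np.cos(4 * x), np.sin(4 * x) - 1.0)

sig = signature(f, k)
# Away from the end points and the jump the entries are O(h^k);
# near x = 0, x = 1 and x = s they are O(1).
print(np.max(np.abs(sig[10:45])))  # smooth interior of the left piece: tiny
print(np.max(np.abs(sig)))         # O(1) spikes at the jump and the boundaries
```

The O(1) spikes locate both the boundaries and the singular interval, which is how the signature doubles as a detection device.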
Figure 1: An example of a non-smooth function, with zero padding on both sides, together with the B-spline basis functions used to the right and to the left of the singularity.
Figure 2: The signature of f using fifth-order differences.

We choose to build the approximation using m-th order spline functions, represented in the B-spline basis. Let B_d^[m](x) be the B-spline of order m with equidistant knots {−md, ..., −2d, −d, 0}, and let N_d = 1/d + m − 1. The choice of B-splines is motivated by:

• The locality of the B-spline basis functions.
• Their approximation power, i.e., if f ∈ C^m[a, b], there exists a spline S_d^[m] such that ‖f − S_d^[m]‖_{∞,[a,b]} ≤ C d^{m+1}.

An example of the sixth-order B-spline basis functions used in the numerical example below is shown in Figure 1. The B-splines used to approximate f_1 are shown in green, and the B-splines used to approximate f_2 appear in red. The approach we suggest here involves finding approximations to f_1 and f_2 simultaneously, using two separate spline approximations:

    S_1 ≡ S_{d,1}^[m]|_{[0,s]}(x) = Σ_{i=1}^{N_d} a_i^1 B_d^[m](x − id)|_{[0,s]} ∼ f_1,        (2.4)

and

    S_2 ≡ S_{d,2}^[m]|_{(s,1]}(x) = Σ_{i=1}^{N_d} a_i^2 B_d^[m](x − id)|_{(s,1]} ∼ f_2.        (2.5)

Following the framework presented in [1], we present below a two-stage approximation algorithm: in the first stage the data is made smooth by subtracting a proper piecewise smooth approximation; in the second stage a smooth approximation to the smoothed data is generated, and the piecewise approximation used in the first stage is added back to produce the final approximation. In this paper the first stage approximation is a piecewise spline S whose signature σ_k(S) matches the signature σ_k(f). We define S as a combination of the above S_1 and S_2. Since S depends upon the coefficients {a_i^1} of S_1 and the coefficients {a_i^2} of S_2, the matching process finds all these unknown coefficients by solving the minimization problem

    [{a_i^1}_{i=1}^{N_d}, {a_i^2}_{i=1}^{N_d}] = arg min ‖σ_k(f) − σ_k(S)‖.        (2.6)

The minimization problem is linear with respect to the unknown coefficients.
Using the approximation power of m-th order splines, we can deduce that the minimal value of ‖σ_k(f) − σ_k(S)‖ is O(√N d^m). We denote by B_i^1 ≡ B_d^[m](· − id)|_{[0,s]} the restriction of B_d^[m](· − id) to the interval [0, s], and by B_i^2 ≡ B_d^[m](· − id)|_{(s,1]} its restriction to the interval (s, 1]. We combine {B_i^1} and {B_i^2} into one sequence {B̄_i}_{i=1}^{2N_d}, and denote their signatures by σ_i = σ_k(B̄_i), i = 1, ..., 2N_d. The induced system of linear equations for the splines' coefficients a = ({a_i^1}_{i=1}^{N_d}, {a_i^2}_{i=1}^{N_d}) is Aa = b, defined as follows:

    A_{i,j} = ⟨σ_i, σ_j⟩,  1 ≤ i, j ≤ 2N_d,        (2.7)

and

    b_i = ⟨σ_i, σ_k(f)⟩,  1 ≤ i ≤ 2N_d.        (2.8)

Remark 2.2.
Due to the locality of the B-splines, some of the basis functions {B_i^1} and {B_i^2} may be identically zero. It thus seems better to use only the non-zero basis functions. From our experience, since we use the generalized inverse approach for solving the system of equations, using all the basis functions gives the same solution. As demonstrated in the 2-D procedure below, other approximation spaces, such as tensor product polynomials or trigonometric functions, may be used, with similar approximation results.

Remark 2.3.
The above construction can be extended to the case of several singular points.
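The first-stage matching (2.6) can be sketched in a few lines. As Remark 2.2 notes, bases other than B-splines behave similarly, so for brevity this illustration uses one-sided monomials for the two pieces; the test function, degrees and all names are illustrative assumptions:

```python
import numpy as np

def signature(samples, k):
    padded = np.concatenate([np.zeros(k), samples, np.zeros(k)])
    return np.diff(padded, n=k)

N, k, deg, s = 100, 3, 6, 0.5
x = np.linspace(0.0, 1.0, N + 1)
f = np.where(x < s, np.exp(x), np.sin(3 * x) - 2.0)  # illustrative pieces
left, right = x < s, x >= s

# Signatures of the one-sided basis functions (monomials cut at s).
cols = []
for j in range(deg + 1):
    for mask in (left, right):
        cols.append(signature(np.where(mask, x**j, 0.0), k))
A = np.column_stack(cols)

# Least-squares signature matching, in the spirit of (2.6).
coef, *_ = np.linalg.lstsq(A, signature(f, k), rcond=None)

# Reassemble the piecewise approximant S on the grid.
S = np.zeros_like(x)
idx = 0
for j in range(deg + 1):
    for mask in (left, right):
        S += coef[idx] * np.where(mask, x**j, 0.0)
        idx += 1
print(np.max(np.abs(f - S)))  # error of the first-stage approximant
```

Because the padded data is fixed at zero, the signature operator is injective on the samples, so matching signatures pins down both pieces at once even though neither piece is fitted directly.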
We consider the approximation of the function

    f(x) = { x − …,                       x ≥ …,
           { x − … + (x + 1.…) cos(4x),   x < …,        (2.9)

given its values on a uniform grid with h = 0.01. We have used the signature of f defined by fifth-order forward differences of the extended data, and computed an approximation using fifth-degree splines with equidistant knots' distance d = 0.1. For this case the matrix A is of size 32 × 32, and rank(A) = 22. We have solved the linear system using the Matlab generalized inverse procedure pinv, together with an iterative refinement algorithm (described in [9], [7]), to obtain a high precision solution. We denote the resulting piecewise spline approximation by S∗. The results are shown in Figure 3, depicting the approximation error f − S∗.

Figure 3: The first stage approximation error f − S∗.

Let us estimate the minimal value of F(S) ≡ ‖σ_k(f) − σ_k(S)‖. We do it by finding a function ˜S for which F(˜S) is small. Using the value F(˜S) we can deduce some quantitative estimates for the function S∗ which is the minimizer of F(S). Recalling that S is a piecewise spline function of order m ≥ k, it follows that both σ_k(f) and σ_k(S) are O(h^k) away from the boundaries and the singularity, and so is σ_k(f − S). Let the knots' distance d be small enough such that the number of B-splines influencing each of the intervals [0, s] and [s, 1] is greater than 2k + 2. We can find ˜S_1 which coincides with f_1 at k + 1 data points at both ends of the interval [0, s], and ˜S_2 which coincides with f_2 at k + 1 data points at both ends of the interval [s, 1]. The combination ˜S of ˜S_1 and ˜S_2 satisfies σ_k(f − ˜S) = 0 near 0, near s and near 1. Combining this with the observation that σ_k(f − ˜S) ≤ C h^k away from 0, s and 1, it follows that

    F(˜S) = ‖σ_k(f − ˜S)‖ ≤ C √N h^k.        (2.10)

Denoting by S∗ the minimizer of ‖σ_k(f) − σ_k(S)‖,

    ‖σ_k(f − S∗)‖_∞ = ‖σ_k(f) − σ_k(S∗)‖_∞ ≤ C √N h^k.        (2.11)

Thus we have shown that the signature of S∗ is close to the signature of f. In order to show that S∗ is close to f we need to consider the inverse of the signature operator. Since f(ih) = S∗(ih) = 0 for i = −k, ..., −1, it follows that

    |f(ih) − S∗(ih)| < C √N h^k,  0 ≤ i ≤ k.

Similarly,

    |f(ih) − S∗(ih)| < C √N h^k,  N − k ≤ i ≤ N.

Knowing the error bounds at points near 0 and near 1, and in view of the bound (2.11), it follows that

    |f(ih) − S∗(ih)| < C ‖Σ^{−1}‖ √N h^k,  0 ≤ i ≤ N,        (2.12)

where Σ is the matrix representation of the linear operator defining the signature. For example, for k = 3 the signature is defined by third-order differences, and the relevant matrix Σ is the banded N × N matrix with entries Σ_{i−2,i} = −1, Σ_{i−1,i} = 3, Σ_{i,i} = −3, Σ_{i+1,i} = 1. It turns out that ‖Σ^{−1}‖ = O(N^3), which is quite bad for use in (2.12), and it calls for an alternative definition of a signature, such that its inverse is bounded independently of N.

Another issue is how close S∗ is to f on all of [0, 1]. We regard S∗ as the first approximation to f, and the second stage approximation is obtained by correcting the first one.

The 1-D procedure developed in [1] is a two-stage procedure. In the first stage the non-smooth data is transformed into smooth data by subtracting a one-sided polynomial with appropriate jumps in its derivatives. The second stage is a correction step using a standard subdivision process for approximating the residual smooth data. Here again the approximation S∗ obtained by matching the signature of f is just the first stage in computing the final approximation to f. The second stage is based upon the observation that the error e = f − S∗, evaluated at the data points {ih}_{i=0}^N, forms a 'smooth' data sequence. Applying an appropriate smooth approximation procedure to the data {e(ih)}_{i=0}^N, we obtain an approximation ˜e(x) to e. The final corrected approximation is defined as

    ˜f ≡ S∗ + ˜e.        (2.13)

The approximation error f − ˜f is the same as the error e − ˜e. To estimate this error we should examine how smooth e is. If f and its derivatives are discontinuous at s, then e = f − S∗ will also be discontinuous at s.
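The two-stage construction (2.13) can be illustrated with a simple stand-in for the first stage: one-sided polynomial least-squares fits instead of the signature-matched spline S∗, and piecewise-linear interpolation standing in for the cubic spline interpolation used later. The test function and all names are illustrative assumptions:

```python
import numpy as np

N, s = 100, 0.5
x = np.linspace(0.0, 1.0, N + 1)
f = np.where(x < s, np.cos(2 * x), np.cos(2 * x) + 1.0 + x)  # jump at s

# Stand-in first stage: fit each piece separately (degree 3, so the
# residual e is smooth but nonzero).
cl = np.polyfit(x[x < s], f[x < s], 3)
cr = np.polyfit(x[x >= s], f[x >= s], 3)

def S_star(t):
    return np.where(t < s, np.polyval(cl, t), np.polyval(cr, t))

e = f - S_star(x)                    # residual data: no O(1) jump is left
xf = np.linspace(0.0, 1.0, 1001)     # finer evaluation grid
e_tilde = np.interp(xf, x, e)        # second stage: smooth approximation of e
f_tilde = S_star(xf) + e_tilde       # corrected approximation, as in (2.13)

true_f = np.where(xf < s, np.cos(2 * xf), np.cos(2 * xf) + 1.0 + xf)
print(np.max(np.abs(true_f - f_tilde)))  # error of the corrected approximation
```

The point of the correction step is visible in `e`: once the piecewise approximant absorbs the jump, the residual is a smooth data sequence that any standard smooth interpolant can handle.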
However, as shown below, the smaller the signature of e = f − S∗ is made, the smaller is the jump in e and its derivatives across s.

To build the approximation to e we may use any univariate approximation method. However, it is simpler to understand the approximation error if we use a local approximation method, such as subdivision or quasi-interpolation by splines. We assume that the approximation to e is defined as

    ˜e(x) = Σ_{i=−k}^{N+k} e(ih) φ(x/h − i),        (2.14)

where φ is of finite support, supp(φ) = σ_0, and polynomials of degree ≤ ℓ are reproduced,

    Σ_{i∈Z} p(ih) φ(x/h − i) = p(x),  p ∈ Π_ℓ.        (2.15)

If ℓ < m (m being the order of the spline S∗ and the smoothness degree of f_1 and f_2), then at points x which are of distance ≥ σ_0 from s and from 0 and 1, we have

    |e(x) − ˜e(x)| ≤ C h^{ℓ+1}.        (2.16)

Let us check the approximation power near s. A similar argument holds near 0 and 1. Assuming that jh ≤ s < (j+1)h, we may also assume (by subtracting a polynomial of degree k − 1) that

    e(ih) = 0,  j − k < i ≤ j.        (2.17)

Using (2.11) and (2.17) it follows that

    e(ih) = O(h^{k−1/2}),  j < i ≤ j + ℓ.        (2.18)

Therefore, both e and ˜e are O(h^{k−1/2}) near s, hence, near s,

    |e(x) − ˜e(x)| ≤ C h^{k−1/2}.        (2.19)

Coming back to the numerical example in Section 2.2.1, we compute the corrected approximation by applying cubic spline interpolation to the smoothed data and adding S∗ to it. The maximal error in the corrected approximation is significantly smaller than the maximal error of the first stage approximation S∗ (compare Figures 3 and 4).

3 The bivariate case

Let f be a piecewise smooth function on [0, 1]^2, defined by the combination of two pieces f_1 ∈ C^m[Ω_1] and f_2 ∈ C^m[Ω_2], Ω_2 = [0, 1]^2\Ω_1. We assume that we are given function values {f_ij ≡ f(ih, jh)}_{i,j=0}^N, h = 1/N, with zero values {f_ij = 0} for i or j in {−k, ..., −1} ∪ {N+1, ..., N+k}. We do not know the position of the dividing curve separating Ω_1 and Ω_2. We denote this curve by Γ, and we assume that it is a C^m-smooth curve. As in the 1-D case, the existence of a singularity curve in [0, 1]^2 significantly influences standard approximation procedures, especially near Γ, and the approximation power also deteriorates near the boundaries. As in the univariate case, the approximation procedure described below is based upon matching a signature of the function and the approximant. The computation algorithm involves finding approximations to f_1 and f_2 simultaneously, followed by a correction step. The first stage is the approximation of Γ.

Figure 4: The corrected approximation error f − ˜f.

Finding a good approximation of the singularity curve Γ is more involved than finding the singularity point s in the univariate case. To simplify the presentation we assume that Ω_1 and Ω_2 are simply connected domains, and the set of data points P = {(ih, jh) | 0 ≤ i, j ≤ N} is consequently divided into two sets:

    P_1 = {(ih, jh) ∈ Ω_1, 0 ≤ i, j ≤ N},
    P_2 = {(ih, jh) ∈ Ω_2, 0 ≤ i, j ≤ N}.        (3.1)

As in the univariate case we assume that h is small enough to assure the detection of the singularity interval along each horizontal and vertical line in [0, 1]^2. This involves estimating a critical h_c as in (2.2) along horizontal and vertical lines (as in [4]). For each 0 ≤ j ≤ N consider the data values along the horizontal line y = jh, {f_ij}_{i=0}^N. As in the univariate case, we find the interval containing the singularity, if it exists, and denote the midpoint of this interval by s_j. The points {(s_j, jh)} found in this way are at distance < h/2 from Γ. Similarly, along vertical lines we find points {(ih, t_i)} which are near Γ. An important outcome of this process is that it identifies and separates data points from both sides of the singularity curve. Another way of achieving this is suggested in [3]. Let us continue the description of the procedure alongside the following numerical example.

Consider the piecewise smooth function on [0, 1]^2 with a jump singularity across the curve Γ which is the quarter circle defined by (x+1)^2 + (y+1)^2 = 10. The test function is shown in Figure 5 and is defined as

    f(x, y) = { (x + y + 2) cos(4x) + sin(4(x + y)),  (x+1)^2 + (y+1)^2 ≥ 10,
              { sin(4(x + y)),                        (x+1)^2 + (y+1)^2 < 10.        (3.2)

Figure 5: The test function for the 2D non-smooth case.

Let us denote by Q_0 all the points {(s_j, jh)} and {(ih, t_i)} found as described above, for h = 0.01. These points are shown in red in Figure 6, on top of the curve Γ (in green). Now we use these points to construct a tensor product cubic spline D(x, y), whose zero level curve defines the approximation to Γ. To construct D we first overlay on [0, 1]^2 a net of m_1 × m_1 points, Q_1, Q_1 ⊂ P. These are the green points displayed in Figure 7, for m_1 = 9. To each point in Q_1 we assign the value of its distance from the set Q_0, with a plus sign for the points which are in P_1, and a minus sign for the points in P_2. To each point in Q_0 we assign the value zero. The spline function D is now defined by the least-squares approximation to the values at all the points Q_0 ∪ Q_1. We have used here a tensor product cubic spline on a uniform mesh with knots' distance d = 0.25. It can be shown that the corresponding normal equations are non-singular if d > 1/m_1. We denote the zero level curve of the resulting D by Γ_0; this curve is depicted in yellow in Figure 7. It seems that Γ_0 is already a good approximation to Γ (in green), and yet it may not completely separate the points P_1 from the points P_2.

Figure 6: The singularity curve Γ (green) and the points Q_0.

Figure 7: The approximation Γ_0 of the singularity curve (yellow).
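The implicit fit of the singularity curve can be sketched as follows, with a bivariate polynomial standing in for the tensor-product cubic spline D, and a synthetic circular arc playing the role of Γ; all names and constants are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
theta = rng.uniform(0, np.pi / 2, 200)
q0 = np.column_stack([0.6 * np.cos(theta), 0.6 * np.sin(theta)])  # detected on-curve points

g = np.linspace(0.05, 0.95, 9)
q1 = np.array([(a, b) for a in g for b in g])                     # coarse net Q1
dist = np.min(np.linalg.norm(q1[:, None, :] - q0[None, :, :], axis=2), axis=1)
sign = np.where(np.hypot(q1[:, 0], q1[:, 1]) > 0.6, 1.0, -1.0)    # side labels

pts = np.vstack([q0, q1])
vals = np.concatenate([np.zeros(len(q0)), sign * dist])           # 0 on Q0, signed distance on Q1

deg = 3
basis = np.column_stack([pts[:, 0]**i * pts[:, 1]**j
                         for i in range(deg + 1) for j in range(deg + 1)])
coef, *_ = np.linalg.lstsq(basis, vals, rcond=None)               # least-squares fit of D

def D(x, y):
    val, idx = 0.0, 0
    for i in range(deg + 1):
        for j in range(deg + 1):
            val += coef[idx] * x**i * y**j
            idx += 1
    return val

# D should be near zero on the curve and change sign across it.
on_curve = D(0.6 * np.cos(0.7), 0.6 * np.sin(0.7))
print(on_curve, D(0.1, 0.1), D(0.9, 0.9))
```

The zero level set of the fitted D is the computable approximation to the curve, and its sign classifies grid points into the two sub-domains.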
As in the univariate case, the approximation strategy is based upon matching somesignature of the approximation to the signature of the given function data.
Definition 3.1 (The signature of a function, σ(g)). Let g be a function on [0, 1]^2. We denote by ḡ the matrix of values of g at the points {(ih, jh)}_{i,j=0}^N, padded by two layers of zero values on each side as above. We define the signature of g as the matrix of the discrete biharmonic operator applied to ḡ, namely,

    σ(g) = Δ_h^2(ḡ).        (3.3)

Considering the test function (3.2), discretized using a mesh size h = 0.01 and padded by ten layers of zero values all around, its signature is displayed in Figure 8 below.

Figure 8: The signature σ(f) of the test function (3.2).

The first stage of the approximation process is similar to the construction in the univariate case, defined by equations (2.4), (2.5), (2.6). We look here for an approximation S to f which is a combination of two bivariate spline components:

    S_1(x, y) = Σ_{i=1}^{N_d} Σ_{j=1}^{N_d} a_ij^1 B_ij(x, y),  (x, y) ∈ ˜Ω_1,        (3.4)

    S_2(x, y) = Σ_{i=1}^{N_d} Σ_{j=1}^{N_d} a_ij^2 B_ij(x, y),  (x, y) ∈ ˜Ω_2,        (3.5)

where

    B_ij(x, y) = B_d^[m](x − id) B_d^[m](y − jd),        (3.6)

and

    ˜Ω_1 = {(x, y) | D(x, y) ≥ 0, (x, y) ∈ [0, 1]^2},        (3.7)
    ˜Ω_2 = {(x, y) | D(x, y) < 0, (x, y) ∈ [0, 1]^2}.        (3.8)

Definition 3.2 (The signature of S, σ(S)). We denote by S̄ the matrix of values

    { Σ_{i=1}^{N_d} Σ_{j=1}^{N_d} a_ij^1 B_ij(x, y),  (x, y) ∈ P_1,
    { Σ_{i=1}^{N_d} Σ_{j=1}^{N_d} a_ij^2 B_ij(x, y),  (x, y) ∈ P_2,        (3.9)

padded by two layers of zero values on each side as above. The signature of S is σ(S) = Δ_h^2(S̄).

Having defined the signature of f and of S, we now look for an approximation S such that the signatures of f and S are matched in the least-squares sense:

    [{a_ij^1}_{i,j=1}^{N_d}, {a_ij^2}_{i,j=1}^{N_d}] = arg min ‖σ(f) − σ(S)‖.        (3.10)

For 1 ≤ i, j ≤ N_d let B̄_ij^1 be the matrix of N × N values

    { B_ij(x, y),  (x, y) ∈ P_1,
    { 0,           (x, y) ∈ P_2,        (3.11)

padded by zeros. The signature of B̄_ij^1, σ(B̄_ij^1), includes the matrix of Δ_h^2 values of B_ij on P_1. Similarly we define σ(B̄_ij^2) using the values of B_ij on P_2:

    { B_ij(x, y),  (x, y) ∈ P_2,
    { 0,           (x, y) ∈ P_1.        (3.12)

We rearrange the signatures {σ(B̄_ij^1)}_{i,j=1}^{N_d}, {σ(B̄_ij^2)}_{i,j=1}^{N_d} as a vector σ(B̄) of length 2N_d^2 of signatures.
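The 2-D signature of Definition 3.1 amounts to zero-padding the sample matrix by two layers and applying the (undivided) 5-point Laplacian stencil twice, i.e. the 13-point discrete biharmonic operator. A minimal sketch; the normalization (no 1/h^2 factors) and the test function are our illustrative choices:

```python
import numpy as np

def laplacian(u):
    """Undivided 5-point Laplacian stencil (interior of u)."""
    return (u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:]
            - 4.0 * u[1:-1, 1:-1])

def signature2d(samples):
    padded = np.pad(samples, 2)          # two layers of zeros on each side
    return laplacian(laplacian(padded))  # discrete biharmonic of the padded data

N = 100
x = np.linspace(0, 1, N + 1)
X, Y = np.meshgrid(x, x, indexing="ij")
inside = (X + 1)**2 + (Y + 1)**2 < 5.0   # one side of a hypothetical circular arc
F = np.sin(4 * (X + Y)) + np.where(inside, 0.0, 1.0 + X * Y)

sig = signature2d(F)
# Smooth interior entries are O(h^4); entries touching the jump curve or the
# zero padding at the boundary are O(1).
print(np.max(np.abs(sig[15:25, 15:25])))  # smooth patch: tiny
print(np.max(np.abs(sig)))                # O(1) ridge along the curve/boundary
```

As in 1-D, the O(1) ridge of the signature traces out both the domain boundary and the singularity curve.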
Rearranging the unknowns {a_ij^1}_{i,j=1}^{N_d}, {a_ij^2}_{i,j=1}^{N_d} as a vector a of length 2N_d^2, and rearranging each signature as a vector of length N^2, the linear system of normal equations for the spline coefficients is Aa = b, where

    A_{k,ℓ} = ⟨σ(B̄)_k, σ(B̄)_ℓ⟩,  1 ≤ k, ℓ ≤ 2N_d^2,        (3.13)

and

    b_k = ⟨σ(B̄)_k, σ(f)⟩,  1 ≤ k ≤ 2N_d^2.        (3.14)

The above framework is applied to the numerical example with the test function (3.2), with h = 0.01 and sixth-order tensor product splines with equidistant knots. Solving the linear system gives the first stage approximation S∗, combined of two segments, S∗_1 on ˜Ω_1 and S∗_2 on ˜Ω_2. In Figure 9 we show the error e = f − S∗ at the grid points. We observe that e is quite smooth, which means that the piecewise spline approximation S∗ has captured well the singularity structure of f, both across Γ and along the boundaries.

Figure 9: The error in the first stage approximation.

After computing the first stage approximation S∗, obtained by matching the signature of f, the second stage is based upon the observation that the error e = f − S∗, evaluated at the data points {(ih, jh)}_{i,j=0}^N, forms a 'smooth' data sequence. Applying an appropriate smooth approximation procedure to the data {e(ih, jh)}_{i,j=0}^N, we obtain an approximation ˜e(x, y) to e. The final corrected approximation is defined as

    ˜f ≡ S∗ + ˜e.        (3.15)

Continuing the above numerical example, we have applied fifth-order tensor product spline interpolation to e. Figure 10 depicts the values of the error in the final corrected approximation ˜f, evaluated on a fine grid. The maximal error is attained at the boundary; there are also relatively large errors near the singularity curve, while everywhere else the errors are of much smaller magnitude. We recall that f is piecewise-defined on Ω_1 and Ω_2, while S∗ is piecewise-defined on ˜Ω_1 and ˜Ω_2. In comparing f and ˜f, we let S∗ be piecewise-defined on Ω_1 and Ω_2, with the same S∗_1 and S∗_2. Similar results are obtained if we let f be piecewise-defined on ˜Ω_1 and ˜Ω_2, with the same f_1 and f_2.

Figure 10: The error in the second stage corrected approximation f − ˜f.

Consider the case of two disjoint singularity curves, subdividing the domain into three sub-domains, Ω_1, Ω_2, Ω_3, separated by two smooth curves Γ_1 and Γ_2, and a piecewise-defined function as in Figure 11. Function values are given on a uniform grid with h = 0.01. In Figure 12 we display the extended data, padded with zeros around the square.
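The solution step can be sketched as in the univariate experiment: the normal-equations matrix may be rank deficient, so the system is solved with a pseudoinverse, optionally followed by iterative refinement steps (cf. [7], [9]). The matrix below is synthetic; the 32 × 32 size and rank 22 of the earlier univariate example are reused purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((32, 22))
A = M @ M.T                      # symmetric 32 x 32 Gram-type matrix, rank 22
b = A @ rng.standard_normal(32)  # consistent right-hand side

Ainv = np.linalg.pinv(A)         # generalized (Moore-Penrose) inverse
a = Ainv @ b
for _ in range(3):               # iterative refinement on the residual
    r = b - A @ a
    a = a + Ainv @ r

print(np.linalg.norm(b - A @ a))  # residual after refinement
```

The refinement loop re-solves for the residual and adds the correction back, which sharpens the computed solution when A is ill-conditioned on its range.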
As in the first example, we assume that h is small enough to assure the detection of the singularity interval along each horizontal and vertical line in [0, 1]^2 ([4]). For each horizontal line y = jh and each vertical line x = ih we detect the intervals containing a singular point, and collect all the midpoints of these intervals, denoting this set of points by Q_0. Within the detection algorithm we also include a proper ordering algorithm by which we obtain a subdivision of the set of data points P = {(ih, jh)}_{i,j=0}^N into three disjoint sets, P_k ∈ Ω_k, k = 1, 2, 3, where P_2 denotes the set which has neighbors in the two other sets, P_1 and P_3.

Figure 11: The test function for Example 3.4, built from the three pieces f(x, y) = x, f(x, y) = sin(2(y − x)), and f(x, y) = x + cos(5(x + y)).

Figure 12: The extended data for Example 3.4.

As in the case of one singularity curve, we construct a tensor product cubic spline D(x, y), whose zero level curves approximate Γ_1 and Γ_2. To construct D we overlay on [0, 1]^2 a net of 11 × 11 points, Q_1, Q_1 ⊂ P. To each point in Q_1 we assign the value of its distance from the set Q_0, with a plus sign for the points which are in P_2, and a minus sign for the points in P_1 or P_3. To each point in Q_0 we assign the value zero. The spline function D is now defined by the least-squares approximation to the values at all the points Q_0 ∪ Q_1, displayed in Figure 13. We have used here a tensor product cubic spline on a uniform mesh with knots' distance d = 0.2. We denote the zero level curves of the resulting D by ˜Γ_1, ˜Γ_2; these curves are depicted in yellow in Figure 14, and they provide a good approximation to Γ_1 and Γ_2. To improve the approximation quality we assign a higher weight to the points in Q_0 in the least-squares cost function defining D.

Figure 13: The points Q_0 ∪ Q_1 used for approximating the singularity curves.
Figure 14: The singularity curves for Example 3.4, in blue, and their approximations, in yellow.

Considering the test function defined in Figure 11, discretized using mesh size h = 0.01 and padded by ten layers of zero values all around, its Δ_h^2 signature is displayed in Figure 15 below.

Figure 15: The signature σ(f) of the test function with two singularity curves.

For the first stage approximation process we look here for an approximation S to f which is a combination of three bivariate spline components:

    S_1(x, y) = Σ_{i=1}^{N_d} Σ_{j=1}^{N_d} a_ij^1 B_ij(x, y),  (x, y) ∈ ˜Ω_1,        (3.16)

    S_2(x, y) = Σ_{i=1}^{N_d} Σ_{j=1}^{N_d} a_ij^2 B_ij(x, y),  (x, y) ∈ ˜Ω_2,        (3.17)

    S_3(x, y) = Σ_{i=1}^{N_d} Σ_{j=1}^{N_d} a_ij^3 B_ij(x, y),  (x, y) ∈ ˜Ω_3,        (3.18)

where ˜Ω_1, ˜Ω_2, ˜Ω_3 are the three disjoint domains defined by the bicubic spline D.

Definition 3.4 (The signature of S, σ(S)). As in Definition 3.2 above, we define S̄ as the matrix of values of S at the grid points, padded by two layers of zero values all around, and compute its signature σ(S) = Δ_h^2(S̄).

Having defined the signature of f and of S, we now look for an approximation S such that the signatures of f and S are matched in the least-squares sense:

    [{a_ij^1}_{i,j=1}^{N_d}, {a_ij^2}_{i,j=1}^{N_d}, {a_ij^3}_{i,j=1}^{N_d}] = arg min ‖σ(f) − σ(S)‖.        (3.19)

3.4.2 The induced system of equations

For r = 1, 2, 3, the signature σ(B̄_ij^r) includes the matrix of Δ_h^2 values of B_ij restricted to P_r. We rearrange the signatures {σ(B̄_ij^r)}_{i,j=1}^{N_d}, r = 1, 2, 3, as a vector σ(B̄) of length 3N_d^2 of signatures. Rearranging the unknowns {a_ij^r}_{i,j=1}^{N_d}, r = 1, 2, 3, as a vector a of length 3N_d^2, the linear system of normal equations for the spline coefficients is Aa = b, where

    A_{k,ℓ} = ⟨σ(B̄)_k, σ(B̄)_ℓ⟩,  1 ≤ k, ℓ ≤ 3N_d^2,        (3.20)

and

    b_k = ⟨σ(B̄)_k, σ(f)⟩,  1 ≤ k ≤ 3N_d^2.        (3.21)

The above framework is applied to the numerical example with the test function defined in Figure 11, with h = 0.01 and sixth-order tensor product splines with knots' distance d = 0.1. Solving the above linear system of equations for the spline coefficients gives us the first stage approximation S∗, combined of three segments, S∗_1 on ˜Ω_1, S∗_2 on ˜Ω_2 and S∗_3 on ˜Ω_3. In Figure 16 we show the error e = f − S∗ at the initial grid points. We observe that e is quite smooth, which means that the piecewise spline approximation S∗ has captured well the singularity structure of f, both across the singularity curves and along the boundaries.

Figure 16: The error in the first stage approximation to f.

3.5 Second stage - corrected approximation

Here also the error e = f − S∗, evaluated at the data points {(ih, jh)}_{i,j=0}^N, forms a 'smooth' data sequence, and we apply a smooth approximation procedure to the data {e(ih, jh)}_{i,j=0}^N to obtain an approximation ˜e(x, y). The final corrected approximation is defined as

    ˜f ≡ S∗ + ˜e.        (3.22)

Computing ˜e using fifth-order tensor product spline interpolation to e, we derive the final corrected approximation to f, shown in Figure 17.

Figure 17: The final corrected approximation to f.

Figure 18 depicts the values of the error in the final corrected approximation ˜f, evaluated on a fine grid. The maximal error is attained at a singularity curve; there are also relatively large errors near the boundaries, while everywhere else the errors are of much smaller magnitude.

Figure 18: The final corrected approximation errors.

4 The 3D case

Let f be a piecewise smooth function on [0, 1]^3, defined by the combination of two pieces f_1 ∈ C^m[Ω_1] and f_2 ∈ C^m[Ω_2], Ω_2 = [0, 1]^3\Ω_1. We assume that we are given function values {f_ijk ≡ f(ih, jh, kh)}_{i,j,k=0}^N, h = 1/N, extended with zero values as above. We do not know the position of the dividing surface separating Ω_1 and Ω_2. We denote this surface by Θ, and we assume that it is a C^m-smooth surface.
As in the 1-D case, the existence of a singularity in [0, 1]^3 significantly influences standard approximation procedures, especially near Θ, and the approximation power also deteriorates near the boundaries. As in the univariate case, the approximation procedure described below is based upon matching a signature of the function and the approximant. The computation algorithm involves finding approximations to f_1 and f_2 simultaneously, followed by a correction step. The first step, approximating Θ, is a straightforward extension of the 2-D procedure for approximating singularity curves.

Consider the piecewise smooth function f(x, y, z) on Ω = [0, 1]^3 with a jump singularity across the surface Θ, which is the boundary of a closed ball B of radius 0.33 centered inside the cube,

    f(x, y, z) = { exp((x + y)/2) − sin(2x),         (x, y, z) ∉ B,
                 { sin(4(x + y + z)) sin(2(x − y)),  (x, y, z) ∈ B.        (4.1)

Assume we are given function values {f_ijk = f(ih, jh, kh), (ih, jh, kh) ∈ Ω}, h = 1/N, extended with layers of zero values around Ω. A cross-section (at a fixed level of z) of the extended data of the test function (4.1), for N = 41, is shown in Figure 19.

Figure 19: A cross-section of the 3D test function.

Let us denote by Q_0 all the points {(s_{j,k}, jh, kh)}, {(ih, t_{i,k}, kh)}, and {(ih, jh, r_{i,j})} found as described above for the 2D case, i.e., the midpoints of intervals containing a singularity along lines parallel to the axes. We use these points to construct a tensor product cubic spline D(x, y, z), whose zero level surface defines the approximation to Θ. To construct D we first overlay on [0, 1]^3 a net of m_1 × m_1 × m_1 points, Q_1. These are the red points displayed in Figure 20, for m_1 = 11, together with the points Q_0. To each point in Q_1 we assign the value of its distance from the set Q_0, with a plus sign for the points which are in B, and a minus sign for the points outside B. To each point in Q_0 we assign the value zero. The spline function D is now defined by the least-squares approximation to the values at all the points Q_0 ∪ Q_1. We have used here a tensor product cubic spline on a uniform mesh with knots' distance d = 0.2. We denote the zero level surface of the resulting D by Θ_0; cross-section curves of this surface are depicted in yellow in Figure 21, together with the exact corresponding curves of Θ in blue.

Figure 20: The points Q_0 (red) and the points Q_1 (blue).

Figure 21: Cross-sections of the approximation Θ_0 of the singularity surface (yellow), at z = 0.25 and at a second level of z.

We use the signature of f defined by fourth-order forward differences of the extended data in the three axis directions. The first stage approximation is computed by matching the signature of f and the signature of the approximant S, defined by two sets of tensor product Chebyshev polynomials up to degree n: one for the approximation over Ω_1 = B, and the other for the approximation over Ω_2 = [0, 1]^3\B. In 3-D numerical tests we are limited by the memory constraints of Matlab. We took h = 0.025 and tensor product polynomials of degree 8. The error in the resulting approximation is displayed in Figure 22.

Figure 22: First stage approximation error, at a fixed level of z.

Remark 4.1 (3D analysis).
Let the signature σ_m of the function data be defined by its m-th order differences along the three axes. Adapting the discussion in Section 2.3 to the 3D case, it follows that for the approximant S∗ minimizing ‖σ_m(f) − σ_m(S)‖,

    ‖σ_m(f − S∗)‖_∞ = ‖σ_m(f) − σ_m(S∗)‖_∞ ≤ C √N h^m.        (4.2)

Using the above result, it follows that the error e = f − S∗, evaluated at the data points {(ih, jh, kh)}_{i,j,k=0}^N, forms a 'smooth' data sequence, and we can apply a smooth approximation procedure to the data {e(ih, jh, kh)}_{i,j,k=0}^N to obtain an approximation ˜e(x, y, z). The final corrected approximation is defined as

    ˜f ≡ S∗ + ˜e.        (4.3)

Computing ˜e using cubic tensor product spline interpolation to e, we derive the final corrected approximation to f, shown in Figure 23.

Figure 23: The final corrected approximation on a fine mesh, at a fixed level of z.

Figure 24 depicts the values of the error in the final corrected approximation ˜f, evaluated on a fine grid. The maximal error is attained at the singularity surface; there are also relatively large errors near the boundaries, while everywhere else the errors are of much smaller magnitude.

Figure 24: The final corrected approximation errors, at a fixed level of z.

References

[1] Amat S., Levin D., Ruiz J., 2020, Corrected subdivision approximation of piecewise smooth functions. ArXiv.
[2] Amat S., Ruiz J., Trillo J.C., 2019, On an algorithm to adapt spline approximations to the presence of discontinuities. Numer. Algorithms 80.3, 903-936.
[3] Amir A., Levin D., 2018, High order approximation to non-smooth multivariate functions. Comput. Aided Geom. Des. 63, 31-65.
[4] Aràndiga F., Cohen A., Donat R., Dyn N., 2005, Interpolation and approximation of piecewise smooth functions. SIAM J. Numer. Anal. 43.1, 41-57.
[5] Gottlieb D., Shu C.W., 1997, On the Gibbs phenomenon and its resolution. SIAM Rev. 39, 644-668.
[6] Gottlieb S., Jung J.H., Kim S., 2011, A review of David Gottlieb's work on the resolution of the Gibbs phenomenon. Commun. Comput. Phys. 9.3, 497-519.
[7] Moler C.B., 1967, Iterative refinement in floating point. Journal ACM 14(2), 316-321.
[8] Tadmor E., 2007, Filters, mollifiers and the computation of the Gibbs phenomenon. Acta Numer. 16, 305-378.
[9] Wilkinson J.H., 1963, Rounding Errors in Algebraic Processes. Prentice-Hall, Englewood Cliffs, N.J.
Appendix
From k-th difference to lower order differences

Lemma 4.2.
Let h = 1/(N+1), g(ih) = 0 for i < 0, and Δ^k g(ih) = O(h^α), −k ≤ i ≤ N − k. Then

    Δ^{k−ℓ} g(ih) = O(h^{α−ℓ}),  −k ≤ i ≤ N − k + ℓ.

Proof.
It is enough to prove the claim for a reduction of the difference order by one step; we show the step from Δ^1 to Δ^0. From Δg(ih) = g((i+1)h) − g(ih) we have g((i+1)h) = Δg(ih) + g(ih). Assuming g(−h) = 0, it follows that

    g(jh) = Σ_{i=−1}^{j−1} Δg(ih).

Assuming Δg(ih) = O(h^α), −1 ≤ i ≤ N − 1, we obtain

    |g(jh)| ≤ jCh^α,  0 ≤ j ≤ N,

and since j ≤ N + 1 = 1/h, this implies g(jh) = O(h^{α−1}). ∎
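Lemma 4.2 is easy to check numerically: construct g with zero values to the left and with all k-th differences equal to h^α (a worst case for the summation argument in the proof), and measure the (k−1)-th differences against h^(α−1). The sizes and names below are illustrative:

```python
import numpy as np

def check(N, k=3, alpha=4.0):
    h = 1.0 / (N + 1)
    dk = np.full(N, h**alpha)     # k-th differences, all equal to h^alpha
    g = dk.copy()
    for _ in range(k):            # invert one difference order at a time:
        # prepend the zero left value and sum, as in the proof
        g = np.concatenate([[0.0], g]).cumsum()
    # g now satisfies Delta^k g = h^alpha with zero values to the left;
    # measure its (k-1)-th differences against the predicted h^(alpha-1):
    d_km1 = np.diff(g, n=k - 1)
    return np.max(np.abs(d_km1)) / h**(alpha - 1)

r1, r2 = check(100), check(1000)
print(r1, r2)  # ratios stay bounded as N grows, as the lemma predicts
```

The ratio stays near 1 for growing N, matching the loss of exactly one power of h per difference order.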