Galerkin Method with Trigonometric Basis on Stable Numerical Differentiation
Yidong Luo
School of Mathematics and Statistics, Wuhan University, Hubei Province, P. R. China
E-mail:
August 2017
Abstract.
This paper considers numerical differentiation of order $p$ ($p = 1, 2, 3$) for functions $y$ on $(0, 2\pi)$. The problems are transformed into corresponding Fredholm integral equations of the first kind. Computational schemes with analytic solution formulas are designed using the Galerkin method on a trigonometric basis. Convergence and divergence are both analysed in Corollaries 5.1 and 5.2, and a priori error estimates are obtained uniformly in Theorems 6.1, 7.1 and 7.2. The algorithm thus achieves the optimal convergence rate $O(\delta^{\frac{2\mu}{2\mu+1}})$ ($\mu = \frac{1}{2}$ or $1$) under a periodic Sobolev source condition of order $2\mu p$. Besides, we indicate a noise-independent a priori parameter choice when the function $y$ possesses the form
$$\sum_{k=0}^{p-1} a_k t^k + \sum_{k=1}^{N_1} b_k \cos kt + \sum_{k=1}^{N_2} c_k \sin kt, \qquad b_{N_1},\, c_{N_2} \neq 0.$$
In particular, in numerical differentiation of such functions, a good filtering effect (error approaching 0) is displayed with the corresponding parameter choice. In addition, several numerical examples are given to show that even derivatives with discontinuities can be recovered well.
1. Introduction
Numerical differentiation is a classical ill-posed problem which arises in different practical fields, such as option pricing, thermodynamics and photoelectric response (see e.g. [4,11,13-16,25]). In the process of numerical differentiation of a given function $y(x)$ of specific smoothness, a noise $\delta y$ always interferes through measurement or calculation. For this reason, it is routine to perform numerical differentiation on the noisy function $y^\delta := y + \delta y$, where the high-frequency part of $\delta y$ brings uncontrolled error when computing with traditional numerical algorithms. In order to overcome the difficulty of ill-posedness, several kinds of regularization methods were introduced.

Tikhonov regularization (see [8,11,12,17,26-28]) is a classical method for numerical differentiation of first order. It is generally modeled as follows: for $x \in [0,1]$ with a grid $\Delta = \{0 = x_0 < x_1 < \cdots < x_n = 1\}$ and mesh size $h = \max_i (x_{i+1} - x_i)$, we are given finite noisy samples $\tilde y_i$ of $y(x_i)$ such that $|\tilde y_i - y(x_i)| \le \delta$, and assume the endpoint data are exact, $\tilde y_0 = y(0)$, $\tilde y_n = y(1)$. Minimizing the cost functional
$$\Psi[f] := \frac{1}{n-1} \sum_{i=1}^{n-1} (\tilde y_i - f(x_i))^2 + \alpha \|f''\|^2$$
over $\{f \in H^2(0,1) : f(0) = y(0),\ f(1) = y(1)\}$ gives a minimizer $f_\alpha$. Differentiating this minimizer gives $f'_\alpha$ as a regularized solution to the exact derivative $y'$ under an appropriate parameter choice $\alpha = \alpha(\delta)$. Further results show that $f'_{\alpha(\delta)}$ converges to $y'$ with best rate $O(\sqrt{\delta})$ for the parameter choice $\alpha = \delta^2$ (see [28]). However, we note that the penalty term $\|f''\|^2$ in the cost functional demands that every candidate solution $f'_\alpha$ be at least $H^1$-smooth, and further results in [27,28] show that, for $y \in C^1[0,1] \setminus H^2(0,1)$ with $\alpha = \delta^2$, the upper bound for $\|f'_\delta - y'\|$ must tend to infinity as $\delta, h \to 0$. Thus this algorithm inherently cannot recover derivatives with regularity less than $H^1$, in particular discontinuous derivatives.

The difference method [4,23] is another classical regularization method for numerical differentiation (including higher orders). It constructs a difference quotient of order $p$ as a regularized solution to the exact derivative $y^{(p)}$, with the stepsize $h$ serving as the regularization parameter. The convergence of this scheme is established in the $L^\infty$ setting; it essentially demands Hölder regularity of the derivative, and convergence rates for first and second order numerical differentiation are derived with $h$ chosen as an appropriate power of $\delta$. But we must note an essential flaw of this algorithm: the numerical derivatives it constructs lose smoothness and are all piecewise constant, whether the original function is smooth or not.

In this paper, we first formulate the $p$-th order derivative $y^{(p)}$ as the unique solution of a Fredholm integral equation of the first kind,
$$A^{(p)} \varphi := \frac{1}{(p-1)!} \int_0^x (x-t)^{p-1} \varphi(t)\, dt = y(x), \quad x \in (0, 2\pi), \qquad (1.1)$$
where $A^{(p)} : L^2(0, 2\pi) \to L^2(0, 2\pi)$. For a simple design of the computational scheme, we apply the Galerkin method with trigonometric basis to the above equation to construct regularized solutions to $y^{(p)}$ (refer to [9,19-22] for similar techniques). The basic setting is as follows. Assume that $y^\delta, y, \delta y \in L^2(0, 2\pi)$ and $\|\delta y\|_{L^2} \le \delta$, where $\delta$ is the noise level. Given a sequence $\{P_n\}$ of orthogonal projections from $L^2(0, 2\pi)$ onto the subspaces
$$X_n := \operatorname{span}\Big\{\tfrac{1}{\sqrt{2\pi}}, \tfrac{\cos t}{\sqrt{\pi}}, \tfrac{\sin t}{\sqrt{\pi}}, \cdots, \tfrac{\cos nt}{\sqrt{\pi}}, \tfrac{\sin nt}{\sqrt{\pi}}\Big\},$$
we discretize (1.1) into the finite-rank approximation system
$$A^{(p)}_n \varphi_n = y_n, \quad \varphi_n \in X_n, \quad y_n := P_n y \in X_n, \qquad (1.2)$$
where $A^{(p)}_n := P_n A^{(p)} P_n : X_n \to X_n$. Now solving (1.2) in the sense of the Moore-Penrose inverse gives $A^{(p)\dagger}_n P_n y$.
This is a natural approximation scheme for $y^{(p)}$, where $\dagger$ denotes the Moore-Penrose inverse of a linear operator. Taking the noise $\delta y$ into account, the above scheme should be adjusted to a regularized scheme
$$A^{(p)\dagger}_n P_n y^\delta \quad \text{with } n^{(p)} = n^{(p)}(\delta), \qquad (1.3)$$
where the regularization parameter choice $n^{(p)} := n^{(p)}(\delta)$ is such that
$$n^{(p)}(\delta) \to +\infty \quad \text{and} \quad \|A^{(p)\dagger}_{n^{(p)}(\delta)} P_{n^{(p)}(\delta)} y^\delta - y^{(p)}\|_{L^2} \to 0 \quad \text{as } \delta \to 0^+.$$
Here $n^{(p)} := n^{(p)}(\delta)$ ($p = 1, 2, 3$) stands for the parameter choice strategy of the first three orders of numerical differentiation respectively. Throughout this paper, without special indication, we follow this notation and $p \in \{1, 2, 3\}$.

• For $y \in H^p_0(0, 2\pi)$ (this restriction on initial value data is removable), where
$$H^p_0(0, 2\pi) := \{y \in H^p(0, 2\pi) : y(0) = \cdots = y^{(p-1)}(0) = 0\},$$
an a priori error estimate is obtained uniformly for the first three orders of numerical differentiation:
$$\|A^{(p)\dagger}_n P_n y^\delta - y^{(p)}\|_{L^2(0, 2\pi)} \le C^{(p)} n^p \delta + (\gamma^{(p)} + 1) \|(I - P_n) y^{(p)}\|_{L^2}.$$
This determines the parameter choice strategy $n^{(p)}(\delta) = \kappa \delta^{a - \frac{1}{p}}$, where $a \in (0, \frac{1}{p})$ is optional and $\kappa$ is a constant depending on the concrete form of $\|(I - P_n) y^{(p)}\|_{L^2}$. This establishes a convergence result for numerical differentiation of the first three orders whenever $y^{(p)} \in L^2$, in particular for derivatives with discontinuities. However, we must point out that, when the target $y^{(p)} \in L^2$ is merely continuous or discontinuous, with no periodic smoothness, the constant $\kappa$ is unknown and needs to be tested in experiments (see section 8.3). We also note that, whether the derivative is smooth or not, its approximation by the above algorithm is real analytic, since it is a trigonometric polynomial.

• Supplemented with the a priori information $y^{(p)} \in H^l_{per}(0, 2\pi)$, $l > 0$,
$$\|A^{(p)\dagger}_n P_n y^\delta - y^{(p)}\|_{L^2(0, 2\pi)} \le C^{(p)} n^p \delta + (\gamma^{(p)} + 1) \frac{1}{n^l} \|y^{(p)}\|_{H^l_{per}},$$
where $C^{(p)}, \gamma^{(p)}$ are absolute constants given in subsequent sections. The optimal convergence rate $O(\delta^{\frac{2\mu}{2\mu+1}})$ can be derived under the periodic Sobolev source condition
$$y^{(p)} \in H^{2\mu p}_{per}(0, 2\pi), \quad \mu = \tfrac{1}{2} \text{ or } 1,$$
with the parameter choice $n^{(p)} = \lambda^{(p)} \delta^{-\frac{1}{(2\mu+1)p}}$, where $\lambda^{(p)}$ is a constant depending only on the exact derivative $y^{(p)}$ and is given explicitly in sections 6 and 7. In particular, when
$$y^{(p)} = a_0 + \sum_{k=1}^{N_1} b_k \cos kx + \sum_{k=1}^{N_2} c_k \sin kx \in H^\infty_{per}(0, 2\pi),$$
the optimal parameter choice degenerates to a constant $n = \max(N_1, N_2)$, which does not depend on the noise. Furthermore, the numerical study in section 8.1 demonstrates that a good filtering effect (error approaching 0) occurs in this specific case.

• In the more general setting $y \in H^p(0, 2\pi)$ for $p$-th order numerical differentiation, Corollary 6.2 shows that, when $y \in H^p(0, 2\pi) \setminus H^p_0(0, 2\pi)$,
$$\|A^{(p)\dagger}_n P_n y\|_{L^2} \to \infty \quad (n \to \infty).$$
Then any parameter choice $n^{(p)}(\delta)$ with $n^{(p)}(\delta) \to \infty$ ($\delta \to 0^+$) may fail to be a proper regularization parameter, since we can no longer establish
$$\|A^{(p)\dagger}_{n^{(p)}(\delta)} P_{n^{(p)}(\delta)} y^\delta - y^{(p)}\|_{L^2} \to 0 \quad (\delta \to 0^+)$$
through the traditional estimate (see the second remark after Corollary 5.2). In order to restore the regularization effect of the algorithm, we introduce the Taylor polynomial truncation of order $p-1$,
$$y_0 = y(x) - \sum_{k=0}^{p-1} \frac{y^{(k)}(0)}{k!} x^k \in H^p_0(0, 2\pi),$$
to replace $y \in H^p(0, 2\pi)$. In this way the regularization effect is well recovered (see sections 6 and 7) with exact measurements of the initial value data. Furthermore, we take possible noise in the measurements of the initial value data into consideration, which effectively relaxes the requirement on their precision.

Outline of the paper: In section 2, we introduce some tools and basic lemmas. In section 3, we present the general framework and the main idea of how to utilize the noisy data $y^\delta$ to recover the derivative $y^{(p)}$. In section 4, we give the analytic solution formulas for the Galerkin approximation system, which determine the well-posedness result and an upper bound for the noise error. In section 5, we propose an estimate on the approximation error when the right-hand side $y$ belongs to $H^p_0(0, 2\pi)$, and give convergence and divergence results for $y \in H^p_0(0, 2\pi)$ and $y \in L^2(0, 2\pi) \setminus H^p_0(0, 2\pi)$ respectively. In sections 6 and 7, under a periodic Sobolev source condition of order $2\mu p$, we construct a priori error estimates and indicate the parameter choice strategy for the optimal convergence rate $O(\delta^{\frac{2\mu}{2\mu+1}})$ for $y \in H^p_0(0, 2\pi)$ and $y \in H^p(0, 2\pi) \setminus H^p_0(0, 2\pi)$ respectively. In section 8, we test numerical examples to show the characteristics and effects of the algorithm when derivatives are smooth and discontinuous respectively. In section 9, we conclude the main work of this paper.
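As a quick numerical sanity check of the formulation (1.1), the forward operator $A^{(p)}$ can be discretized and applied to a known derivative; the sketch below (grid size and test function are illustrative assumptions, not from the paper) verifies that integrating $\varphi = y'$ reproduces $y$ for $p = 1$:

```python
import numpy as np
from math import factorial

def forward_integral(phi, x, p=1):
    """Apply (A^(p) phi)(x) = 1/(p-1)! * integral_0^x (x-t)^(p-1) phi(t) dt
    with the composite trapezoidal rule on the grid x."""
    y = np.zeros_like(x)
    for i, xi in enumerate(x):
        kernel = (xi - x[:i + 1]) ** (p - 1) / factorial(p - 1)
        y[i] = np.trapz(kernel * phi[:i + 1], x[:i + 1])
    return y

x = np.linspace(0, 2 * np.pi, 801)
phi = np.cos(3 * x)                    # plays the role of y' = y^(1)
y = forward_integral(phi, x, p=1)      # should reproduce y(x) = sin(3x)/3
err = np.max(np.abs(y - np.sin(3 * x) / 3))
```

The smoothing nature of $A^{(p)}$ is what makes the inverse problem (recovering $\varphi$ from noisy $y$) ill-posed.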
2. Preliminary and Basic Lemmas
Let $X, Y$ be Hilbert spaces and let $A$ be a bounded linear operator from $X$ to $Y$; $D(A)$, $N(A)$ and $R(A)$ denote its domain, null space and range, respectively. For $A : X \to Y$ and $y \in R(A) \oplus R(A)^\perp$, the Moore-Penrose inverse $x^\dagger := A^\dagger y$ is defined as the element of smallest norm satisfying
$$\|A x^\dagger - y\| = \inf\{\|Ax - y\| : x \in X\}.$$
Thus
$$A^\dagger : D(A^\dagger) := R(A) \oplus R(A)^\perp \subseteq Y \to X$$
defines a closed linear operator from $Y$ to $X$. In the following, we indicate some useful properties of the Moore-Penrose inverse $A^\dagger$:
• If $A : X \to Y$ is one-to-one, then, for $y \in R(A)$, $A^\dagger y$ naturally degenerates to $A^{-1} y$.
• If $R(A)$ is closed, then $D(A^\dagger) = R(A) \oplus R(A)^\perp = Y$ and, by the closed graph theorem, $A^\dagger : Y \to X$ is bounded.
• If $R(A)$ is closed, then $A A^\dagger = P_{R(A)}$ and $A^\dagger A = P_{N(A)^\perp}$. If $R(A)$ is not necessarily closed, the former identity must be adjusted to
$$A A^\dagger y = P_{\overline{R(A)}}\, y, \quad \forall y \in R(A) \oplus R(A)^\perp. \qquad (2.1)$$
For more comprehensive information on Moore-Penrose inverses, see [2, Chapter 9] or [6,7].

Throughout this paper we only discuss Sobolev spaces over $\mathbb{R}$. Without further specification, we denote $H^p(0, 2\pi) := H^p_{\mathbb{R}}(0, 2\pi)$. Here we introduce the notations for the Sobolev spaces used in the sequel; for more information, see [1,5] and [3, Appendix 4]. For a positive integer $p$, the Sobolev space $H^p(0, 2\pi)$ is defined as
$$H^p(0, 2\pi) := \{y \in L^2(0, 2\pi) : D^1 y, \cdots, D^p y \in L^2(0, 2\pi)\}, \qquad (2.2)$$
where $D^k y$ denotes the weak derivative, that is, the function $\zeta \in L^2(0, 2\pi)$ satisfying
$$\int_0^{2\pi} \zeta \varphi\, dx = (-1)^k \int_0^{2\pi} y \varphi^{(k)}\, dx, \quad \varphi \in C_0^\infty(0, 2\pi).$$
Equivalently, it can be characterized in absolutely continuous form (refer to [3, page 14]) as
$$H^p(0, 2\pi) = U^p[0, 2\pi] := \Big\{y \in C^{p-1}[0, 2\pi] : \text{there exists } \Psi \in L^2(0, 2\pi) \text{ such that } y^{(p-1)}(x) = \alpha + \int_0^x \Psi(t)\, dt, \ \alpha \in \mathbb{R}\Big\}. \qquad (2.3)$$
When working with $y \in H^p(0, 2\pi)$ ($p \in \mathbb{N}$), we, by default, modify $y$ on a set of measure zero so that it belongs to the finer function space $U^p[0, 2\pi]$. Besides, for $p \in \mathbb{N}$, we define
$$H^p_z(0, 2\pi) := \{y \in H^p(0, 2\pi) : y(z) = \cdots = y^{(p-1)}(z) = 0\}, \quad z = 0 \text{ or } 2\pi,$$
and
$$\dot H^{2p}(0, 2\pi) := \{y \in H^{2p}(0, 2\pi) : y(2\pi) = \cdots = y^{(p-1)}(2\pi) = y^{(p)}(0) = \cdots = y^{(2p-1)}(0) = 0\}.$$
For a real number $s > 0$, the periodic Sobolev space of fractional order $H^s_{per}(0, 2\pi)$ is defined in trigonometric form as
$$H^s_{per}(0, 2\pi) := \Big\{\varphi \in L^2(0, 2\pi) : \xi_0^2 + \sum_{k=1}^\infty (1 + k^2)^s (\xi_k^2 + \eta_k^2) < \infty\Big\},$$
where
$$\xi_0 = \frac{1}{\sqrt{2\pi}} \int_0^{2\pi} \varphi(t)\, dt, \quad \xi_k = \frac{1}{\sqrt{\pi}} \int_0^{2\pi} \varphi(t) \cos kt\, dt, \quad \eta_k = \frac{1}{\sqrt{\pi}} \int_0^{2\pi} \varphi(t) \sin kt\, dt.$$
Supplementing another element $\psi \in H^s_{per}(0, 2\pi)$, the inner product is
$$(\varphi, \psi)_{H^s_{per}} = \xi_0 \zeta_0 + \sum_{k=1}^\infty (1 + k^2)^s (\xi_k \zeta_k + \eta_k \lambda_k)$$
with
$$\zeta_0 = \frac{1}{\sqrt{2\pi}} \int_0^{2\pi} \psi(t)\, dt, \quad \zeta_k = \frac{1}{\sqrt{\pi}} \int_0^{2\pi} \psi(t) \cos kt\, dt, \quad \lambda_k = \frac{1}{\sqrt{\pi}} \int_0^{2\pi} \psi(t) \sin kt\, dt.$$
In addition, we define
$$H^\infty_{per}(0, 2\pi) := \bigcap_{s > 0} H^s_{per}(0, 2\pi).$$

2.3. Integro-differential operator of $p$ order

Define the integro-differential operator of integer order $p$ as
$$A^{(p)} : L^2(0, 2\pi) \to L^2(0, 2\pi), \qquad (A^{(p)} \varphi)(x) := \frac{1}{\Gamma(p)} \int_0^x (x - t)^{p-1} \varphi(t)\, dt, \quad x \in (0, 2\pi). \qquad (2.4)$$
This is a compact linear operator with infinite-dimensional range, which satisfies

Lemma 2.1 $H^p_0(0, 2\pi) = R(A^{(p)})$, $p = 1, 2, 3$.

Proof 1 "⊆": Assume $y \in H^p_0(0, 2\pi)$ ($p = 1, 2, 3$). There exists $y^\star \in U^p[0, 2\pi]$, a modification of $y$ on a set of measure zero, with vanishing initial value data, that is, $y^\star(0) = \cdots = y^{\star(p-1)}(0) = 0$. With integration by parts it is not difficult to verify that
$$\frac{1}{(p-1)!} \int_0^x (x - t)^{p-1} y^{\star(p)}(t)\, dt = y^\star(x) = y(x) \quad a.e.$$
Thus $H^p_0(0, 2\pi) \subseteq R(A^{(p)})$, $p = 1, 2, 3$.
"⊇": For simplicity, we only provide the proof for the case $p = 2$. Assuming $y \in R(A^{(2)})$, there exists $\varphi \in L^2(0, 2\pi)$ such that
$$\int_0^x (x - t) \varphi(t)\, dt = y(x) \quad a.e.$$
It is not difficult to verify that
$$D^1 y = \int_0^x \varphi(t)\, dt, \quad D^2 y = \varphi \quad a.e.$$
With definition (2.2), it follows that $y \in H^2(0, 2\pi)$. Then, by the absolutely continuous characterization (2.3) of Sobolev functions of one variable, there exists $y^\star \in U^2[0, 2\pi]$, a modification of $y$ on a set of measure zero. Thus we have
$$y^\star = \int_0^x (x - t) \varphi(t)\, dt, \quad y^{\star\prime} = D^1 y^\star = D^1 y = \int_0^x \varphi(t)\, dt \quad a.e.$$
Notice that $y^\star$, $y^{\star\prime}$, $\int_0^x (x-t)\varphi(t)\, dt$ and $\int_0^x \varphi(t)\, dt$ are all continuous functions; thus
$$y^\star = \int_0^x (x - t)\varphi(t)\, dt, \quad y^{\star\prime} = \int_0^x \varphi(t)\, dt \quad (strictly),$$
and "⊇" holds for the case $p = 2$.

With the above equality, we describe the density of the range in $L^2(0, 2\pi)$.

Lemma 2.2 $\overline{R(A^{(p)})} = L^2(0, 2\pi)$ and $P_{\overline{R(A^{(p)})}} = I$ ($p = 1, 2, 3$), where $I$ is the identity operator on $L^2(0, 2\pi)$.

Proof 2 With Lemma 2.1, $H^p_0(0, 2\pi) = R(A^{(p)})$. Recall the fact that $C_0^\infty(0, 2\pi)$ is dense in $L^2(0, 2\pi)$ and notice that $C_0^\infty(0, 2\pi) \subseteq H^p_0(0, 2\pi)$; then
$$L^2(0, 2\pi) = \overline{C_0^\infty(0, 2\pi)} \subseteq \overline{H^p_0(0, 2\pi)} \subseteq L^2(0, 2\pi).$$
The result follows.
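The Moore-Penrose inverse used throughout can be illustrated on a small finite-dimensional example (the matrix below is an arbitrary rank-deficient choice for illustration, not one of the paper's operators):

```python
import numpy as np

# A has rank 1: R(A) = span{(1, 2)}, N(A) = span{(2, -1)}.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])
y = np.array([1.0, 0.0])           # y lies outside R(A)
x_dag = np.linalg.pinv(A) @ y      # minimal-norm least-squares solution A^+ y

# A A^+ acts as the orthogonal projection onto R(A): A x_dag = P_{R(A)} y.
assert np.allclose(A @ x_dag, [0.2, 0.4])
# x_dag is orthogonal to N(A), illustrating A^+ A = P_{N(A)^perp}.
assert abs(x_dag @ np.array([2.0, -1.0])) < 1e-12
```

Among all least-squares minimizers $x_{dag} + t\,(2,-1)$, the pseudoinverse picks the one of smallest norm, exactly the defining property recalled above.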
This implies that $R(A^{(p)})^\perp = \{0\}$ and
$$D(A^{(p)\dagger}) = R(A^{(p)}) \oplus R(A^{(p)})^\perp = R(A^{(p)}) = H^p_0(0, 2\pi).$$
Now differentiating both sides of equation (1.1) $p$ times for $y \in H^p_0(0, 2\pi)$ yields $A^{(p)\dagger} y = A^{(p)-1} y = y^{(p)}$. This gives

Lemma 2.3 $A^{(p)\dagger} y = y^{(p)}$, $\forall y \in H^p_0(0, 2\pi)$.

Let $X$ be a Hilbert space and consider the linear operator equation
$$A\varphi = y,$$
where $A : X \to X$ is bounded and linear. To approximate $\varphi^\dagger := A^\dagger y \in X$, we introduce a sequence of finite-dimensional subspaces $\{X_n\}$ satisfying
$$X_n \subseteq X_{n+1}, \qquad \overline{\bigcup_{n=1}^\infty X_n} = X,$$
and construct a sequence of orthogonal projections $\{P_n\}$, where $P_n$ projects $X$ onto $X_n$. This gives the Galerkin approximation setting
$$A_n \varphi_n = y_n, \quad y_n := P_n y \in X_n, \qquad (2.5)$$
where $A_n := P_n A P_n : X_n \to X_n$. Hence solving (2.5) in the sense of the Moore-Penrose inverse gives the Galerkin projection scheme
$$\varphi^\dagger_n := A^\dagger_n y_n \in X_n, \qquad (2.6)$$
where $A^\dagger_n : R(A_n) \oplus R(A_n)^{\perp_n} = X_n \to X_n$, and $\perp_n$ denotes the orthogonal complement in the finite-dimensional Hilbert space $X_n$. Now $\{\varphi^\dagger_n\}$ is a natural approximation scheme for $\varphi^\dagger$. To study its convergence, we introduce the Groetsch regularizer for the setting (2.5),
$$R_n := A^\dagger_n P_n A : X \to X_n \subseteq X,$$
say that Groetsch regularity holds if $\sup_n \|R_n\| < +\infty$, and introduce the following result:

Lemma 2.4
For the above Galerkin approximation setting (2.5), if Groetsch regularity holds, then
(a) for $y \in D(A^\dagger) = R(A) \oplus R(A)^\perp$,
$$\|A^\dagger_n P_n P_{\overline{R(A)}}\, y - A^\dagger y\| \le \|P_{N(A_n)} A^\dagger y\| + \|R_n - I_X\|\, \|(I - P_n) A^\dagger y\|; \qquad (2.7)$$
(b) for $y \notin D(A^\dagger)$,
$$\lim_{n \to \infty} \|A^\dagger_n P_n P_{\overline{R(A)}}\, y\| = \infty.$$

Proof 3 See [18, Theorem 2.2].

2.5. Higher order estimate under trigonometric basis
To further estimate the term $\|(I - P_n) A^\dagger y\|$ under the trigonometric basis in $L^2$, similarly to the result [3, Lemma A.43], we introduce another error estimate:

Lemma 2.5
Let $P_n : L^2(0, 2\pi) \to X_n \subset L^2(0, 2\pi)$ be the orthogonal projection operator, where
$$X_n := \Big\{\frac{\xi_0}{\sqrt{2\pi}} + \sum_{k=1}^n \xi_k \frac{\cos kt}{\sqrt{\pi}} + \sum_{k=1}^n \eta_k \frac{\sin kt}{\sqrt{\pi}} : \xi_0, \xi_k, \eta_k \in \mathbb{R}\Big\}.$$
Then $P_n$ is given by
$$(P_n x)(t) = \frac{\xi_0}{\sqrt{2\pi}} + \sum_{k=1}^n \xi_k \frac{\cos kt}{\sqrt{\pi}} + \sum_{k=1}^n \eta_k \frac{\sin kt}{\sqrt{\pi}},$$
where
$$\xi_0 = \frac{1}{\sqrt{2\pi}} \int_0^{2\pi} x(t)\, dt, \quad \xi_k = \int_0^{2\pi} x(t) \frac{\cos kt}{\sqrt{\pi}}\, dt, \quad \eta_k = \int_0^{2\pi} x(t) \frac{\sin kt}{\sqrt{\pi}}\, dt, \quad 1 \le k \le n,$$
are the Fourier coefficients of $x$. Furthermore, the following estimate holds:
$$\|x - P_n x\|_{L^2} \le \frac{1}{n^r} \|x\|_{H^r_{per}} \quad \text{for all } x \in H^r_{per}(0, 2\pi), \text{ where } r \ge 0.$$
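The projection of Lemma 2.5 can be checked numerically; the sketch below (grid resolution and test function are illustrative assumptions) computes $P_n x$ with trapezoidal quadrature and confirms that a trigonometric polynomial of degree at most $n$ is reproduced exactly:

```python
import numpy as np

def project(f_vals, t, n):
    """Orthogonal projection P_n in L^2(0, 2*pi) onto
    span{1/sqrt(2*pi), cos(kt)/sqrt(pi), sin(kt)/sqrt(pi), k = 1..n}."""
    proj = np.full_like(t, np.trapz(f_vals, t) / (2 * np.pi))
    for k in range(1, n + 1):
        ak = np.trapz(f_vals * np.cos(k * t), t) / np.pi
        bk = np.trapz(f_vals * np.sin(k * t), t) / np.pi
        proj += ak * np.cos(k * t) + bk * np.sin(k * t)
    return proj

t = np.linspace(0, 2 * np.pi, 4001)
f = np.sin(2 * t) + 0.3 * np.cos(7 * t)        # degree-7 trigonometric polynomial
err = np.sqrt(np.trapz((f - project(f, t, 7)) ** 2, t))   # ~ 0 once n >= 7
```

For $n < 7$ the residual equals the $L^2$ norm of the truncated modes, the quantity bounded by $n^{-r}\|x\|_{H^r_{per}}$ in the lemma.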
3. General Framework
We start from
Problem 3.1
Assume that we have $y \in H^p_0(0, 2\pi)$ and $y^\delta$ measured on $(0, 2\pi)$, belonging to $L^2(0, 2\pi)$, such that $\|y^\delta - y\|_{L^2} \le \delta$. How can we get a stable approximation to $y^{(p)}$?

From Lemma 2.3 we know that $y^{(p)}$ is the solution of the linear operator equation
$$A^{(p)} \varphi := \frac{1}{\Gamma(p)} \int_0^x (x - t)^{p-1} \varphi(t)\, dt = y(x), \quad x \in (0, 2\pi), \qquad (3.1)$$
when $y \in H^p_0(0, 2\pi) := \{y \in H^p(0, 2\pi) : y(0) = \cdots = y^{(p-1)}(0) = 0\}$. In the following, we approximate $y^{(p)}$ by the Galerkin method. Set
$$A^{(p)} : X = L^2(0, 2\pi) \to L^2(0, 2\pi).$$
Choose the sequence of orthogonal projection operators $\{P_n\}$, where $P_n$ projects $L^2(0, 2\pi)$ onto
$$X_n := \operatorname{span}\Big\{\tfrac{1}{\sqrt{2\pi}}, \tfrac{\cos t}{\sqrt{\pi}}, \tfrac{\sin t}{\sqrt{\pi}}, \cdots, \tfrac{\cos nt}{\sqrt{\pi}}, \tfrac{\sin nt}{\sqrt{\pi}}\Big\}.$$
Then the original operator equation with noisy data, $A^{(p)} \varphi = y^\delta$, degenerates to
$$A^{(p)}_n \varphi_n = y^\delta_n, \qquad (3.2)$$
where
$$A^{(p)}_n := P_n A^{(p)} P_n : X_n \to X_n, \qquad y^\delta_n := P_n y^\delta. \qquad (3.3)$$
Expanding $A^{(p)}_n$ in the above basis transforms the finite-rank system (3.2) into the linear system
$$M^{(p)}_n u_n = b^\delta_n, \quad u_n, b^\delta_n \in \mathbb{R}^{2n+1}. \qquad (3.4)$$
Notice that $M^{(p)}_n$ and $b^\delta_n$ are defined as follows: $M^{(p)}_n := (m^{(p)}_{ij})_{(2n+1) \times (2n+1)}$, where
$$m^{(p)}_{ij} := (A^{(p)}_n \xi_j, \xi_i)_{L^2}, \quad i, j \in \{0, 1, 2, \cdots, 2n-1, 2n\},$$
$$\xi_0 := \frac{1}{\sqrt{2\pi}}, \quad \xi_{2k-1} := \frac{\cos kx}{\sqrt{\pi}}, \quad \xi_{2k} := \frac{\sin kx}{\sqrt{\pi}}, \quad k \in \{1, 2, \cdots, n\}.$$
Indeed,
$$A^{(p)}_n \Big(\tfrac{1}{\sqrt{2\pi}}, \tfrac{\cos t}{\sqrt{\pi}}, \tfrac{\sin t}{\sqrt{\pi}}, \cdots, \tfrac{\cos nt}{\sqrt{\pi}}, \tfrac{\sin nt}{\sqrt{\pi}}\Big) = \Big(\tfrac{1}{\sqrt{2\pi}}, \tfrac{\cos t}{\sqrt{\pi}}, \tfrac{\sin t}{\sqrt{\pi}}, \cdots, \tfrac{\cos nt}{\sqrt{\pi}}, \tfrac{\sin nt}{\sqrt{\pi}}\Big)\, M^{(p)}_n. \qquad (3.5)$$
And $b^\delta_n := (f_0, f_1, g_1, \cdots, f_n, g_n)^T$ is defined by
$$f_0 := \int_0^{2\pi} y^\delta(t) \frac{1}{\sqrt{2\pi}}\, dt, \quad f_k := \int_0^{2\pi} y^\delta(t) \frac{\cos kt}{\sqrt{\pi}}\, dt, \quad g_k := \int_0^{2\pi} y^\delta(t) \frac{\sin kt}{\sqrt{\pi}}\, dt, \quad k \in \{1, 2, \cdots, n\}.$$
Indeed, $y^\delta_n = \big(\tfrac{1}{\sqrt{2\pi}}, \tfrac{\cos t}{\sqrt{\pi}}, \tfrac{\sin t}{\sqrt{\pi}}, \cdots, \tfrac{\cos nt}{\sqrt{\pi}}, \tfrac{\sin nt}{\sqrt{\pi}}\big)\, b^\delta_n$. Once we compute $u^{p,\delta}_n = M^{(p)\dagger}_n b^\delta_n$, we obtain the solution of (3.2),
$$\varphi^{p,\delta}_n = \Big(\tfrac{1}{\sqrt{2\pi}}, \tfrac{\cos t}{\sqrt{\pi}}, \tfrac{\sin t}{\sqrt{\pi}}, \cdots, \tfrac{\cos nt}{\sqrt{\pi}}, \tfrac{\sin nt}{\sqrt{\pi}}\Big)\, u^{p,\delta}_n,$$
in the sense of the Moore-Penrose inverse, $\varphi^{p,\delta}_n = A^{(p)\dagger}_n y^\delta_n$. This is the regularized scheme. In the following, we need to determine a regularization parameter $n^{(p)} = n^{(p)}(\delta)$ such that
$$\varphi^{p,\delta}_{n^{(p)}(\delta)} := A^{(p)\dagger}_{n^{(p)}(\delta)}\, y^\delta_{n^{(p)}(\delta)} \longrightarrow y^{(p)}, \quad \delta \to 0^+.$$
Now, in order to control the accuracy of the computation, we adjust the parameter choice strategy $n^{(p)} = n^{(p)}(\delta)$ according to the total error estimate
$$\|\varphi^{p,\delta}_n - y^{(p)}\|_{L^2} := \|A^{(p)\dagger}_n P_n y^\delta - y^{(p)}\|_{L^2}. \qquad (3.6)$$
Since Lemma 2.3 illustrates that
$$y^{(p)} = A^{(p)\dagger} y, \quad y \in H^p_0(0, 2\pi), \qquad (3.7)$$
inserting (3.7) into (3.6), the formula (3.6) becomes
$$\|A^{(p)\dagger}_n P_n y^\delta - A^{(p)\dagger} y\|_{L^2}, \quad y \in H^p_0(0, 2\pi).$$
Throughout this paper we use the following definitions:
• Total error: $e^{(p)}_T := \|A^{(p)\dagger}_n P_n y^\delta - A^{(p)\dagger} y\|_{L^2}$, which is broken into two parts (cf. [10, Chapter 1.1]):
• Noise error: $e^{(p)}_N := \|A^{(p)\dagger}_n P_n y^\delta - A^{(p)\dagger}_n P_n y\|_{L^2}$;
• Approximation error: $e^{(p)}_A := \|A^{(p)\dagger}_n P_n y - A^{(p)\dagger} y\|_{L^2}$.
It is an easy observation that $e^{(p)}_T \le e^{(p)}_N + e^{(p)}_A$. Based on this fact, we figure out the total error estimate by estimating $e^{(p)}_N$ and $e^{(p)}_A$ respectively.
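The assembly and solution of the linear system (3.4) can be sketched for $p = 1$ as follows. This is a quadrature-based sketch with an illustrative test function; the paper instead uses the closed-form matrix entries of Appendix A:

```python
import numpy as np

def basis(t, n):
    """Rows: 1/sqrt(2*pi), cos(kt)/sqrt(pi), sin(kt)/sqrt(pi) (k = 1..n) on grid t."""
    rows = [np.full_like(t, 1 / np.sqrt(2 * np.pi))]
    for k in range(1, n + 1):
        rows += [np.cos(k * t) / np.sqrt(np.pi), np.sin(k * t) / np.sqrt(np.pi)]
    return np.array(rows)

def galerkin_derivative(y_vals, t, n):
    """First-order scheme (p = 1): build M_n with entries (A xi_j, xi_i)_{L^2},
    b_n with entries (y, xi_i)_{L^2}, and return the trig expansion of M_n^{-1} b_n."""
    B = basis(t, n)
    h = t[1] - t[0]
    # (A xi_j)(x) = integral_0^x xi_j(t) dt, via cumulative trapezoid on the grid.
    A_B = np.concatenate([np.zeros((B.shape[0], 1)),
                          np.cumsum((B[:, 1:] + B[:, :-1]) / 2 * h, axis=1)], axis=1)
    M = np.trapz(A_B[None, :, :] * B[:, None, :], t, axis=2)   # M[i, j] = (A xi_j, xi_i)
    b = np.trapz(y_vals * B, t, axis=1)
    u = np.linalg.solve(M, b)          # well-posed: M_n is invertible (Theorem 4.1)
    return u @ B

t = np.linspace(0, 2 * np.pi, 4001)
y = np.sin(3 * t) / 3                  # y(0) = 0, exact derivative y' = cos(3t)
phi = galerkin_derivative(y, t, n=5)
err = np.max(np.abs(phi - np.cos(3 * t)))
```

Since the exact derivative already lies in $X_5$, the scheme recovers it up to quadrature error, in line with the approximation-error analysis of section 5.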
4. Well-posedness and numerical scheme of Galerkin System
With the concrete expressions of $M^{(p)}_n$ in Appendix A, it is not difficult to obtain:

Theorem 4.1
The finite-dimensional system (3.4) is well-posed; that is, there exists a unique solution to (3.4), denoted
$$u^{p,\delta}_n = M^{(p)-1}_n b^\delta_n,$$
where $b^\delta_n = (f_0, f_1, g_1, \cdots, f_n, g_n)^T$ and $u^{p,\delta}_n = (\xi^{(p)}_0, \xi^{(p)}_1, \eta^{(p)}_1, \cdots, \xi^{(p)}_n, \eta^{(p)}_n)^T$. Moreover, analytic formulas for the solution of the Galerkin approximation system (3.2) are determined as follows:
$$A^{(p)\dagger}_n P_n y = \frac{\xi^{(p)}_0}{\sqrt{2\pi}} + \sum_{k=1}^n \xi^{(p)}_k \frac{\cos kt}{\sqrt{\pi}} + \sum_{k=1}^n \eta^{(p)}_k \frac{\sin kt}{\sqrt{\pi}}.$$
The corresponding three cases are listed as follows.
Case $p = 1$:
$$\xi^{(1)}_0 = \frac{1}{\pi}\Big(f_0 + \sqrt{2}\sum_{k=1}^n f_k\Big), \qquad (4.1)$$
$$\xi^{(1)}_k = \sqrt{2}\,\xi^{(1)}_0 + k g_k, \qquad (4.2)$$
$$\eta^{(1)}_k = -k f_k. \qquad (4.3)$$
Case $p = 2$:
$$\xi^{(2)}_0 = \frac{L_n^{-1}}{4\pi^2}\Big(f_0 + \sqrt{2}\sum_{k=1}^n f_k + \frac{\sqrt{2}\,\pi}{2n+1}\sum_{k=1}^n k g_k\Big), \qquad (4.4)$$
$$\xi^{(2)}_k = \sqrt{2}\,\xi^{(2)}_0 - k^2 f_k, \qquad (4.5)$$
$$\eta^{(2)}_k = \frac{2k}{2n+1}\sum_{k=1}^n k g_k - k^2 g_k - \frac{\sqrt{2}\,k\pi}{2n+1}\,\xi^{(2)}_0, \qquad (4.6)$$
where
$$L_n := \frac{1}{6} + \frac{1}{2\pi^2}\,S_n - \frac{1}{4}\cdot\frac{2n}{2n+1}, \qquad S_n := \sum_{k=1}^n \frac{1}{k^2}.$$
Case $p = 3$:
$$\xi^{(3)}_0 = \frac{T_n^{-1}}{4\pi^3}\Big(f_0 + \sqrt{2}\sum_{k=1}^n f_k + \frac{\sqrt{2}\,\pi}{2n+1}\sum_{k=1}^n k g_k - F_n \sum_{k=1}^n k^2 f_k\Big), \qquad (4.7)$$
$$\xi^{(3)}_k = -k^3 g_k + \frac{2k^2}{2n+1}\sum_{k=1}^n k g_k - \frac{2\pi k^2}{(2n+1)^2}\sum_{k=1}^n k^2 f_k + \varepsilon_{n,k}\,\xi^{(3)}_0, \qquad (4.8)$$
$$\eta^{(3)}_k = k^3 f_k - \frac{2k}{2n+1}\sum_{k=1}^n k^2 f_k - \frac{\sqrt{2}\,\pi k}{2n+1}\,\xi^{(3)}_0, \qquad (4.9)$$
where
$$T_n := \frac{1}{12} + \frac{1}{2n+1}\cdot\frac{1}{\pi^2}\,S_n - \frac{1}{3}\cdot\frac{2n}{2n+1} + \frac{n^2}{(2n+1)^2}, \qquad F_n := \frac{4\sqrt{2}\,\pi^2}{2n+1}\,L_n, \qquad K_n := -4\pi^2 L_n, \qquad \varepsilon_{n,k} = \sqrt{2}\Big(1 + \frac{k^2}{2n+1}\,K_n\Big).$$

Remark 4.1
After solving the Galerkin approximation system, we know that $A^{(p)}_n : X_n \to X_n$ is one-to-one and surjective. For use in subsequent sections, we record that $N(A^{(p)}_n) = \{0\}$ and $R(A^{(p)}_n) = X_n$.

Remark 4.2
With the above analytic formulas, it is not difficult to verify that
$$\|A^{(p)\dagger}_n\|_{X_n \to X_n} \le C^{(p)} n^p, \qquad (4.10)$$
where $C^{(1)} = \sqrt{2}$ and $C^{(2)}, C^{(3)}$ are explicit absolute constants obtained by direct computation. Here we specify that $\|\cdot\|_{X_n}$ is induced by $\|\cdot\|_{L^2}$, that is, $\|x_n\|_{X_n} := \|x_n\|_{L^2}$ for all $x_n \in X_n$. This gives a bound for the noise error.
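As a concrete check of the first-order formulas (4.1)-(4.3), the following sketch recovers $y'$ from a noise-free right-hand side. The coefficient formulas in the code are the reconstructions given above, and the test function is an illustrative assumption:

```python
import numpy as np

def fourier_data(y_vals, t, n):
    """f_0 = (y, 1/sqrt(2*pi)), f_k = (y, cos(kt)/sqrt(pi)), g_k = (y, sin(kt)/sqrt(pi))."""
    f0 = np.trapz(y_vals / np.sqrt(2 * np.pi), t)
    f = np.array([np.trapz(y_vals * np.cos(k * t) / np.sqrt(np.pi), t) for k in range(1, n + 1)])
    g = np.array([np.trapz(y_vals * np.sin(k * t) / np.sqrt(np.pi), t) for k in range(1, n + 1)])
    return f0, f, g

def derivative_p1(y_vals, t, n):
    """Analytic Galerkin solution for p = 1 (reconstructed (4.1)-(4.3)):
    xi_0 = (f_0 + sqrt(2)*sum f_k)/pi, xi_k = sqrt(2)*xi_0 + k*g_k, eta_k = -k*f_k."""
    f0, f, g = fourier_data(y_vals, t, n)
    k = np.arange(1, n + 1)
    xi0 = (f0 + np.sqrt(2) * f.sum()) / np.pi
    xi = np.sqrt(2) * xi0 + k * g
    eta = -k * f
    out = np.full_like(t, xi0 / np.sqrt(2 * np.pi))
    for i in range(n):
        out += xi[i] * np.cos((i + 1) * t) / np.sqrt(np.pi) \
             + eta[i] * np.sin((i + 1) * t) / np.sqrt(np.pi)
    return out

t = np.linspace(0, 2 * np.pi, 4001)
y = np.sin(3 * t) / 3                      # y(0) = 0, exact derivative cos(3t)
err = np.max(np.abs(derivative_p1(y, t, 5) - np.cos(3 * t)))
```

With exact data the derivative is recovered to quadrature precision; the role of noise is examined in the next section.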
5. Estimate on Approximation Error and Instability result
We use Lemma 2.4 to analyse the convergence and divergence of the Galerkin method. The key point is the estimate
$$\sup_n \|R^{(p)}_n\| < +\infty, \quad \text{where } R^{(p)}_n := A^{(p)\dagger}_n P_n A^{(p)}. \qquad (5.1)$$
To gain a uniform upper bound for the above quantity, we first prepare two decay estimates of $R^{(p)}_n\big(\frac{\cos jt}{\sqrt{\pi}}\big)$ and $R^{(p)}_n\big(\frac{\sin jt}{\sqrt{\pi}}\big)$ with respect to the integer variable $j$:

Lemma 5.1
For the operators $A^{(p)}, A^{(p)}_n$ defined in (2.4), (3.3) respectively, set
$$\big(A^{(p)\dagger}_n P_n A^{(p)}\big)\Big(\frac{\cos jt}{\sqrt{\pi}}\Big)(t) = \frac{\alpha^{(p)}_0}{\sqrt{2\pi}} + \sum_{k=1}^n \alpha^{(p)}_k \frac{\cos kt}{\sqrt{\pi}} + \sum_{k=1}^n \beta^{(p)}_k \frac{\sin kt}{\sqrt{\pi}}.$$
When $j \ge n+1$,
$$\alpha^{(p)}_0 = \alpha^{(p)}_0(n,j) = \Big(\big(A^{(p)\dagger}_n P_n A^{(p)}\big)\Big(\frac{\cos jt}{\sqrt{\pi}}\Big), \frac{1}{\sqrt{2\pi}}\Big)_{L^2}, \quad \alpha^{(p)}_k = \Big(\cdot\,, \frac{\cos kt}{\sqrt{\pi}}\Big)_{L^2}, \quad \beta^{(p)}_k = \Big(\cdot\,, \frac{\sin kt}{\sqrt{\pi}}\Big)_{L^2},$$
and
$$|\alpha^{(p)}_0| \le \frac{C^{(p)}_1}{j}, \quad |\alpha^{(p)}_k| \le \frac{C^{(p)}_2}{j}, \quad |\beta^{(p)}_k| \le \frac{C^{(p)}_3}{j}, \quad 1 \le k \le n, \qquad (5.2)$$
where $C^{(1)}_1 = C^{(1)}_2 = C^{(1)}_3 = 0$ and $C^{(p)}_1, C^{(p)}_2, C^{(p)}_3$ ($p = 2, 3$) are absolute constants computed from the formulas of Theorem 4.1. When $p = 3$, an additional lower bound on $n$ is needed to maintain the above estimate.

Proof 4
Case $p = 1$: When $j \ge n+1$, substituting (B.1) into (4.1), (4.2) and (4.3), it follows that
$$\big(A^{(1)\dagger}_n P_n A^{(1)}\big)\Big(\frac{\cos jt}{\sqrt{\pi}}\Big)(t) = A^{(1)\dagger}_n(0) = 0.$$
This gives the lemma for the case $p = 1$.
Case $p = 2$: Inserting (B.2) into (4.4), it follows that
$$\alpha^{(2)}_0 = \frac{\sqrt{2}\,L_n^{-1}}{4\pi^2 j^2},$$
and hence, by (C.1), $0 \le \alpha^{(2)}_0 \le C^{(2)}_1/j$. (5.3)
Besides, inserting (B.2) into (4.5), (4.6) respectively yields
$$\alpha^{(2)}_k = \sqrt{2}\,\alpha^{(2)}_0, \qquad \beta^{(2)}_k = -\frac{\sqrt{2}\,k\pi}{2n+1}\,\alpha^{(2)}_0,$$
so the bounds on $\alpha^{(2)}_k$ and $\beta^{(2)}_k$ follow from (5.3).
Case $p = 3$: Inserting (B.3) into (4.7), it follows that
$$\alpha^{(3)}_0 = \frac{T_n^{-1}}{4\pi^3}\cdot\frac{\sqrt{2}\,\pi}{(2n+1)\,j^2}.$$
By Proposition C.1 (C.4), $T_n^{-1}$ is of order $n(2n+1)$; hence, for $n$ above a small absolute bound,
$$|\alpha^{(3)}_0| \le \frac{C^{(3)}_1}{j}. \qquad (5.4)$$
Besides, inserting (B.3) into (4.8), it follows that
$$\alpha^{(3)}_k = \frac{1}{2n+1}\cdot\frac{2k^2}{j^2} + \sqrt{2}\Big(1 + \frac{k^2}{2n+1}\,K_n\Big)\alpha^{(3)}_0. \qquad (5.5)$$
By Proposition C.1 (C.2), $\frac{k^2}{2n+1} K_n$ is bounded uniformly in $1 \le k \le n$; hence, with (5.4), $|\alpha^{(3)}_k| \le C^{(3)}_2/j$. Further, inserting (B.3) into (4.9) gives
$$\beta^{(3)}_k = -\frac{\sqrt{2}\,\pi k}{2n+1}\,\alpha^{(3)}_0,$$
so $|\beta^{(3)}_k| \le C^{(3)}_3/j$ by (5.4).

Lemma 5.2
For the operators $A^{(p)}, A^{(p)}_n$ defined in (2.4), (3.3) respectively, set
$$\big(A^{(p)\dagger}_n P_n A^{(p)}\big)\Big(\frac{\sin jt}{\sqrt{\pi}}\Big)(t) = \frac{\theta^{(p)}_0}{\sqrt{2\pi}} + \sum_{k=1}^n \theta^{(p)}_k \frac{\cos kt}{\sqrt{\pi}} + \sum_{k=1}^n \omega^{(p)}_k \frac{\sin kt}{\sqrt{\pi}}.$$
When $j \ge n+1$,
$$\theta^{(p)}_0 = \theta^{(p)}_0(n,j) = \Big(\big(A^{(p)\dagger}_n P_n A^{(p)}\big)\Big(\frac{\sin jt}{\sqrt{\pi}}\Big), \frac{1}{\sqrt{2\pi}}\Big)_{L^2}, \quad \theta^{(p)}_k = \Big(\cdot\,, \frac{\cos kt}{\sqrt{\pi}}\Big)_{L^2}, \quad \omega^{(p)}_k = \Big(\cdot\,, \frac{\sin kt}{\sqrt{\pi}}\Big)_{L^2},$$
and
$$|\theta^{(p)}_0| \le \frac{C^{(p)}_4}{j}, \quad |\theta^{(p)}_k| \le \frac{C^{(p)}_5}{j}, \quad |\omega^{(p)}_k| \le \frac{C^{(p)}_6}{j}, \quad 1 \le k \le n, \qquad (5.6)$$
where $C^{(1)}_4 = \frac{\sqrt{2}}{\pi}$, $C^{(1)}_5 = \frac{2}{\pi}$, $C^{(1)}_6 = 0$, and the remaining $C^{(p)}_4, C^{(p)}_5, C^{(p)}_6$ ($p = 2, 3$) are absolute constants. Notice that when $p = 3$, an additional lower bound on $n$ is needed to maintain the above estimate.

Proof 5
Case $p = 1$: When $j \ge n+1$, inserting (B.4) into (4.1), (4.2), (4.3) gives
$$\big(A^{(1)\dagger}_n P_n A^{(1)}\big)\Big(\frac{\sin jt}{\sqrt{\pi}}\Big)(t) = A^{(1)\dagger}_n\Big(\frac{\sqrt{2}}{j}\cdot\frac{1}{\sqrt{2\pi}}\Big) = \frac{\sqrt{2}}{\pi j}\cdot\frac{1}{\sqrt{2\pi}} + \sum_{k=1}^n \frac{2}{\pi j}\cdot\frac{\cos kt}{\sqrt{\pi}}.$$
This gives the lemma for the case $p = 1$.
Case $p = 2$: Inserting (B.5) into (4.4), it follows that
$$\theta^{(2)}_0 = \frac{1}{2n+1}\cdot\frac{\sqrt{2}\,L_n^{-1}}{4\pi j}.$$
With Proposition C.1 (C.1), it follows that $|\theta^{(2)}_0| \le C^{(2)}_4/j$. (5.7)
Besides, inserting (B.5) into (4.5), (4.6) gives
$$\theta^{(2)}_k = \sqrt{2}\,\theta^{(2)}_0, \qquad \omega^{(2)}_k = \frac{1}{j}\cdot\frac{2k}{2n+1} - \frac{k}{2n+1}\cdot\sqrt{2}\,\pi\,\theta^{(2)}_0,$$
and the bounds on $\theta^{(2)}_k, \omega^{(2)}_k$ follow from (5.7).
Case $p = 3$: Inserting (B.6) into (4.7), it follows that
$$\theta^{(3)}_0 = \frac{T_n^{-1}}{4\pi^3}\cdot\frac{1}{j}\Big(F_n - \frac{\sqrt{2}}{j^2}\Big).$$
From Proposition C.1 (C.3), $|F_n - \sqrt{2}/j^2|$ is bounded by a constant multiple of $\frac{1}{n(2n+1)}$. In this way, with Proposition C.1 (C.4), for $n$ above a small absolute bound,
$$|\theta^{(3)}_0| \le \frac{C^{(3)}_4}{j}. \qquad (5.8)$$
Besides, inserting (B.6) into (4.8) gives
$$\theta^{(3)}_k = \frac{2\pi k^2}{(2n+1)^2\, j} + \sqrt{2}\Big(1 + \frac{k^2}{2n+1}\,K_n\Big)\theta^{(3)}_0,$$
hence $|\theta^{(3)}_k| \le C^{(3)}_5/j$ by (5.8). Further, inserting (B.6) into (4.9), it follows that
$$\omega^{(3)}_k = \frac{2k}{2n+1}\cdot\frac{1}{j} - \frac{k}{2n+1}\,\sqrt{2}\,\pi\,\theta^{(3)}_0,$$
hence $|\omega^{(3)}_k| \le C^{(3)}_6/j$ by (5.8).

Lemma 5.3
Let $A^{(p)}, A^{(p)}_n$ be defined in (2.4), (3.3) respectively. Then
$$\|K^{(p)}_n\|_{L^2 \to L^2} \le \kappa^{(p)}, \quad \forall n \in \mathbb{N},$$
where $K^{(p)}_n := A^{(p)\dagger}_n P_n A^{(p)} (I - P_n) : L^2(0, 2\pi) \to X_n \subseteq L^2(0, 2\pi)$.

Remark 5.1 By direct computation, explicit numerical values of $\kappa^{(1)}, \kappa^{(2)}, \kappa^{(3)}$ can be obtained from the constants in Lemmas 5.1 and 5.2.

Proof 6
Set $v = \frac{a_0}{\sqrt{2\pi}} + \sum_{k=1}^\infty a_k \frac{\cos kt}{\sqrt{\pi}} + \sum_{k=1}^\infty b_k \frac{\sin kt}{\sqrt{\pi}}$ with $\|v\|_{L^2} = 1$, that is, $a_0^2 + \sum_{k=1}^\infty a_k^2 + \sum_{k=1}^\infty b_k^2 = 1$. We consider the estimate on $\|K^{(p)}_n v\|_{L^2}$, $n \in \mathbb{N}$. Since $A^{(p)\dagger}_n P_n A^{(p)}$ ($n \in \mathbb{N}$) is continuous by Remark 4.2,
$$K^{(p)}_n v = A^{(p)\dagger}_n P_n A^{(p)} (I - P_n) v = \sum_{j=n+1}^\infty a_j\, A^{(p)\dagger}_n P_n A^{(p)}\Big(\frac{\cos jt}{\sqrt{\pi}}\Big) + \sum_{j=n+1}^\infty b_j\, A^{(p)\dagger}_n P_n A^{(p)}\Big(\frac{\sin jt}{\sqrt{\pi}}\Big).$$
Recalling Lemmas 5.1 and 5.2, it follows that
$$A^{(p)\dagger}_n P_n A^{(p)} (I - P_n) v = \frac{H_0}{\sqrt{2\pi}} + \sum_{k=1}^n H_k \frac{\cos kt}{\sqrt{\pi}} + \sum_{k=1}^n G_k \frac{\sin kt}{\sqrt{\pi}}, \qquad (5.9)$$
where
$$H_0 = \sum_{j=n+1}^\infty \big(a_j\, \alpha^{(p)}_0(n,j) + b_j\, \theta^{(p)}_0(n,j)\big), \quad H_k = \sum_{j=n+1}^\infty \big(a_j\, \alpha^{(p)}_k(n,j) + b_j\, \theta^{(p)}_k(n,j)\big), \quad G_k = \sum_{j=n+1}^\infty \big(a_j\, \beta^{(p)}_k(n,j) + b_j\, \omega^{(p)}_k(n,j)\big).$$
By (5.2), (5.6) and the Cauchy inequality, we have
$$H_0^2 \le \frac{C^{(p)2}_1 + C^{(p)2}_4}{n} \sum_{j=n+1}^\infty (a_j^2 + b_j^2), \qquad (5.10)$$
$$H_k^2 \le \frac{C^{(p)2}_2 + C^{(p)2}_5}{n} \sum_{j=n+1}^\infty (a_j^2 + b_j^2), \qquad (5.11)$$
$$G_k^2 \le \frac{C^{(p)2}_3 + C^{(p)2}_6}{n} \sum_{j=n+1}^\infty (a_j^2 + b_j^2). \qquad (5.12)$$
(5.10), (5.11), (5.12) together with (5.9) give that, for all $v$ such that $\|v\|_{L^2} = 1$,
$$\|K^{(p)}_n v\|^2_{L^2} \le \Big(\sum_{i=1}^6 C^{(p)2}_i\Big) \sum_{j=n+1}^\infty (a_j^2 + b_j^2) \le \Big(\sum_{i=1}^6 C^{(p)2}_i\Big)\, \|v\|^2_{L^2} \qquad \Big(\kappa^{(p)} := \sqrt{\sum_{i=1}^6 C^{(p)2}_i}\Big),$$
where the $C^{(p)}_i$ ($p = 1, 2, 3$, $i = 1, \cdots, 6$) are the constants defined in Lemmas 5.1 and 5.2.

Theorem 5.1
The Groetsch regularity holds for the Galerkin setting (3.2); that is,
$$\sup_n \|R^{(p)}_n\|_{L^2 \to L^2} \le \gamma^{(p)} < \infty \qquad (\gamma^{(p)} := 1 + \kappa^{(p)}),$$
where $R^{(p)}_n := A^{(p)\dagger}_n P_n A^{(p)} : L^2(0, 2\pi) \to X_n \subseteq L^2(0, 2\pi)$ and $A^{(p)}, A^{(p)}_n$ are defined in (2.4), (3.3) respectively.

Proof 7 Since
$$A^{(p)\dagger}_n P_n A^{(p)} = K^{(p)}_n + A^{(p)\dagger}_n P_n A^{(p)} P_n = K^{(p)}_n + A^{(p)\dagger}_n A^{(p)}_n = K^{(p)}_n + P_{N(A^{(p)}_n)^{\perp_n}} = K^{(p)}_n + P_n \quad (\text{since } N(A^{(p)}_n) = \{0\} \text{ by Remark 4.1}),$$
by Lemma 5.3 we have
$$\|A^{(p)\dagger}_n P_n A^{(p)}\|_{L^2 \to L^2} \le \gamma^{(p)}, \quad n \in \mathbb{N}, \quad \gamma^{(p)} := 1 + \kappa^{(p)}.$$
Having verified (5.1), we obtain an estimate on the approximation error.

Corollary 5.1
For $A^{(p)}_n$ defined as in (3.3), we have
$$\|A^{(p)\dagger}_n P_n y - y^{(p)}\|_{L^2} \le (\gamma^{(p)} + 1)\, \|(I - P_n) y^{(p)}\|_{L^2} \longrightarrow 0 \quad (n \to \infty)$$
for every $y \in H^p_0(0, 2\pi)$. Furthermore, with the a priori information $y^{(p)} \in H^l_{per}(0, 2\pi)$, it follows that
$$\|A^{(p)\dagger}_n P_n y - y^{(p)}\|_{L^2} \le (\gamma^{(p)} + 1)\, \frac{1}{n^l}\, \|y^{(p)}\|_{H^l_{per}},$$
where $\gamma^{(p)}$ is the constant given in Theorem 5.1.

Proof 8 By Lemmas 2.2 and 2.3, for $y \in H^p_0(0, 2\pi)$,
$$\|A^{(p)\dagger}_n P_n y - y^{(p)}\|_{L^2} = \|A^{(p)\dagger}_n P_n P_{\overline{R(A^{(p)})}}\, y - A^{(p)\dagger} y\|_{L^2}.$$
Using Lemma 2.4, Remark 4.1 and Theorem 5.1, it yields that
$$\|A^{(p)\dagger}_n P_n y - y^{(p)}\|_{L^2} \le (\gamma^{(p)} + 1)\, \|(I - P_n) y^{(p)}\|_{L^2}. \qquad (5.13)$$
Now, provided with the a priori information $y^{(p)} \in H^l_{per}(0, 2\pi)$, $l > 0$, Lemma 2.5 yields $\|(I - P_n) y^{(p)}\|_{L^2} \le n^{-l}\, \|y^{(p)}\|_{H^l_{per}}$, which improves (5.13) to the second estimate.

Corollary 5.2
For $A^{(p)}_n$ defined as in (3.3), we have
$$\|A^{(p)\dagger}_n P_n y\|_{L^2} \longrightarrow \infty \quad (n \to \infty)$$
for every $y \in L^2(0, 2\pi) \setminus R(A^{(p)})$, where $R(A^{(p)}) = H^p_0(0, 2\pi)$.

Proof 9 Lemma 2.2 tells us that $D(A^{(p)\dagger}) = R(A^{(p)})$. Since the estimate (5.1) holds, the result follows from Lemma 2.4(b).

Corollary 5.2 raises two issues:
• The first issue is that, for $p$-th order numerical differentiation with $y \in H^p_0(0, 2\pi)$, after the noise $\delta y$ interferes, $y^\delta$ generally lies in $L^2(0, 2\pi) \setminus H^p_0(0, 2\pi)$. Then, with a choice of index $n$ increasing independently of the noise level $\delta$,
$$\|A^{(p)\dagger}_n P_n y^\delta\|_{L^2} \longrightarrow \infty \quad (n \to \infty).$$
This fact shows that, without a proper parameter choice strategy for $n^{(p)} := n^{(p)}(\delta)$, the numerical scheme $A^{(p)\dagger}_n P_n y^\delta$ is inherently unstable.
• The second issue is a worse one. In the more general setting $y \in H^p(0, 2\pi)$ for $p$-th order numerical differentiation, if $y \in H^p(0, 2\pi) \setminus H^p_0(0, 2\pi)$, then, with any parameter choice strategy $n^{(p)}(\delta) \to +\infty$ ($\delta \to 0^+$), the approximation error
$$e^{(p)}_A = \|A^{(p)\dagger}_{n^{(p)}(\delta)} P_{n^{(p)}(\delta)} y - y^{(p)}\|_{L^2} \longrightarrow \infty \quad (\delta \to 0^+).$$
Together with the noise error estimate $e^{(p)}_N \le C^{(p)} n^{(p)}(\delta)^p\, \delta$ (by (4.10)), one sees the invalidity of a single regularization parameter choice, since adjusting $n = n^{(p)}(\delta)$ alone cannot give $e^{(p)}_T \to 0$ ($\delta \to 0^+$).
The following two sections answer these two issues respectively.
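The instability just described can be quantified for $p = 1$: by the (reconstructed) formula $\eta_k = -k f_k$ from Theorem 4.1, a data perturbation of size $\delta$ concentrated in the $n$-th cosine coefficient is amplified by exactly the factor $n$, matching the noise-error bound $C^{(p)} n^p \delta$ with $p = 1$:

```python
import numpy as np

delta = 1e-3
for n in (10, 100, 1000):
    df = np.zeros(n)
    df[-1] = delta                       # ||perturbation||_{L^2} = delta, all in mode n
    d_eta = -np.arange(1, n + 1) * df    # induced error via eta_k = -k * f_k
    # The output error grows linearly in n for fixed delta: unbounded as n -> infinity.
    assert np.isclose(np.linalg.norm(d_eta), n * delta)
```

This is why $n$ must be coupled to $\delta$ by a parameter choice strategy, as done in the next section.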
6. Total Error Estimate for $y \in H^p_0(0,\pi)$ and Parameter Choice for Regularization

To resolve the first issue, we introduce regularization through the following procedure. Combining Corollary 5.1 with Remark 4.2, we obtain
Theorem 6.1
Set $A_n^{(p)}$ as in (3.3) and $y \in H^p_0(0,\pi)$. Then
$$\|A_n^{(p)\dagger} P_n y^\delta - y^{(p)}\|_{L^2(0,\pi)} \le C^{(p)} n^p \delta + (\gamma^{(p)} + 1)\, \|(I - P_n)\, y^{(p)}\|_{L^2}. \tag{6.1}$$
Furthermore, with the a priori information $y^{(p)} \in H^l_{per}(0,\pi)$,
$$\|A_n^{(p)\dagger} P_n y^\delta - y^{(p)}\|_{L^2(0,\pi)} \le C^{(p)} n^p \delta + (\gamma^{(p)} + 1)\, \frac{1}{n^l}\, \|y^{(p)}\|_{H^l_{per}}. \tag{6.2}$$

Remark 6.1
In the case that $y \in H^p_0(0,\pi)$ is given but no a priori information on the exact solution $y^{(p)}$ is available, we determine the parameter choice strategy from (6.1) as
$$n_1^{(p)} := n_1^{(p)}(\delta) = \kappa\, \delta^{\,a - \frac{1}{p}}, \tag{6.3}$$
where $a \in (0, \tfrac{1}{p})$ is optional. However, we emphasize that in this case no convergence rate can be guaranteed beyond plain convergence. In the case that $y \in H^p_0(0,\pi)$ and $y^{(p)} \in H^l_{per}(0,\pi)$, we determine the parameter choice strategy from (6.2) as
$$n_2^{(p)} = n_2^{(p)}(\delta) = \Big( \frac{l\,(\gamma^{(p)} + 1)\, \|y^{(p)}\|_{H^l_{per}}}{p\, C^{(p)}} \Big)^{\frac{1}{l+p}}\, \delta^{-\frac{1}{l+p}}. \tag{6.4}$$
Hence it follows that
$$\|A_{n_2^{(p)}(\delta)}^{(p)\dagger} P_{n_2^{(p)}(\delta)}\, y^\delta - y^{(p)}\|_{L^2(0,\pi)} \le \Gamma_p\, \|y^{(p)}\|_{H^l_{per}}^{\frac{p}{l+p}}\, \delta^{\frac{l}{l+p}}, \tag{6.5}$$
where $\Gamma_p := \big( (\tfrac{l}{p})^{\frac{p}{l+p}} + (\tfrac{l}{p})^{-\frac{l}{l+p}} \big)\, (C^{(p)})^{\frac{l}{l+p}}\, (\gamma^{(p)} + 1)^{\frac{p}{l+p}}$.

Remark 6.2
Assume that
$$y^{(p)} \in H^{2\mu p}_{per}(0,\pi) \qquad (\mu = \tfrac{1}{2} \text{ or } 1). \tag{6.6}$$
Choosing $n_3^{(p)} = n_3^{(p)}(\delta) = \delta^{-\frac{1}{(2\mu+1)p}}$, we gain the optimal convergence rate from (6.5), that is,
$$\|A_{n_3^{(p)}(\delta)}^{(p)\dagger} P_{n_3^{(p)}(\delta)}\, y^\delta - y^{(p)}\| = O\big(\delta^{\frac{2\mu}{2\mu+1}}\big).$$
Here we specify that (6.6) is a slightly variant version of the standard source condition stated in [10], that is,
$$y^{(p)} \in R\big( (A^{(p)*} A^{(p)})^{\mu} \big) \qquad (\mu = \tfrac{1}{2} \text{ or } 1),$$
where $R\big( (A^{(p)*} A^{(p)})^{1/2} \big) = R(A^{(p)*}) = H^p_\pi(0,\pi)$ and $R(A^{(p)*} A^{(p)}) = \dot H^{2p}(0,\pi)$. Notice that
$$\mathrm{Codim}_{H^p}\, H^p_\pi(0,\pi) = \mathrm{Codim}_{H^p}\, H^p_{per}(0,\pi) = p$$
and
$$\mathrm{Codim}_{H^{2p}}\, \dot H^{2p}(0,\pi) = \mathrm{Codim}_{H^{2p}}\, H^{2p}_{per}(0,\pi) = 2p.$$

Remark 6.3
Assume that the noise level $\delta$ ranges in a closed interval $[\delta_0, \delta_1] \subseteq (0, +\infty)$. If $y \in H^p_0(0,\pi)$ and $y^{(p)} \in H^\infty_{per}(0,\pi)$ is such that $\lim_{l \to \infty} \|y^{(p)}\|_{H^l_{per}}^{\frac{1}{l+p}}$ exists, then the regularization parameter determined by (6.4) tends to
$$n^{(p)} = \lim_{l \to \infty} \|y^{(p)}\|_{H^l_{per}}^{\frac{1}{l+p}}, \tag{6.7}$$
which depends only on the exact derivative $y^{(p)}$, not on the noise level $\delta$. Furthermore, in the case that $y \in H^p_0(0,\pi)$ and
$$y^{(p)} = a_0 + \sum_{k=1}^{N_1} b_k \cos kx + \sum_{k=1}^{N_2} c_k \sin kx \in H^\infty_{per}(0,\pi), \tag{6.8}$$
where $b_{N_1}, c_{N_2} \neq 0$, the regularization parameter is determined by (6.7) as
$$n^{(p)} = \max(N_1, N_2), \tag{6.9}$$
which depends only on the highest frequency of the trigonometric polynomial in (6.8), not on the noise level $\delta$.
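The limit (6.7) picks out the highest active frequency: for a trigonometric polynomial, the $H^l_{per}$ norm grows like $N^l$, so its $\frac{1}{l+p}$-th root tends to $N = \max(N_1, N_2)$. A quick numerical check (Python; the discrete $H^l_{per}$ norm below is the standard Fourier-coefficient seminorm, an assumption on the paper's normalization):

```python
import math

# Numerical check of the limit (6.7) for a trigonometric polynomial:
# lim_{l -> inf} ||y^(p)||_{H^l_per}^{1/(l+p)} equals the highest active frequency.
# The H^l_per norm is taken as (sum_k k^(2l) |coef_k|^2)^(1/2), an assumed
# normalization (an equivalent seminorm for mean-free periodic functions).

coeffs = {3: 1.0, 6: 0.5}   # e.g. y^(p) = cos(3x) + 0.5*sin(6x): frequencies 3 and 6
p = 2                       # order of differentiation

def h_norm_root(l):
    # compute ||.||_{H^l}^{1/(l+p)} in the log domain to avoid overflow
    terms = [2 * l * math.log(k) + 2 * math.log(abs(c)) for k, c in coeffs.items()]
    m = max(terms)
    log_norm_sq = m + math.log(sum(math.exp(t - m) for t in terms))
    return math.exp(0.5 * log_norm_sq / (l + p))

vals = [h_norm_root(l) for l in (10, 50, 200)]
assert vals[0] < vals[1] < vals[2] < 6.0   # increases toward N = max(N1, N2) = 6
assert abs(vals[2] - 6.0) < 0.2
```

The convergence from below reflects the exponent $\frac{l}{l+p} \to 1$ together with the coefficient magnitudes washing out in the root.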
7. Extended Numerical Differentiation on $H^p(0,\pi)$

7.1. Exact initial value data at $x = 0$

Theorem 6.1 provides a result on stable numerical differentiation for $y \in H^p_0(0,\pi)$, where
$$H^p_0(0,\pi) := \{\, y \in H^p(0,\pi) :\ y(0) = \cdots = y^{(p-1)}(0) = 0 \,\}.$$
We now remove the restriction on the initial value data and extend the result to the case $y \in H^p(0,\pi)$. Observing that, for $y \in H^p(0,\pi)$,
$$y(x) - \sum_{k=0}^{p-1} \frac{y^{(k)}(0)}{k!}\, x^k \in H^p_0(0,\pi),$$
we naturally adjust the regularized scheme (1.3) to
$$A_n^{(p)\dagger} P_n \Big( y^\delta - \sum_{k=0}^{p-1} \frac{y^{(k)}(0)}{k!}\, x^k \Big).$$
Now, given exact measurements of the initial value data $y(0), y'(0), \ldots, y^{(p-1)}(0)$, we can adjust Theorem 6.1 into the following version.

Theorem 7.1
Set $A_n^{(p)}$ as in (3.3) and $y \in H^p(0,\pi)$. Then
$$\Big\| A_n^{(p)\dagger} P_n \Big( y^\delta - \sum_{k=0}^{p-1} \frac{y^{(k)}(0)}{k!}\, x^k \Big) - y^{(p)} \Big\|_{L^2} \le C^{(p)} n^p \delta + (\gamma^{(p)} + 1)\, \|(I - P_n)\, y^{(p)}\|_{L^2}.$$
Furthermore, with the a priori information $y^{(p)} \in H^l_{per}(0,\pi)$,
$$\Big\| A_n^{(p)\dagger} P_n \Big( y^\delta - \sum_{k=0}^{p-1} \frac{y^{(k)}(0)}{k!}\, x^k \Big) - y^{(p)} \Big\|_{L^2} \le C^{(p)} n^p \delta + (\gamma^{(p)} + 1)\, \frac{1}{n^l}\, \|y^{(p)}\|_{H^l_{per}}.$$

7.2. Noisy initial value data at $x = 0$

However, in practical cases one cannot obtain the initial value data $y(0), y'(0), \ldots, y^{(p-1)}(0)$ exactly. Instead, one can only obtain a cluster of noisy data, denoted by $\Lambda_0(0), \Lambda_1(0), \ldots, \Lambda_{p-1}(0)$ respectively. Provided with these endpoint measurements, we reformulate the problem of $p$-th order numerical differentiation as:

Problem 7.1
Assume that we have
- $y \in H^p(0,\pi)$ and $y^\delta$ measured on $(0,\pi)$, belonging to $L^2(0,\pi)$ and such that $\|y^\delta - y\|_{L^2} \le \delta$;
- noisy initial value data $\Lambda_0(0), \ldots, \Lambda_{p-1}(0)$ for $y(0), \ldots, y^{(p-1)}(0)$ respectively, satisfying
$$|\Lambda_k(0) - y^{(k)}(0)| \le \delta_i, \qquad k = 0, \ldots, p-1.$$
How can we gain a stable approximation to $y^{(p)}$?

An estimate similar to (6.2) is constructed to answer this question:
Theorem 7.2
Set $A_n^{(p)}$ as in (3.3), $y \in H^p(0,\pi)$ and $y^{(p)} \in H^l_{per}(0,\pi)$. Then
$$\Big\| A_n^{(p)\dagger} P_n \Big( y^\delta - \sum_{k=0}^{p-1} \frac{\Lambda_k(0)}{k!}\, x^k \Big) - y^{(p)} \Big\|_{L^2} \le C^{(p)} n^p \delta + \Delta_p C^{(p)} n^p \delta_i + (\gamma^{(p)} + 1)\, \frac{1}{n^l}\, \|y^{(p)}\|_{H^l_{per}}.$$
For convenience of notation, set $\delta_i = \delta$; then it follows that
$$\Big\| A_n^{(p)\dagger} P_n \Big( y^\delta - \sum_{k=0}^{p-1} \frac{\Lambda_k(0)}{k!}\, x^k \Big) - y^{(p)} \Big\|_{L^2} \le C_\Delta^{(p)} n^p \delta + (\gamma^{(p)} + 1)\, \frac{1}{n^l}\, \|y^{(p)}\|_{H^l_{per}},$$
where $C_\Delta^{(p)} := (\Delta_p + 1)\, C^{(p)}$ and $\Delta_p := \sum_{k=0}^{p-1} \frac{\|x^k\|_{L^2}}{k!}$.

Remark 7.1
In this case, it is necessary to specify that the parameter choice strategy should be adjusted from (6.4) to
$$n^{(p)} = n^{(p)}(\delta) = \Big( \frac{l\,(\gamma^{(p)} + 1)\, \|y^{(p)}\|_{H^l_{per}}}{p\, C_\Delta^{(p)}} \Big)^{\frac{1}{l+p}}\, \delta^{-\frac{1}{l+p}}. \tag{7.1}$$
Also, assume that the noise level $\delta$ ranges in an interval $(\delta_0, \delta_1)$, where $0 < \delta_0 \ll 1$ and $\delta_1 < +\infty$. When $y \in H^p(0,\pi)$ and $y^{(p)} \in H^\infty_{per}$ is such that $\lim_{l \to \infty} \|y^{(p)}\|_{H^l_{per}}^{\frac{1}{l+p}}$ exists, the regularization parameter determined by (7.1) tends to
$$n^{(p)} = \lim_{l \to \infty} \|y^{(p)}\|_{H^l_{per}}^{\frac{1}{l+p}}, \tag{7.2}$$
which depends neither on the noise level $\delta$ nor on the additional noise level $\delta_i$ of the initial value data. Besides, in the case that
$$y = \sum_{k=0}^{p-1} a_k x^k + \sum_{k=1}^{N_1} b_k \cos kx + \sum_{k=1}^{N_2} c_k \sin kx,$$
where $b_{N_1}, c_{N_2} \neq 0$, the optimal parameter choice is determined by (7.2) as
$$n^{(p)} = \max(N_1, N_2), \tag{7.3}$$
which is just the same as (6.9), still independent of the noise level $\delta$ and of the additional noise in the initial value data.

Remark 7.2
The optimal convergence rate $O\big(\delta^{\frac{2\mu}{2\mu+1}}\big)$ can be achieved in the same way as in Remark 6.2.

Proof 10
For $y \in H^p(0,\pi)$ and $y^\delta \in L^2(0,\pi)$ with $\|y^\delta - y\|_{L^2} \le \delta$,
$$\Big\| A_n^{(p)\dagger} P_n \Big( y^\delta - \sum_{k=0}^{p-1} \frac{\Lambda_k(0)}{k!}\, x^k \Big) - y^{(p)} \Big\|_{L^2} \le e'_T + e'_P,$$
where
$$e'_T := \Big\| A_n^{(p)\dagger} P_n \Big( y^\delta - \sum_{k=0}^{p-1} \frac{y^{(k)}(0)}{k!}\, x^k \Big) - \Big( y - \sum_{k=0}^{p-1} \frac{y^{(k)}(0)}{k!}\, x^k \Big)^{(p)} \Big\|_{L^2},$$
$$e'_P := \Big\| A_n^{(p)\dagger} P_n \Big( \sum_{k=0}^{p-1} \frac{y^{(k)}(0)}{k!}\, x^k - \sum_{k=0}^{p-1} \frac{\Lambda_k(0)}{k!}\, x^k \Big) \Big\|_{L^2}.$$
Applying Theorem 7.1, it follows that
$$e'_T \le C^{(p)} n^p \delta + (\gamma^{(p)} + 1)\, \frac{1}{n^l}\, \|y^{(p)}\|_{H^l_{per}}.$$
Besides,
$$e'_P \le \|A_n^{(p)\dagger}\|\, \|P_n\| \sum_{k=0}^{p-1} \frac{\delta_i}{k!}\, \|x^k\|_{L^2} \le \Delta_p C^{(p)} n^p \delta_i,$$
where $\Delta_p := \sum_{k=0}^{p-1} \frac{\|x^k\|_{L^2}}{k!}$. Then we have
$$\Big\| A_n^{(p)\dagger} P_n \Big( y^\delta - \sum_{k=0}^{p-1} \frac{\Lambda_k(0)}{k!}\, x^k \Big) - y^{(p)} \Big\|_{L^2} \le C^{(p)} n^p \delta + \Delta_p C^{(p)} n^p \delta_i + (\gamma^{(p)} + 1)\, \frac{1}{n^l}\, \|y^{(p)}\|_{H^l_{per}}. \qquad \square$$
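The polynomial error term above comes from a plain triangle inequality; the constant $\Delta_p$ can be checked numerically. A minimal sketch (Python; $p$ and $\delta_i$ are illustrative values, and the quadrature is a stand-in for the exact $L^2$ norm):

```python
import math

# Check the L^2 bound || sum_k (Lambda_k(0) - y^(k)(0)) x^k / k! || <= Delta_p * delta_i
# on (0, pi), with Delta_p = sum_{k<p} ||x^k||_{L^2} / k!  (as in Theorem 7.2).
p, delta_i = 3, 0.01

# ||x^k||_{L^2(0,pi)} = sqrt(pi^(2k+1) / (2k+1))
norm_xk = [math.sqrt(math.pi ** (2 * k + 1) / (2 * k + 1)) for k in range(p)]
Delta_p = sum(norm_xk[k] / math.factorial(k) for k in range(p))

# worst case: every initial datum is off by exactly delta_i
poly = lambda x: sum(delta_i * x**k / math.factorial(k) for k in range(p))

# L^2 norm of the perturbation polynomial by the midpoint rule
M = 20000
xs = (math.pi * (i + 0.5) / M for i in range(M))
l2 = math.sqrt((math.pi / M) * sum(poly(x) ** 2 for x in xs))
assert l2 <= Delta_p * delta_i + 1e-9
assert l2 > 0.5 * Delta_p * delta_i   # and the bound is not wildly loose here
```
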
8. Numerical Experiments
All experiments are performed on an Intel(R) Core(TM) i7-7500U CPU @ 2.70GHz 2.90GHz in Matlab R2017a. For all experiments, the regularized solution is given by
$$\varphi_n^{p,\delta,\delta_i} := A_n^{(p)\dagger} P_n \big( y^\delta - \Lambda^{(p)}(x) \big),$$
with regularization parameter choice $n = n^{(p)} = n^{(p)}(\delta, \delta_i)$ ($p = 1, 2, 3$), noise
$$\delta y = \delta\, \frac{\sin kx}{\sqrt{\pi}}, \qquad y^\delta(x) = y(x) + \delta y,$$
and
$$\Lambda^{(p)}(x) = \sum_{k=0}^{p-1} \frac{\Lambda_k(0)}{k!}\, x^k \quad \text{with} \quad \Lambda_k(0) = y^{(k)}(0) + \delta_i, \quad k \in \{0, 1, \ldots, p-1\}.$$
All experiments are divided into two cases:
- Case I: $\delta \neq 0$, $\delta_i = 0$, that is, high frequency noise $\delta y$ and exact initial value data.
- Case II: $\delta \neq 0$, $\delta_i \neq 0$, that is, high frequency noise $\delta y$ and noisy initial value data.
The following index is introduced to measure the computational accuracy in the tests:
- Relative error $r = \dfrac{\big\| \varphi_{n^{(p)}(\delta,\delta_i)}^{p,\delta,\delta_i} - y^{(p)} \big\|_{L^2}}{\big\| y^{(p)} \big\|_{L^2}}$.

Example 8.1
Set
$$p(x) = \sum_{k=1}^{6} \frac{1}{k^2} \sin(kx), \qquad q_i(x) = \sum_{k=0}^{i-1} x^k, \quad i = 1, 2, 3,$$
$$y_i(x) = p(x) + q_i(x), \qquad y_i^\delta(x) = y_i(x) + \delta\, \frac{\sin 12x}{\sqrt{\pi}}.$$
We use $y_i^\delta$ as the test function for $i$-th order numerical differentiation. Notice that
$$y_1'(x) = \sum_{k=1}^{6} \frac{1}{k} \cos(kx), \qquad y_2''(x) = -\sum_{k=1}^{6} \sin(kx), \qquad y_3'''(x) = -\sum_{k=1}^{6} k \cos(kx),$$
$$y_1(0) = 1; \qquad y_2(0) = 1, \quad y_2'(0) = 1 + \sum_{k=1}^{6} \frac{1}{k}; \qquad y_3(0) = 1, \quad y_3'(0) = 1 + \sum_{k=1}^{6} \frac{1}{k}, \quad y_3''(0) = 2.$$

Table 1: relative error $r$ for $p = 1, 2, 3$ and the tested values of $n$, for the noise pairs $(\delta, \delta_i)$ with $\delta_i = 0$ and with $\delta_i = 0.01$ respectively.

Table 2: $t$ denotes the CPU time (s) for the corresponding experiment in Table 1.

We first investigate the case $\delta_i = 0$. The tested parameters are grouped into three sets $V_1$, $V_2$, $V_3$. Comparing the three groups, we quickly find that when $n \in V_2$ the relative error $r$ is the smallest and almost approaches 0. This displays the good filtering effect of the algorithm on a specific class of functions (the sum of a trigonometric polynomial and a polynomial of order less than $p$, where $p$ denotes the order of numerical differentiation), and also corresponds to the fact indicated by (6.9): the best regularization parameter is $n = 6$.

Now we explain the source of the good filtering effect in the case $p = 2$; the other cases are similar.
- The exact measurement of the initial value data gives a precise Taylor polynomial truncation that eliminates the polynomial term $q_2(x)$ in $y^\delta$; the computational scheme is thus transformed into $A_n^{(2)\dagger} P_n (\bar p(x) + \delta y)$, where $\bar p(x) = p(x) - \big( \sum_{k=1}^{6} \frac{1}{k} \big)\, x$.
- The parameter choice $n = 6$ eliminates the noise component $\delta y$ exactly, so that $A_6^{(2)\dagger} P_6 (\bar p(x) + \delta y) = A_6^{(2)\dagger} P_6\, \bar p(x)$. Notice that $\bar p(x) \in R(A^{(2)})$ and $A^{(2)\dagger} \bar p(x) = y_2''$, hence
$$A_6^{(2)} y_2'' = P_6 A^{(2)} P_6\, y_2'' = P_6 A^{(2)} y_2'' = P_6 A^{(2)} A^{(2)\dagger} \bar p(x) = P_6 P_{\overline{R(A^{(2)})}}\, \bar p(x) = P_6\, \bar p(x),$$
and therefore $A_6^{(2)\dagger} P_6\, \bar p(x) = y_2''$; that is, the approximate solution strictly equals the exact solution in this case, and the good filtering effect appears.

The deviation of the accuracy within $V_1$ and $V_3$ can be explained in the same way. In the former case, the choice $n = 2$ has lower accuracy because $P_n y^\delta$ with $n = 2$ does not cover the major part of $p(x)$; with the increase of $n \in V_1$, the coverage increases and hence the accuracy improves.
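The exact-recovery mechanism can be reproduced in a few lines. The sketch below (Python) is a simplified stand-in for $A_6^{(2)\dagger}P_6$, not the paper's full algorithm: for this band-limited example, the scheme amounts to projecting the noisy data onto the first 6 sine frequencies and differentiating the truncated series twice; since $\sin 12x$ is orthogonal to frequencies $1,\ldots,6$, the noise is annihilated exactly.

```python
import math

# Filtering-effect sketch: project noisy data onto sine frequencies 1..6,
# then differentiate the projection twice.  A simplified stand-in for the
# scheme A_6^(2)+ P_6 in the band-limited case (not the paper's full algorithm).
N, M, delta = 6, 8000, 0.5
xs = [math.pi * (i + 0.5) / M for i in range(M)]   # midpoint nodes on (0, pi)

def y_bar(x):                                      # sum_{k<=6} sin(kx)/k^2, so y'' = -sum sin(kx)
    return sum(math.sin(k * x) / k**2 for k in range(1, N + 1))

def y_noisy(x):
    return y_bar(x) + delta * math.sin(12 * x) / math.sqrt(math.pi)

# sine coefficients b_k = (2/pi) * int_0^pi y_noisy(x) sin(kx) dx, k = 1..6
b = {k: (2 / math.pi) * (math.pi / M) * sum(y_noisy(x) * math.sin(k * x) for x in xs)
     for k in range(1, N + 1)}

d2_approx = lambda x: -sum(k**2 * b[k] * math.sin(k * x) for k in range(1, N + 1))
d2_exact  = lambda x: -sum(math.sin(k * x) for k in range(1, N + 1))

err = max(abs(d2_approx(x) - d2_exact(x)) for x in xs[::80])
assert err < 1e-3   # noise sin(12x) is filtered out; recovery is (numerically) exact
```
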
As to the latter case $V_3$, the noise $\delta y = \delta\, \frac{\sin 12x}{\sqrt{\pi}}$ now enters the computation, and thus the good filtering effect disappears.

For Case II with $\delta_i = 0.01$, the corresponding relative errors $r$ for $n \in V_2$ all approach or exceed 0.01 uniformly. Compared with Case I under the same choice of regularization parameter, the good filtering effect of Case I is strongly weakened. This is because the noise $\delta_i$ in the initial value data yields an incomplete Taylor polynomial truncation, and hence the parameter $n = 6$ cannot eliminate the lower-frequency noise components in
$$\Lambda^{(p)}(x) - \sum_{k=0}^{p-1} \frac{y^{(k)}(0)}{k!}\, x^k.$$

In this subsection, we mainly investigate the effectiveness of the parameter choices (6.4) and (7.1) in the general case (compared with the case in subsection 8.1).
Example 8.2
Set
$$y(x) = \begin{cases} \pi x - x^2, & 0 \le x < \frac{\pi}{2}, \\[2pt] x^2 - \pi x + \frac{\pi^2}{2}, & \frac{\pi}{2} \le x < \pi, \end{cases} \qquad y(0) = 0,$$
$$y^\delta(x) = y(x) + \delta\, \frac{\sin kx}{\sqrt{\pi}},$$
$$y'(x) = \begin{cases} \pi - 2x, & 0 \le x < \frac{\pi}{2}, \\[2pt] 2x - \pi, & \frac{\pi}{2} \le x < \pi \end{cases} \ \in H^1_{per}(0,\pi).$$

Example 8.3
Set
$$y(x) = \begin{cases} \frac{\pi}{2} x^2 - \frac{x^3}{3}, & 0 \le x < \frac{\pi}{2}, \\[2pt] \frac{x^3}{3} - \frac{\pi}{2} x^2 + \frac{\pi^2}{2} x - \frac{\pi^3}{12}, & \frac{\pi}{2} \le x < \pi, \end{cases} \qquad y(0) = 0, \quad y'(0) = 0,$$
$$y^\delta(x) = y(x) + \delta\, \frac{\sin kx}{\sqrt{\pi}},$$
$$y''(x) = \begin{cases} \pi - 2x, & 0 \le x < \frac{\pi}{2}, \\[2pt] 2x - \pi, & \frac{\pi}{2} \le x < \pi \end{cases} \ \in H^1_{per}(0,\pi).$$

Example 8.4
Set
$$y(x) = \begin{cases} \frac{\pi}{6} x^3 - \frac{x^4}{12}, & 0 \le x < \frac{\pi}{2}, \\[2pt] \frac{x^4}{12} - \frac{\pi}{6} x^3 + \frac{\pi^2}{4} x^2 - \frac{\pi^3}{12} x + \frac{\pi^4}{96}, & \frac{\pi}{2} \le x < \pi, \end{cases} \qquad y(0) = 0, \quad y'(0) = 0, \quad y''(0) = 0,$$
$$y^\delta(x) = y(x) + \delta\, \frac{\sin kx}{\sqrt{\pi}},$$
$$y'''(x) = \begin{cases} \pi - 2x, & 0 \le x < \frac{\pi}{2}, \\[2pt] 2x - \pi, & \frac{\pi}{2} \le x < \pi \end{cases} \ \in H^1_{per}(0,\pi).$$

Figure 1: The figure corresponds to Example 8.2, where the blue curve denotes the exact derivative; the red, yellow and green curves denote three Case I choices of $(\delta, \delta_i, n)$ with $\delta_i = 0$ (the latter two with $n = 10$ and $n = 23$), and the light cyan, manganese purple and black curves denote three Case II choices with $\delta_i \neq 0$ (the latter two with $n = 5$ and $n = 12$).

Figure 2: The figure corresponds to Example 8.3, with the same curve convention as Figure 1 (Case I with the latter two $n = 3$ and $n = 6$; Case II with the latter two $n = 3$ and $n = 4$).
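As a sanity check on the piecewise formulas of Example 8.2 as printed above, one can verify numerically that $y$ is $C^1$ across the break point $\pi/2$, that $y'$ matches a finite-difference quotient of $y$, and that $y'$ takes equal values at the endpoints, consistent with $y' \in H^1_{per}(0,\pi)$. A verification sketch (Python, not part of the paper's Matlab experiments):

```python
import math

# Consistency check for Example 8.2 (piecewise coefficients as given above).
def y(x):
    return math.pi * x - x**2 if x < math.pi / 2 else x**2 - math.pi * x + math.pi**2 / 2

def dy(x):
    return math.pi - 2 * x if x < math.pi / 2 else 2 * x - math.pi

m = math.pi / 2
assert abs(y(m - 1e-9) - y(m + 1e-9)) < 1e-6     # y continuous at pi/2
assert abs(dy(m - 1e-9) - dy(m + 1e-9)) < 1e-6   # y' continuous at pi/2
h = 1e-6
for x in (0.3, 1.0, 2.0, 3.0):
    fd = (y(x + h) - y(x - h)) / (2 * h)         # central difference
    assert abs(fd - dy(x)) < 1e-5
assert abs(dy(0.0) - dy(math.pi)) < 1e-12        # endpoint values agree: y' periodic
```
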
In this subsection, we utilize strategies (6.4) and (7.1) to determine the regularization parameter for the two cases with different noise levels respectively. Figures 1-3 show the good effectiveness of the strategy proposed in this paper in a general setting.
In the following numerical examples with non-periodic, discontinuous derivatives, we choose the parameter $n^{(p)} = n^{(p)}(\delta, \delta_i)$, $p = 1, 2, 3$, by experiment rather than by (6.3), owing to the uncertainty in determining $\kappa$. Figures corresponding to the least relative error $r$ in the numerical differentiation of each order are attached.

Example 8.5
Set
$$y(x) = \begin{cases} x, & 0 \le x < 1, \\ 1, & 1 \le x < 2, \\ 2 - \frac{x}{2}, & 2 \le x < \pi, \end{cases}$$

Figure 3: The figure corresponds to Example 8.4, where the blue curve denotes the exact derivative; the red, yellow and green curves denote three Case I choices of $(\delta, \delta_i, n)$ with $\delta_i = 0$, and the light cyan, manganese purple and black curves denote three Case II choices with $\delta_i \neq 0$ (in each case the latter two use $n = 1$ and $n = 2$).

$$y(0) = 0, \qquad y^\delta(x) = y(x) + \delta\, \frac{\sin kx}{\sqrt{\pi}},$$
$$y'(x) = \begin{cases} 1, & 0 \le x < 1, \\ 0, & 1 \le x < 2, \\ -\frac{1}{2}, & 2 \le x < \pi. \end{cases}$$

Example 8.6
Set
$$y(x) = \begin{cases} \cdots, & 0 \le x < 1, \\ \cdots, & 1 \le x < 2, \\ \cdots, & 2 \le x < \pi, \end{cases} \qquad y(0) = 0, \quad y'(0) = 0,$$
$$y^\delta(x) = y(x) + \delta\, \frac{\sin kx}{\sqrt{\pi}},$$

Table 3: relative error $r$ for $p = 1, 2, 3$ and the tested values of $n$, for the noise pairs $(\delta, \delta_i)$ with $\delta_i = 0$ and with $\delta_i = 0.01$ respectively ($k = 8$).

Table 4: $t$ denotes the CPU time (s) for the corresponding experiments in Table 3.

$$y''(x) = \begin{cases} x - \cdots, & 0 \le x < 1, \\ \cdots, & 1 \le x < 2, \\ \cdots, & 2 \le x < \pi. \end{cases}$$

Example 8.7
Set
$$y(x) = \begin{cases} \frac{x^4}{24} + x^3, & 0 \le x < 1, \\ \cdots, & 1 \le x < 2, \\ \cdots, & 2 \le x < \pi, \end{cases} \qquad y(0) = 0, \quad y'(0) = 0, \quad y''(0) = 0,$$
and
$$y^\delta(x) = y(x) + \delta\, \frac{\sin kx}{\sqrt{\pi}},$$
$$y'''(x) = \begin{cases} x + 6, & 0 \le x < 1, \\ \cdots, & 1 \le x < 2, \\ \cdots, & 2 \le x < \pi. \end{cases}$$

Figure 4: The figure corresponds to Example 8.5, where the blue curve denotes the exact derivative, and the red and black curves denote Case I ($\delta_i = 0$) with $n = 24$ and Case II ($\delta_i \neq 0$) with $n = 24$ respectively.

Figure 5: The figure corresponds to Example 8.6, where the blue curve denotes the exact derivative, and the red and black curves denote Case I with $n = 24$ and Case II with $n = 16$ respectively.

Figure 6: The figure corresponds to Example 8.7, where the blue curve denotes the exact derivative, and the red and black curves denote Case I with $n = 16$ and Case II with $n = 8$ respectively.
It can be concluded from Figures 4, 5 and 6 that, in both cases, when the regularization parameter is chosen appropriately, the computational error can be well controlled. However, because the frequency band of $y$ intersects that of $\delta y$ (this does not happen in the examples listed in subsection 8.1), the good filtering effect disappears in the cases with discontinuous derivatives. Besides, we note that when the choice of $n$ increases to the set $U$ of largest tested values, the CPU time grows to a considerable amount.
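The loss of the filtering effect for discontinuous derivatives can be traced to slow Fourier decay. A small check (Python) with a square-wave model for $y^{(p)}$ on $(0,\pi)$ — a hypothetical stand-in for the examples above, not the paper's exact data — shows the truncation error of $P_n$ decaying only at roughly the rate $n^{-1/2}$:

```python
import math

# Square-wave model f = 1 on (0, pi/2), -1 on (pi/2, pi): a stand-in for a
# discontinuous derivative.  Its sine coefficients decay like 1/k, so the
# L^2 truncation error ||(I - P_n) f|| decays only like n^(-1/2).
def b(k):
    # b_k = (2/pi) * int_0^pi f(x) sin(kx) dx, evaluated in closed form
    return (2 / math.pi) * (1 - 2 * math.cos(k * math.pi / 2) + math.cos(k * math.pi)) / k

def trunc_err(n, K=20000):
    # Parseval: ||(I - P_n) f||^2 = (pi/2) * sum_{k>n} b_k^2   (||sin kx||^2 = pi/2)
    return math.sqrt((math.pi / 2) * sum(b(k) ** 2 for k in range(n + 1, K)))

e8, e32, e128 = trunc_err(8), trunc_err(32), trunc_err(128)
assert e8 > e32 > e128
assert 1.5 < e8 / e32 < 3.0   # quadrupling n only halves the error (~ n^(-1/2))
```

This is why, unlike in subsection 8.1, no finite truncation level can reproduce a discontinuous derivative exactly, and the parameter must balance this slow decay against noise amplification.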
9. Conclusion
The core theoretical work of this paper lies in the uniform upper estimate for
$$\| A_n^{(p)\dagger} P_n A^{(p)} \|_{L^2 \to L^2},$$
where $A^{(p)}$, $A_n^{(p)}$ are defined in (2.4) and (3.3) respectively. This determines the estimate for the approximation error and gives a complete answer to the regularization procedure. In the experiments, the algorithm shows advantages over other classical regularization methods:
- It induces a noise-independent a priori parameter choice strategy for functions of the form
$$y(x) = \sum_{k=1}^{N_1} a_k \cos kx + \sum_{k=1}^{N_2} b_k \sin kx + \sum_{k=0}^{p-1} c_k x^k,$$
where $p$ is the order of numerical differentiation. A good filtering effect (error approaching 0) is displayed when the algorithm acts on functions of this class with the best parameter choice.
- Discontinuous derivatives can also be recovered well, although there remains an unknown constant $\kappa$ to be tuned in the experiments.

Appendix A. Representation of $M_n^{(p)}$

$$M_n^{(p)} = \begin{pmatrix} a^{(p)} & u_1^{(p)} & \cdots & u_n^{(p)} \\ v_1^{(p)T} & M_{11}^{(p)} & \cdots & M_{1n}^{(p)} \\ \vdots & \vdots & \ddots & \vdots \\ v_n^{(p)T} & M_{n1}^{(p)} & \cdots & M_{nn}^{(p)} \end{pmatrix}_{(2n+1)\times(2n+1)}, \qquad p = 1, 2, 3,$$
where $a^{(1)} = \cdots$, $a^{(2)} = \cdots$, $a^{(3)} = \cdots$;
$$u_k^{(1)} = (0,\ \cdots), \qquad v_k^{(1)} = -u_k^{(1)},$$
$$u_k^{(2)} = (\cdots,\ \cdots), \qquad v_k^{(2)} = (\cdots,\ -\cdots),$$
$$u_k^{(3)} = (\cdots,\ \cdots), \qquad v_k^{(3)} = (\cdots,\ \cdots), \qquad 1 \le k \le n;$$
$$M^{(1)}_{ij} = 0 \ (i \neq j), \qquad M^{(1)}_{ii} = \cdots, \qquad 1 \le i \le n;$$
$$M^{(2)}_{ij} = \cdots \ (1 \le i, j \le n,\ i \neq j), \qquad M^{(2)}_{ii} = \cdots, \qquad 1 \le i \le n;$$
$$M^{(3)}_{ij} = \cdots \ (1 \le i, j \le n,\ i \neq j), \qquad M^{(3)}_{ii} = \cdots, \qquad 1 \le i \le n.$$

Appendix B. Some Fourier Expansions

Lemma Appendix B.1
For $A^{(p)}$ defined in (2.4), when $j \ge n + 1$, set
$$\Big( P_n A^{(p)} \Big( \frac{\cos jt}{\sqrt{\pi}} \Big) \Big)(x) = \frac{c_0^{(p)}}{\sqrt{\pi}} + \sum_{k=1}^{n} c_k^{(p)}\, \frac{\cos kx}{\sqrt{\pi}} + \sum_{k=1}^{n} d_k^{(p)}\, \frac{\sin kx}{\sqrt{\pi}}.$$
Then the Fourier coefficients are determined as follows:
$$c_0^{(1)} = c_k^{(1)} = d_k^{(1)} = 0, \quad k \in \{1, \ldots, n\}, \tag{B.1}$$
$$c_0^{(2)} = \cdots, \quad c_k^{(2)} = 0, \quad d_k^{(2)} = 0, \quad k \in \{1, \ldots, n\}, \tag{B.2}$$
$$c_0^{(3)} = \cdots, \quad c_k^{(3)} = 0, \quad d_k^{(3)} = -\cdots, \quad k \in \{1, \ldots, n\}. \tag{B.3}$$

Lemma Appendix B.2
For $A^{(p)}$ defined in (2.4), when $j \ge n + 1$, set
$$\Big( P_n A^{(p)} \Big( \frac{\sin jt}{\sqrt{\pi}} \Big) \Big)(x) = \frac{s_0^{(p)}}{\sqrt{\pi}} + \sum_{k=1}^{n} s_k^{(p)}\, \frac{\cos kx}{\sqrt{\pi}} + \sum_{k=1}^{n} t_k^{(p)}\, \frac{\sin kx}{\sqrt{\pi}}.$$
Then the Fourier coefficients are determined as follows:
$$s_0^{(1)} = \cdots, \quad s_k^{(1)} = 0, \quad t_k^{(1)} = 0, \quad k \in \{1, \ldots, n\}, \tag{B.4}$$
$$s_0^{(2)} = \cdots, \quad s_k^{(2)} = 0, \quad t_k^{(2)} = -\cdots, \quad k \in \{1, \ldots, n\}, \tag{B.5}$$
$$s_0^{(3)} = \cdots, \quad s_k^{(3)} = \cdots, \quad t_k^{(3)} = -\cdots, \quad k \in \{1, \ldots, n\}. \tag{B.6}$$

Appendix C. Some Inequalities

Proposition Appendix C.1
Set $L_n$, $K_n$, $F_n$, $T_n$ as defined in Theorem 4.1. Then
$$L_n^{-1} \in [\,10\,n,\ \cdots\,n\,], \tag{C.1}$$
$$K_n \in [\,-\cdots,\ -\cdots\,], \tag{C.2}$$
$$F_n \in \big[\, \cdots\, \sqrt{n}\,(2n+1),\ \cdots\, \sqrt{n}\,(2n+1) \,\big], \tag{C.3}$$
$$T_n \in \Big[\, \frac{1}{1396}\cdot\frac{1}{n(n+1)},\ \frac{1}{340}\cdot\frac{1}{n(2n+1)} \,\Big], \qquad n \ge \cdots. \tag{C.4}$$

Acknowledgement
I would like to thank Professor Nailin Du for his helpful discussions and for the suggestion to extend the numerical differentiation to higher order. The author also thanks Professor Hua Xiang for his comments. Besides, I really appreciate the help with figure modification from Nianci Wu. The work of Yidong Luo is supported by the National Natural Science Foundation of China under grants 11571265 and NSFC-RGC No. 11661161017.

References

[1] R. A. Adams, J. J. F. Fournier: Sobolev Spaces, Second Edition. Academic Press, 2003.
[2] A. Ben-Israel, T. N. E. Greville: Generalized Inverses: Theory and Applications, Second Edition. Springer-Verlag, New York, 2003.
[3] A. Kirsch: An Introduction to the Mathematical Theory of Inverse Problems. Springer, New York, 1996.
[4] A. Ramm, A. Smirnova: On stable numerical differentiation. Math. Comput. 70, 1131-1153 (2001).
[5] L. C. Evans, R. F. Gariepy: Measure Theory and Fine Properties of Functions. Studies in Advanced Mathematics, CRC Press, Boca Raton, FL, 1992.
[6] C. Groetsch: Generalized Inverses of Linear Operators. Marcel Dekker, New York, 1977.
[7] C. Groetsch: On a regularization-Ritz method for Fredholm equations of the first kind. J. Integral Equations 4, 173-182 (1982).
[8] C. Groetsch: The Theory of Tikhonov Regularization for Fredholm Equations of the First Kind. Pitman, Boston, 1984.
[9] F. Dou, C. Fu, Y. Ma: A wavelet-Galerkin method for high order numerical differentiation. Appl. Math. Comput. 215, 3702-3712 (2010).
[10] H. Engl, M. Hanke, A. Neubauer: Regularization of Inverse Problems. Kluwer, Dordrecht, 1996.
[11] H. Egger, H. Engl: Tikhonov regularization applied to the inverse problem of option pricing: convergence analysis and rates. Inverse Problems 21, 1027-1045 (2005).
[12] J. Cullum: Numerical differentiation and regularization. SIAM J. Numer. Anal. 8, 254-265 (1971).
[13] M. Dehghan: Quasi-implicit and two-level explicit finite-difference procedures for solving the one-dimensional advection equation. Applied Mathematics and Computation 167, 46-67 (2005).
[14] M. Dehghan: Numerical solution of the three-dimensional advection-diffusion equation. Applied Mathematics and Computation 150, 5-19 (2004).
[15] M. Dehghan, M. Abbaszadeh: A finite difference/finite element technique with error estimate for space fractional tempered diffusion-wave equation. Computers and Mathematics with Applications 75(8), 2903-2914 (2018).
[16] M. Dehghan: Finite difference procedures for solving a problem arising in modeling and design of certain optoelectronic devices. Mathematics and Computers in Simulation 71, 16-30 (2006).
[17] M. Hanke, O. Scherzer: Inverse problems light: numerical differentiation. Am. Math. Mon. 108, 512-521 (2001).
[18] N. Du: Finite-dimensional approximation settings for infinite-dimensional Moore-Penrose inverses. SIAM J. Numer. Anal. 46, 1454-1482 (2008).
[19] P. Assari, M. Dehghan: Solving a class of nonlinear boundary integral equations based on the meshless local discrete Galerkin (MLDG) method. Applied Numerical Mathematics 123, 137-158 (2018).
[20] P. Assari, M. Dehghan: A meshless Galerkin scheme for the approximate solution of nonlinear logarithmic boundary integral equations utilizing radial basis functions. Journal of Computational and Applied Mathematics 333, 362-381 (2018).
[21] P. Assari, M. Dehghan: A meshless local discrete Galerkin (MLDG) scheme for numerically solving two-dimensional nonlinear Volterra integral equations. Applied Mathematics and Computation 350, 249-265 (2019).
[22] P. Assari: A meshless local Galerkin method for the numerical solution of Hammerstein integral equations based on the moving least squares technique. Journal of Applied Analysis and Computation 9, 75-104 (2019).
[23] P. Mathé, S. Pereverzev: The use of higher order finite difference schemes is not dangerous. Journal of Complexity 25, 3-10 (2009).
[24] R. Kress: Linear Integral Equations. Springer-Verlag, Berlin, 1989.
[25] T. Hein, B. Hofmann: On the nature of ill-posedness of an inverse problem arising in option pricing. Inverse Problems 19, 1319-1338 (2003).
[26] T. Wei, Y. Hon, Y. Wang: Reconstruction of numerical derivatives from scattered noisy data. Inverse Problems 21, 657-672 (2005).
[27] X. Wan, Y. Wang, M. Yamamoto: Detection of irregular points by regularization in numerical differentiation and application to edge detection. Inverse Problems 22, 1089-1103 (2006).
[28] Y. Wang, X. Jia, J. Cheng: