Design of accurate formulas for approximating functions in weighted Hardy spaces by discrete energy minimization
Ken’ichiro Tanaka∗, Masaaki Sugihara†

January 13, 2018
Abstract
We propose a simple and effective method for designing approximation formulas for weighted analytic functions. We consider spaces of such functions according to weight functions expressing the decay properties of the functions. Then, we adopt the minimum worst error of the n-point approximation formulas in each space for characterizing the optimal sampling points for the approximation. In order to obtain approximately optimal sampling points, we consider minimization of a discrete energy related to the minimum worst error. Consequently, we obtain an approximation formula and its theoretical error estimate in each space. In addition, from some numerical experiments, we observe that the formula generated by the proposed method outperforms the corresponding formula derived with sinc approximation, which is near optimal in each space.

Keywords: weighted Hardy space; function approximation; potential theory; discrete energy minimization; barycentric form.
1 Introduction

In the recent paper [17], the authors proposed a method for designing interpolation formulas on R for approximating functions in spaces of weighted analytic functions. They considered the weighted Hardy space defined by

  H^∞(D_d, w) := { f : D_d → C | f is analytic on D_d and ‖f‖ < ∞ },  (1.1)

where d > 0, D_d = { z ∈ C | |Im z| < d }, w is a weight function with w(z) ≠ 0 for any z ∈ D_d, and

  ‖f‖ := sup_{z ∈ D_d} | f(z)/w(z) |.  (1.2)

In this study, we drastically simplify the method and obtain approximation formulas in the spaces H^∞(D_d, w) that are competitive with the formulas reported previously. Furthermore, we broaden the class of the weight functions w to which the method is applicable.

∗ Department of Mathematical Informatics, Graduate School of Information Science and Technology, University of Tokyo. 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8656, Japan. [email protected]
† Department of Physics and Mathematics, College of Science and Engineering, Aoyama Gakuin University. 5-10-1, Fuchinobe, Chuo-ku, Sagamihara-shi, Kanagawa 252-5258, Japan.

The space H^∞(D_d, w) is important as a space of transformed functions that often appear in transformation-based formulas for function approximation in sinc numerical methods [10, 11, 12, 19]. These numerical methods are based on the sinc function sinc(x) = (sin πx)/(πx) combined with a useful variable transformation ψ. The building block of the methods is the sinc approximation given by

  f(x) ≈ Σ_{k=−N₋}^{N₊} f(kh) sinc(x/h − k),  (1.3)

where h > 0. Usually, we consider approximation of an analytic function g defined on a domain D ⊂ C. Then, we employ a map ψ : D_d → D as a variable transformation and apply the sinc approximation in (1.3) to the transformed function f(x) = g(ψ(x)) for x ∈ R. If the function g has a decay property, the map ψ yields decay of the function f, which enables us to choose small N₋ and N₊ in the sum in (1.3). Owing to this simple principle, the sinc approximation is useful for various numerical methods. Typical maps ψ = ψ_i used as such transformations are

  ψ₁(x) = tanh(x/2)  (TANH transformation [5, 9, 10, 11, 13]),  (1.4)
  ψ₂(x) = tanh((π/2) sinh x)  (DE transformation [14]),  (1.5)

where “DE” denotes “double exponential”. Therefore, we consider a weight function w to represent the decay property of f given by ψ.

Sugihara [12] showed that the sinc approximation is “nearly optimal” in H^∞(D_d, w) for several weight functions w. He adopted the minimum worst error E^min_n(H^∞(D_d, w)) of an n-point approximation formula in H^∞(D_d, w), whose definition is given later in (2.3), and showed that the error of the sinc approximation for the functions in H^∞(D_d, w) is close to E^min_n(H^∞(D_d, w)). However, an explicit optimal formula attaining E^min_n(H^∞(D_d, w)) is known only in a limited case [16].

In the paper [17], with a view to an optimal formula in H^∞(D_d, w) for a general weight function w, the authors started with the expression

  E^min_n(H^∞(D_d, w)) = inf_{a_j ∈ R} sup_{x ∈ R} | w(x) ∏_{j=1}^{n} tanh( (π/(4d)) (x − a_j) ) |  (1.6)

given in [12], and employed fundamental tools in potential theory [8] to obtain an accurate approximation formula.
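The sinc approximation (1.3) is simple to implement. The following sketch illustrates the mechanism; the target function sech(2x), the choice N₋ = N₊ = N = 25, and the step size h are illustrative assumptions for this demo, not parameter choices taken from the paper.

```python
import numpy as np

def sinc_approx(f, h, N):
    """Build the sinc approximant (1.3) with N_- = N_+ = N and step size h > 0."""
    k = np.arange(-N, N + 1)
    fk = f(k * h)  # sampled values f(kh)
    def approx(x):
        x = np.asarray(x, dtype=float)
        # np.sinc(t) = sin(pi t)/(pi t), which matches sinc(x/h - k) here
        return np.sum(fk * np.sinc(x[..., None] / h - k), axis=-1)
    return approx

# Illustrative target: f(x) = sech(2x), analytic on |Im x| < pi/4 and
# single-exponentially decaying on R.
f = lambda x: 1.0 / np.cosh(2.0 * x)
N = 25
h = np.pi / np.sqrt(8.0 * N)  # heuristic balance of the two error sources (assumed)
approx = sinc_approx(f, h, N)

xs = np.linspace(-3.0, 3.0, 401)
max_err = np.max(np.abs(approx(xs) - f(xs)))
```

The step size balances the discretization error (which shrinks as h decreases) against the truncation error (which shrinks as Nh grows); at the sampling points x = kh the approximant reproduces f exactly.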
(These maps are also used for numerical integration based on a variable transformation.)

Their method was based on approximating the equilibrium measure that minimizes the weighted Green energy, by considering the integral equation corresponding to a Frostman-type characterization of the equilibrium measure. The integral equation was slightly complicated because it contained unknown parameters representing the support of the equilibrium measure. For simplicity, the authors limited their study to weight functions that are even on R [17, Assumption 2.2]. Subsequently, they used some heuristic techniques to derive an approximate density function for each equilibrium measure, and obtained a sequence of sampling points for interpolation by discretizing the density function.

In this paper, we propose a simplified method for obtaining sampling points for approximating functions in H^∞(D_d, w). This method is based on discrete energy minimization and requires only the log-concavity of a weight function w on R. On this assumption, the minimization problem becomes convex, and we can show that it has a unique optimal solution and that the solution is characterized by a stationary condition. Moreover, we can compute it by a standard technique of convex optimization. In addition, we can deal with weight functions w that are not even on R, i.e., we can deal with a wider class of the spaces H^∞(D_d, w) than the previous study [17].

The rest of this paper is organized as follows. Section 2 presents mathematical preliminaries, including some fundamental tools in potential theory. In Section 3, we analyze the discrete energy minimization problem providing the sampling points for interpolation and a lower bound for the corresponding discrete potential. In Section 4, we propose an approximation formula by using the sampling points and bound its error in each space H^∞(D_d, w). In Section 5, we present some results of numerical experiments. Finally, in Section 6, we conclude this work.
2 Preliminaries

Let d be a positive real number, and let D_d be the strip region defined by D_d := { z ∈ C | |Im z| < d }. In order to specify the weight functions w on D_d mathematically, we use the function space B(D_d) of all functions ζ that are analytic on D_d and satisfy

  lim_{x→±∞} ∫_{−d}^{d} |ζ(x + iy)| dy = 0  (2.1)

and

  lim_{y→d−0} ∫_{−∞}^{∞} ( |ζ(x + iy)| + |ζ(x − iy)| ) dx < ∞.  (2.2)

Then, we regard w : D_d → C as a weight function if w satisfies the following assumption.

Assumption 1. Function w belongs to B(D_d), does not vanish at any point in D_d, and takes positive real values in (0, 1] on the real axis.

Furthermore, throughout this work, we assume the log-concavity of the weight function w.

Assumption 2.
Function log w is strictly concave on R.

For the weight function w that satisfies Assumptions 1 and 2, we define the weighted Hardy space on D_d by (1.1), i.e., H^∞(D_d, w) := { f : D_d → C | f is analytic on D_d and ‖f‖ < ∞ }, where ‖f‖ := sup_{z ∈ D_d} | f(z)/w(z) |.

(Footnotes: Simple log-concavity is assumed for w in [17, Assumption 2.3]. See [4, Chap. 10] as a reference for the Hardy spaces over general domains.)

2.2 Optimal approximation

We provide a mathematical formulation for optimality of the approximation formulas in the space H^∞(D_d, w), with the weight function w satisfying Assumptions 1 and 2. In this regard, for a given positive integer n, we first consider all the possible n-point interpolation formulas on R that can be applied to any function f ∈ H^∞(D_d, w). Subsequently, we choose a criterion that determines optimality of a formula in H^∞(D_d, w). Based on [12], we adopt the minimum worst error E^min_n(H^∞(D_d, w)) given by

  E^min_n(H^∞(D_d, w)) := inf_{1≤l≤n} inf_{m₁,…,m_l: m₁+⋯+m_l = n} inf_{a_j ∈ D_d distinct} inf_{φ_{jk}} sup_{‖f‖≤1} sup_{x ∈ R} | f(x) − Σ_{j=1}^{l} Σ_{k=0}^{m_j−1} f^{(k)}(a_j) φ_{jk}(x) |,  (2.3)

where φ_{jk} are analytic functions on D_d. We regard a formula that attains this value as optimal. We can provide some characterizations of E^min_n(H^∞(D_d, w)).
To achieve this, for a mutually distinct n-sequence a = {a_j}_{j=1}^{n} ⊂ R, we introduce the functions

  T_d(x) = tanh( (π/(4d)) x ),  (2.4)

  B_n(x; a, D_d) = ∏_{j=1}^{n} ( T_d(x) − T_d(a_j) ) / ( 1 − T_d(a_j) T_d(x) ),  (2.5)

  B_{n;k}(x; a, D_d) = ∏_{1≤j≤n, j≠k} ( T_d(x) − T_d(a_j) ) / ( 1 − T_d(a_j) T_d(x) ),  (2.6)

and the n-point interpolation formula

  L_n[a; f](x) = Σ_{k=1}^{n} f(a_k) · ( B_{n;k}(x; a, D_d) w(x) ) / ( B_{n;k}(a_k; a, D_d) w(a_k) ) · T'_d(x − a_k) / T'_d(0).  (2.7)

Then, we describe characterizations of E^min_n(H^∞(D_d, w)), including the expression in (1.6), by the following proposition.

Proposition 2.1 ([12, Lemma 4.3 and its proof]). Let a = {a_j}_{j=1}^{n} ⊂ R be a mutually distinct sequence. Then, we have the following error estimate of the formula in (2.7):

  E^min_n(H^∞(D_d, w)) ≤ sup_{f ∈ H^∞(D_d,w), ‖f‖≤1} ( sup_{x ∈ R} | f(x) − L_n[a; f](x) | )  (2.8)
   ≤ sup_{x ∈ R} | B_n(x; a, D_d) w(x) |.  (2.9)

Furthermore, if we take the infimum over all the n-sequences a in the above inequalities, then each of them becomes an equality:

  E^min_n(H^∞(D_d, w)) = inf_{a_j ∈ R} sup_{f ∈ H^∞(D_d,w), ‖f‖≤1} ( sup_{x ∈ R} | f(x) − L_n[a; f](x) | )  (2.10)
   = inf_{a_j ∈ R} [ sup_{x ∈ R} | B_n(x; a, D_d) w(x) | ].  (2.11)

(The function given by (2.5) is called the transformed Blaschke product.)

Proposition 2.1 indicates that the interpolation formula L_n[a; f](x) provides an explicit form of an optimal approximation formula if there exists an n-sequence a = a* that attains the infimum in (2.11). Since

  ( T_d(x) − T_d(a_j) ) / ( 1 − T_d(a_j) T_d(x) ) = T_d(x − a_j)

for a_j ∈ R and x ∈ R, the expression in (2.11) can be rewritten in the form

  inf_{a_j ∈ R} sup_{x ∈ R} | ∏_{j=1}^{n} T_d(x − a_j) w(x) |.
Therefore, as far as a = {a_j}_{j=1}^{n} is concerned, we can consider the following equivalent alternative:

  inf_{a_j ∈ R} sup_{x ∈ R} [ Σ_{j=1}^{n} log |T_d(x − a_j)| + log w(x) ].  (2.12)

To deal with the optimization problem corresponding to (2.12), we introduce the following notation:

  K(x) = −log |T_d(x)|  ( = −log | tanh( (π/(4d)) x ) | ),  (2.13)
  Q(x) = −log w(x).  (2.14)

Furthermore, for an integer n ≥ 2, let

  R_n = { (a₁, …, a_n) ∈ R^n | a₁ < ⋯ < a_n }  (2.15)

be the set of mutually distinct n-point configurations in R. Then, by using the function defined by

  U^D_n(a; x) = Σ_{i=1}^{n} K(x − a_i),  x ∈ R,  (2.16)

for a = (a₁, …, a_n) ∈ R_n, we can formulate the optimization problem corresponding to (2.12) as follows:

  (D) maximize inf_{x ∈ R} ( U^D_n(a; x) + Q(x) ) subject to a ∈ R_n.  (2.17)

Problem (D) in (2.17) is closely related to potential theory. In fact, the function K(x − y) of x, y ∈ R is the kernel function derived from the Green function of D_d,

  g_{D_d}(z₁, z₂) = −log | ( T_d(z₁) − T_d(z₂) ) / ( 1 − T_d(z₂) T_d(z₁) ) |,  (2.18)

in the special case that (z₁, z₂) = (x, y) ∈ R × R. Therefore, the function U^D_n(a; x) is the Green potential for the discrete measure Σ_{i=1}^{n} δ_{a_i}, where δ_{a_i} is the Dirac measure centered at a_i. Because some fundamental results about the Green potential can be used as good references to deal with Problem (D), we describe them below in Section 2.3.

2.3 Fundamentals of potential theory

For a positive integer n, let M(R, n) be the set of all Borel measures μ on R with μ(R) = n, and let M_c(R, n) be the set of measures μ ∈ M(R, n) with a compact support. In particular, for a sequence a ∈ R_n, the discrete measure Σ_{i=1}^{n} δ_{a_i} belongs to M_c(R, n). For μ ∈ M(R, n), we define the potential U^C_n(μ; x) and the energy I^C_n(μ) by

  U^C_n(μ; x) = ∫_R K(x − y) dμ(y),  (2.19)
  I^C_n(μ) = ∫_R ∫_R K(x − y) dμ(y) dμ(x) + 2 ∫_R Q(x) dμ(x),  (2.20)

respectively. According to (2.13) and (2.18), these are the Green potential and energy in the case that the domain of the Green function is D_d and that of the external field Q is R. By using standard techniques in potential theory, we can show the following fundamental theorems.

Theorem 2.2.
On Assumptions 1 and 2, the following hold true:

1. The energy I^C_n(μ) has a unique minimizer μ*_n ∈ M(R, n) with I^C_n(μ*_n) < ∞. Moreover, μ*_n has finite energy:

  ∫_R U^C_n(μ*_n; x) dμ*_n(x) < ∞.

2. The support supp μ*_n is a compact subset of R, i.e., μ*_n ∈ M_c(R, n). More precisely, supp μ*_n ⊂ { x ∈ R | Q(x) ≤ N_n } holds true for some N_n.

3. Let the constant F^C_{K,Q}(n) be defined by

  F^C_{K,Q}(n) = I^C_n(μ*_n) − ∫_R Q(x) dμ*_n(x).  (2.21)

Then, we have

  U^C_n(μ*_n; x) + Q(x) ≥ F^C_{K,Q}(n)/n  for all x ∈ R,  (2.22)
  U^C_n(μ*_n; x) + Q(x) = F^C_{K,Q}(n)/n  for all x ∈ supp μ*_n.  (2.23)

Proof.
This theorem is a specialized version of Theorems 2.1 and 2.2 in [7]. In fact, if we set G = D_d, E = R, and Q(x) = −log w(x), then Q is admissible on R and the assumptions of these theorems are satisfied. In particular, the assertion lim_{x→±∞, x∈R} Q(x) = ∞ holds true owing to Assumption 1. Therefore, the proof of this theorem is straightforward and omitted here.

Theorem 2.3.
On Assumptions 1 and 2, for any μ ∈ M_c(R, n), there exists x ∈ R such that

  U^C_n(μ; x) + Q(x) ≤ F^C_{K,Q}(n)/n.  (2.24)

Proof. Because this theorem is an analogue of the first half of Theorem I.3.1 in [8], the proof is basically similar to that of the same theorem. Suppose that U^C_n(μ; x) + Q(x) ≥ L holds for some L and any x ∈ R. Then, by (2.23) in Theorem 2.2, we have

  U^C_n(μ; x) − U^C_n(μ*_n; x) ≥ L − F^C_{K,Q}(n)/n, i.e., U^C_n(μ; x) ≥ U^C_n(μ*_n; x) + L − F^C_{K,Q}(n)/n  (2.25)

for any x ∈ supp μ*_n. Then, by the principle of domination, inequality (2.25) holds for all z ∈ D_d. By letting z → z₀ ∈ ∂D_d, on which the Green potentials vanish, we have F^C_{K,Q}(n)/n ≥ L. Therefore, there exists x ∈ R such that U^C_n(μ; x) + Q(x) ≤ F^C_{K,Q}(n)/n, which proves the theorem.

Then, according to inequalities (2.22) and (2.24), we can obtain the following theorem.

Theorem 2.4.
On Assumptions 1 and 2, the minimizer μ*_n of I^C_n yields a solution of the optimization problem

  (C) maximize inf_{x ∈ R} ( U^C_n(μ; x) + Q(x) ) subject to μ ∈ M_c(R, n).  (2.26)

Proof.
This is a direct consequence of Theorems 2.2 and 2.3.
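For concreteness, the kernel K in (2.13), the external field Q in (2.14), and the discrete Green potential U^D_n in (2.16) can be evaluated numerically. A minimal sketch, assuming the illustrative choices d = π/4 (so that K(x) = −log|tanh x|) and the Gaussian weight w(x) = exp(−x²) (so that Q(x) = x²); neither choice is prescribed by the text:

```python
import numpy as np

D_STRIP = np.pi / 4.0  # half-width d of the strip D_d (illustrative choice)

def K(x):
    """Green kernel K(x) = -log|tanh(pi x/(4d))| of (2.13); blows up as x -> 0."""
    return -np.log(np.abs(np.tanh(np.pi * np.asarray(x, dtype=float) / (4.0 * D_STRIP))))

def Q(x):
    """External field Q(x) = -log w(x) for the Gaussian weight w(x) = exp(-x^2)."""
    return np.asarray(x, dtype=float) ** 2

def U_discrete(a, x):
    """Discrete Green potential U_n^D(a; x) = sum_i K(x - a_i) of (2.16)."""
    a = np.asarray(a, dtype=float)
    x = np.asarray(x, dtype=float)
    return np.sum(K(x[..., None] - a), axis=-1)
```

As expected from the definitions, K is even, positive, and decreasing away from the origin, and U^D_n is simply a superposition of shifted copies of K.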
3 Discrete energy minimization

Our ideal goal is finding an optimal solution a† ∈ R_n of Problem (D) defined in (2.17) and proposing an optimal interpolation formula L_n[a†; f]. However, it is difficult to solve Problem (D) directly. Therefore, with a view to a discrete analogue of Theorem 2.4, we define the discrete energy I^D_n(a) as

  I^D_n(a) = Σ_{i=1}^{n} Σ_{j=1, j≠i}^{n} K(a_i − a_j) + (2(n−1)/n) Σ_{i=1}^{n} Q(a_i)  (3.1)

for a = (a₁, …, a_n) ∈ R_n, and consider its minimization. In this section, we show that I^D_n is easily tractable owing to Assumptions 1 and 2, and that its minimizer is an approximate solution of Problem (D). First, we confirm the basic properties of K and Q.

Proposition 3.1.
The function K defined by (2.13) is positive, even, and convex as a function on R ∖ {0}. Furthermore, it satisfies lim_{x→±0} K(x) = ∞.

Proposition 3.2. On Assumptions 1 and 2, the function Q defined by (2.14) is twice differentiable and strictly convex on R. Therefore, we have Q''(x) > 0 for all x ∈ R.

Because these propositions can be easily proved, we omit their proofs. Next, we show the solvability of the minimization of I^D_n.

Theorem 3.3.
On Assumptions 1 and 2, the energy I^D_n is convex on R_n, and there is a unique minimizer of I^D_n in R_n.

Proof.
Let H_n(a) be the Hessian of I^D_n at a ∈ R_n. First, we show that H_n(a) is positive definite for any a ∈ R_n. Because we have

  ∂I^D_n(a)/∂a_ℓ = 2 Σ_{j=1, j≠ℓ}^{n} K'(a_ℓ − a_j) + (2(n−1)/n) Q'(a_ℓ),  (3.2)

the (k, ℓ)-component of H_n(a) is given by

  ∂²I^D_n(a)/(∂a_k ∂a_ℓ) =
    2 Σ_{j=1, j≠ℓ}^{n} K''(a_ℓ − a_j) + (2(n−1)/n) Q''(a_ℓ)  (k = ℓ),
    −2 K''(a_ℓ − a_k)  (k ≠ ℓ).  (3.3)

Because K is convex and Q is strictly convex, the diagonal components of H_n(a) are positive. Furthermore, H_n(a) is strictly diagonally dominant because

  Σ_{k=1, k≠ℓ}^{n} | −2 K''(a_ℓ − a_k) | = 2 Σ_{k=1, k≠ℓ}^{n} K''(a_ℓ − a_k) < 2 Σ_{k=1, k≠ℓ}^{n} K''(a_ℓ − a_k) + (2(n−1)/n) Q''(a_ℓ).

Therefore H_n(a) is positive definite [6, Corollary 7.2.3], which implies that I^D_n is a strictly convex function on R_n.

Next, we show the existence of a unique minimizer of I^D_n in R_n. Because Q(x) → ∞ as x → ±∞, there exists r_n > 0 such that

  |x| > r_n ⇒ (2(n−1)/n) Q(x) > I^D_n(1, 2, …, n).

Then, for (a₁, …, a_n) ∈ R_n with max{|a₁|, |a_n|} > r_n, we have

  I^D_n(a₁, …, a_n) > (2(n−1)/n) max{ Q(a₁), Q(a_n) } > I^D_n(1, 2, …, n).

Therefore, it suffices to consider the minimization of I^D_n(a) in the bounded set R̃_n = { (a₁, …, a_n) ∈ R_n | −r_n ≤ a₁ < ⋯ < a_n ≤ r_n }. This minimization is equivalent to the maximization of exp(−I^D_n(a)) on R̃_n. Because

  lim_{a_i → a_j, (a₁,…,a_n) ∈ R̃_n} exp(−I^D_n(a)) = 0  (j = i − 1 or j = i + 1),

the function J^D_n(a) defined by

  J^D_n(a) = exp(−I^D_n(a))  (a ∈ R̃_n),  J^D_n(a) = 0  (a ∈ cl(R̃_n) ∖ R̃_n)

is a continuous function on cl(R̃_n), where “cl” denotes the closure of a set. Therefore, there exists a maximizer of J^D_n(a) in cl(R̃_n). Actually, any maximizer is in R̃_n because the maximum value is positive. Hence the minimizer of I^D_n(a) exists in R̃_n, and it is unique because I^D_n(a) is strictly convex.

Let a* = (a*₁, …, a*_n) ∈ R_n be the minimizer of I^D_n, and let F^D_{K,Q}(n) be the number defined by

  F^D_{K,Q}(n) = I^D_n(a*) − ((n−1)/n) Σ_{i=1}^{n} Q(a*_i),  (3.4)

which is a discrete analogue of F^C_{K,Q}(n) in (2.21). Then, we can show a discrete analogue of inequality (2.22), which indicates that a* is an approximate solution of Problem (D).

Theorem 3.4.
Let a* ∈ R_n be the minimizer of I^D_n. On Assumptions 1 and 2, we have

  U^D_n(a*; x) + Q(x) ≥ F^D_{K,Q}(n)/(n − 1)  for all x ∈ R.  (3.5)

Proof.
First, we show that

  Σ_{j=1, j≠k}^{n} K(x − a*_j) + ((n−1)/n) Q(x) ≥ Σ_{j=1, j≠k}^{n} K(a*_k − a*_j) + ((n−1)/n) Q(a*_k)  (3.6)

for any x ∈ R and k = 1, …, n. Suppose that inequality (3.6) does not hold for some x and k:

  Σ_{j=1, j≠k}^{n} K(x − a*_j) + ((n−1)/n) Q(x) < Σ_{j=1, j≠k}^{n} K(a*_k − a*_j) + ((n−1)/n) Q(a*_k).  (3.7)

Then, by multiplying both sides of (3.7) by 2 and adding

  Σ_{i=1, i≠k}^{n} Σ_{j=1, j≠i,k}^{n} K(a*_i − a*_j) + (2(n−1)/n) Σ_{i=1, i≠k}^{n} Q(a*_i)

to them, we have

  I^D_n(b) = 2 Σ_{j=1, j≠k}^{n} K(x − a*_j) + Σ_{i=1, i≠k}^{n} Σ_{j=1, j≠i,k}^{n} K(a*_i − a*_j) + (2(n−1)/n) ( Q(x) + Σ_{i=1, i≠k}^{n} Q(a*_i) ) < I^D_n(a*),

where b = (b₁, …, b_n) ∈ R_n is the n-point configuration obtained by sorting (a*₁, …, a*_{k−1}, x, a*_{k+1}, …, a*_n). This contradicts the minimality of a*; thus we have inequality (3.6) by contradiction.

Then, summing up both sides of inequality (3.6) for k = 1, …, n, we have

  Σ_{k=1}^{n} ( Σ_{j=1}^{n} K(x − a*_j) − K(x − a*_k) ) + (n − 1) Q(x) ≥ I^D_n(a*) − ((n−1)/n) Σ_{i=1}^{n} Q(a*_i)

  ⇒ (n − 1) ( Σ_{j=1}^{n} K(x − a*_j) + Q(x) ) ≥ F^D_{K,Q}(n),

which is equivalent to inequality (3.5).

Let P_n be the optimal value of Problem (D) in (2.17):

  P_n = sup_{a ∈ R_n} ( inf_{x ∈ R} ( U^D_n(a; x) + Q(x) ) ).  (3.8)

Then, by using Theorems 2.3 and 3.4, we can obtain lower and upper bounds of P_n.

Theorem 3.5.
On Assumptions 1 and 2, we have

  F^D_{K,Q}(n)/(n − 1) ≤ P_n ≤ F^C_{K,Q}(n)/n,  (3.9)

which implies that the minimizer a* ∈ R_n of I^D_n is an approximate solution of Problem (D) in (2.17), whose approximation rate is bounded by F^D_{K,Q}(n)/F^C_{K,Q}(n).

Proof.
By Theorem 3.4, we can obtain the lower bound as follows:

  P_n ≥ inf_{x ∈ R} ( U^D_n(a*; x) + Q(x) ) ≥ F^D_{K,Q}(n)/(n − 1).  (3.10)

On the other hand, we can provide the upper bound by Theorem 2.3. In fact, for any a ∈ R_n, there exists x ∈ R such that

  U^D_n(a; x) + Q(x) ≤ F^C_{K,Q}(n)/n  (3.11)

because we can consider the special case μ = Σ_{i=1}^{n} δ_{a_i} ∈ M_c(R, n) in Theorem 2.3. Then, we have

  P_n ≤ F^C_{K,Q}(n)/n.  (3.12)

4 Approximation formula

By using the minimizer a* ∈ R_n of I^D_n, we propose the approximation formula L_n[a*; f] for f ∈ H^∞(D_d, w), where L_n[a; f] is defined by (2.7). That is, L_n[a*; f] is written in the form

  L_n[a*; f](x) = Σ_{k=1}^{n} f(a*_k) · ( B_{n;k}(x; a*, D_d) w(x) ) / ( B_{n;k}(a*_k; a*, D_d) w(a*_k) ) · T'_d(x − a*_k) / T'_d(0).  (4.1)

We can provide an error estimate of this formula.

Theorem 4.1. Let a* ∈ R_n be the minimizer of the discrete energy I^D_n, and let L_n[a*; f] be the approximation formula for f ∈ H^∞(D_d, w) given by (4.1). On Assumptions 1 and 2, we have

  sup_{f ∈ H^∞(D_d,w), ‖f‖≤1} ( sup_{x ∈ R} | f(x) − L_n[a*; f](x) | ) ≤ exp( −F^D_{K,Q}(n)/(n − 1) ).  (4.2)

Proof.
From inequality (2.9) in Proposition 2.1 and Theorem 3.4, we have

  sup_{f ∈ H^∞(D_d,w), ‖f‖≤1} ( sup_{x ∈ R} | f(x) − L_n[a*; f](x) | ) ≤ sup_{x ∈ R} | B_n(x; a*, D_d) w(x) |
   = sup_{x ∈ R} exp( −U^D_n(a*; x) − Q(x) ) ≤ exp( −F^D_{K,Q}(n)/(n − 1) ).
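Formula (4.1) can be evaluated directly from the definitions (2.4)–(2.7), using the identity (T_d(x) − T_d(a_j))/(1 − T_d(a_j)T_d(x)) = T_d(x − a_j) for real a_j. The sketch below is illustrative only: it assumes d = π/4, the weight w(x) = sech(2x), and equispaced sampling points rather than the true minimizer a*.

```python
import numpy as np

D_STRIP = np.pi / 4.0  # illustrative strip half-width d, so T_d(x) = tanh(x)

def T(x):
    """Conformal map T_d(x) = tanh(pi x / (4d)) of (2.4)."""
    return np.tanh(np.pi * x / (4.0 * D_STRIP))

def Tprime(x):
    """Derivative T_d'(x) = (pi/(4d)) / cosh^2(pi x / (4d))."""
    c = np.pi / (4.0 * D_STRIP)
    return c / np.cosh(c * x) ** 2

def interp(f, w, a):
    """n-point interpolation formula L_n[a; f] of (4.1)/(2.7) for real nodes a."""
    a = np.asarray(a, dtype=float)
    fa, wa = f(a), w(a)
    def B_k(x, k):  # B_{n;k}(x; a, D_d) of (2.6), reduced to prod_{j != k} T_d(x - a_j)
        prod = np.ones_like(np.asarray(x, dtype=float))
        for j in range(len(a)):
            if j != k:
                prod = prod * T(x - a[j])
        return prod
    def L(x):
        x = np.asarray(x, dtype=float)
        s = np.zeros_like(x)
        for k in range(len(a)):
            s = s + fa[k] * (B_k(x, k) * w(x)) / (B_k(a[k], k) * wa[k]) \
                  * Tprime(x - a[k]) / Tprime(0.0)
        return s
    return L

# Illustrative use: f = w = sech(2x) with equispaced trial points (not the minimizer a*).
w = lambda x: 1.0 / np.cosh(2.0 * x)
a = np.linspace(-2.0, 2.0, 9)
L = interp(w, w, a)
```

At each node a_k the k-th term contributes exactly f(a_k) while all other terms vanish, so the interpolation property L(a_k) = f(a_k) holds by construction.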
Remark 4.1. From the inequalities in Theorem 3.5, we have

  exp( −F^C_{K,Q}(n)/n ) ≤ E^min_n(H^∞(D_d, w)) ≤ exp( −F^D_{K,Q}(n)/(n − 1) ).  (4.3)

Therefore, we can regard the proposed formula in (4.1) as nearly optimal if the exponents F^C_{K,Q}(n)/n and F^D_{K,Q}(n)/(n − 1) are sufficiently close. However, we have not found their exact orders. Their precise estimates will be considered in future work. As a preliminary attempt at such an estimate, we provide an upper bound of the difference F^C_{K,Q}(n) − F^D_{K,Q}(n) by using the separation distance h_{a*} given by (A.3) in Appendix A.

We can obtain some alternative forms of formula (4.1) to reduce its computational cost. As such alternatives, we derive analogues of the barycentric formulas for the Lagrange interpolation [1, 21]. Because they are categorized into two types, we derive two analogues. As shown below, the second one is derived only approximately.

We begin with the first type. By letting

  λ*_k = 1 / B_{n;k}(a*_k; a*, D_d) = 1 / ∏_{j≠k} T_d(a*_k − a*_j)  (k = 1, …, n),  (4.4)

we have

  L_n[a*; f](x) = w(x) Σ_{k=1}^{n} ( f(a*_k)/w(a*_k) ) λ*_k B_{n;k}(x; a*, D_d) T'_d(x − a*_k) / T'_d(0)
   = w(x) Σ_{k=1}^{n} ( f(a*_k)/w(a*_k) ) λ*_k ( ∏_{j=1}^{n} T_d(x − a*_j) ) T'_d(x − a*_k) / ( T'_d(0) T_d(x − a*_k) )
   = w(x) B_n(x; a*, D_d) Σ_{k=1}^{n} ( λ*_k / S_d(x − a*_k) ) ( f(a*_k)/w(a*_k) ),  (4.5)

where

  S_d(x) = T'_d(0) T_d(x) / T'_d(x) = (1/2) sinh( (π/(2d)) x ).  (4.6)

Then, we regard formula (4.5) as the analogue of the first barycentric formula.

Next, we consider the second type. By letting f = w in (4.5), we have

  L_n[a*; w](x) = w(x) B_n(x; a*, D_d) Σ_{k=1}^{n} λ*_k / S_d(x − a*_k).
Then, by noting that L_n[a*; w](x) is an approximation of w(x), we have

  w(x) B_n(x; a*, D_d) = L_n[a*; w](x) / Σ_{k=1}^{n} ( λ*_k / S_d(x − a*_k) ) ≈ w(x) / Σ_{k=1}^{n} ( λ*_k / S_d(x − a*_k) ).  (4.7)

Therefore, by replacing the factor w(x) B_n(x; a*, D_d) in formula (4.5) with the RHS of (4.7), we obtain the approximate form of the formula as follows:

  L_n[a*; f](x) ≈ w(x) ( Σ_{k=1}^{n} ( λ*_k / S_d(x − a*_k) ) f(a*_k)/w(a*_k) ) / ( Σ_{j=1}^{n} λ*_j / S_d(x − a*_j) ).  (4.8)

We denote the RHS of (4.8) by L̃_n[a*; f](x). Then, we regard this formula as the analogue of the second barycentric formula.

5 Numerical experiments

We compute a numerical approximation of the minimizer a* of I^D_n by Newton's method, shown in Figure 1. Recall that H_n(a) is the Hessian of I^D_n at a. Let ã* = (ã*₁, …, ã*_n) ∈ R_n denote the output of this algorithm. Then, in order to approximate f(x), we use the barycentric formulas in (4.5) and (4.8). Recall that their explicit forms are given by

  (I) L_n[ã*; f](x) = w(x) ( ∏_{j=1}^{n} tanh( (π/(4d)) (x − ã*_j) ) ) Σ_{k=1}^{n} ( λ̃*_k / sinh( (π/(2d)) (x − ã*_k) ) ) f(ã*_k)/w(ã*_k),  (5.1)

  (II) L̃_n[ã*; f](x) = w(x) ( Σ_{k=1}^{n} ( λ̃*_k / sinh( (π/(2d)) (x − ã*_k) ) ) f(ã*_k)/w(ã*_k) ) / ( Σ_{j=1}^{n} λ̃*_j / sinh( (π/(2d)) (x − ã*_j) ) ),  (5.2)

where

  λ̃*_k = 2 ∏_{j≠k} [ tanh( (π/(4d)) (ã*_k − ã*_j) ) ]^{−1}  (k = 1, …, n).  (5.3)

We used Matlab R2016b programs for the computations presented in this section. The sampling points and approximations were computed by the programs with double-precision floating-point numbers and multi-precision numbers with 75 digits, respectively. For the multi-precision numbers, we used the Multiprecision Computing Toolbox for Matlab, produced by Advanpix (last accessed on December 18, 2017).
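The Newton iteration of Figure 1 needs only the gradient (3.2) and the Hessian (3.3) of I^D_n. The sketch below is a minimal Python version; the choices d = π/4, the Gaussian weight w(x) = exp(−x²) (so Q'(x) = 2x, Q''(x) = 2), n = 7, the starting configuration, and the step-halving safeguard that keeps the points ordered are all assumptions of this demo, not taken from the paper.

```python
import numpy as np

C = 1.0  # pi/(4d) for the illustrative choice d = pi/4, so K(x) = -log|tanh x|

def Kp(x):   # K'(x)  = -2c / sinh(2cx)
    return -2.0 * C / np.sinh(2.0 * C * x)

def Kpp(x):  # K''(x) = 4c^2 cosh(2cx) / sinh^2(2cx) > 0
    return 4.0 * C**2 * np.cosh(2.0 * C * x) / np.sinh(2.0 * C * x) ** 2

def Qp(x):   # Q'(x) = 2x for the Gaussian weight w(x) = exp(-x^2)
    return 2.0 * x

QPP = 2.0    # Q''(x) = 2 (constant)

def grad_hess(a):
    """Gradient (3.2) and Hessian (3.3) of the discrete energy I_n^D."""
    n = len(a)
    diff = a[:, None] - a[None, :]
    np.fill_diagonal(diff, 1.0)  # dummy value; diagonal entries are excluded below
    Kp_off, Kpp_off = Kp(diff), Kpp(diff)
    np.fill_diagonal(Kp_off, 0.0)
    np.fill_diagonal(Kpp_off, 0.0)
    g = 2.0 * Kp_off.sum(axis=1) + (2.0 * (n - 1) / n) * Qp(a)
    H = -2.0 * Kpp_off
    np.fill_diagonal(H, 2.0 * Kpp_off.sum(axis=1) + (2.0 * (n - 1) / n) * QPP)
    return g, H

def newton(a, tol=1e-10, max_iter=50):
    """Newton's method of Figure 1, with step halving to keep a strictly increasing."""
    a = np.array(a, dtype=float)
    for _ in range(max_iter):
        g, H = grad_hess(a)
        delta = np.linalg.solve(H, -g)
        t = 1.0
        while np.any(np.diff(a + t * delta) <= 0):  # safeguard (an addition to Figure 1)
            t *= 0.5
        a += t * delta
        if np.max(np.abs(t * delta)) < tol:
            break
    return a

a_star = newton(np.linspace(-2.0, 2.0, 7))
```

Because I^D_n is strictly convex with a strictly diagonally dominant Hessian (Theorem 3.3), the iteration converges rapidly from a reasonable starting configuration; for an even weight the computed points come out symmetric about the origin.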
These programs used for the computations are available on the web page [15].

  Initialize a ∈ R_n
  do
    δ := −(H_n(a))^{−1} ∇I^D_n(a)
    a := a + δ
    ε := max{ |δ₁|, …, |δ_n| }
  while ε ≥ tol
  Output a

Figure 1: Newton's method for finding the minimizer of I^D_n (tol is a prescribed small tolerance).

5.1 Comparison with the formula in [17]

We begin with the comparison of Formula (I) with the previous formula in [17]. Because their difference lies in the method for generating the sampling points, we compare the accuracy of the formulas L_n[ã*; f] and L_n[ã^old; f], where ã^old = {ã^old_j} is the set of sampling points generated by the method in [17]. To this end, we choose the functions and weights listed in Table 1, which are the same as those used in [17]. Each weight w_i satisfies Assumptions 1 and 2 for d = π/4 − ε with 0 < ε ≪ 1, and each function f_i satisfies f_i ∈ H^∞(D_{π/4−ε}, w_i) for the corresponding weight w_i. In the following, we fix ε to a small negative power of 10.

For computing the errors of the formulas, we choose 1001 evaluation points x_ℓ ⊂ R and adopt the value max_ℓ | f(x_ℓ) − (the value of the approximant at x_ℓ) | as the error. First we find a value x₁ ≤ 0 at which w_i(x₁) falls below a prescribed small threshold, and then determine the points x_ℓ by

  x₁₀₀₁ = −x₁ and x_ℓ = x₁ + ( (x₁₀₀₁ − x₁)/1000 ) (ℓ − 1)  (ℓ = 2, …, 1001).  (5.4)

We adopt x₁ = −10 for f₂, with corresponding choices for f₁ and f₃.

First, we present the computed sampling points ã* and the functions U^D_n(ã*; x) + Q(x) in Figures 2–4. Next, we present the computed errors in Figures 5–7. From these results, we can observe that the computed sampling points are very close to those obtained by the previous method, and the proposed formulas are competitive with the previous formulas.

Remark 5.1. The functions f₁ and f₃ in Table 1 can be derived from the function

  g(t) = 1/√(1 + t²)  (5.5)

by the variable transformations t = sinh(2x) and t = sinh((π/2) sinh(2x)), respectively. Therefore, the approximations of f₁ and f₃ by the proposed method can be regarded as approximations of g in (5.5) on the entire real line R via these transformations. Thus, the proposed method can provide formulas for approximating functions such as g with algebraic decay on R, which are discussed in [2].

Table 1: Functions f_i and weights w_i (f_i ∈ H^∞(D_{π/4−ε}, w_i)).

  1. f₁(x) = sech(2x),  w₁(x) = sech(2x)
  2. f₂(x) = x/((π/4)² + x²) · e^{−x²},  w₂(x) = e^{−x²}
  3. f₃(x) = sech((π/2) sinh(2x)),  w₃(x) = sech((π/2) sinh(2x))

5.2 Comparison with the sinc approximation

Next, we compare Formulas (I) and (II) and the sinc approximation combined with a variable transformation. We consider the function g given by

  g(t) = √( (1 − t)(1 + t)³ )  (5.6)

and its approximation for t ∈ (−1, 1). The function g has singularities at the endpoints t = ±
1. In order to mitigate the difficulty in approximating g at their neighborhoods, weemploy the variable transformations given by ψ and ψ in (1.4) and (1.5), respectively. Then,we consider the approximations of the transformed functions f ( x ) = g ( ψ ( x )) = w ( x ) (cid:18) (cid:16) x (cid:17) (cid:19) , (5.7) f ( x ) = g ( ψ ( x )) = w ( x ) (cid:18) (cid:16) π x (cid:17) (cid:19) (5.8)for x ∈ R , where w ( x ) = sech (cid:16) x (cid:17) , (5.9) w ( x ) = sech (cid:16) π x (cid:17) . (5.10)By letting d = π − ε, d = π − ε (5.11)14 ocations of sampling points -10 -5 0 5 10 I nd i ce s Sampling points for weight 1 (n = 101) (a) Sampling points { ˜ a ∗ j } x -15 -10 -5 0 5 10 15 F un c t i o n va l u e s Function U nD (x) + Q(x) for weight 1 (n = 101) Function F K,QD (n)/n (b) Function U D n (˜ a ∗ ; x ) + Q ( x ) x -15 -10 -5 0 5 10 15 F un c t i o n va l u e s "Old" function U nD (x)+Q(x) for the weight 1 (n = 101) "Old" function F K,QD (n)/n (c) Function U D n (˜ b ∗ ; x ) + Q ( x ) for thesampling points { ˜ b ∗ j } by the method in [17]Figure 2: Results for the sampling points for weight 1 ( w ) in Table 1 and n = 101.with 0 < ε ≪
1, for i = 4 ,
5, we can confirm that the weight function w i satisfies Assumptions 1and 2 for d = d i . Furthermore, the assertion f i ∈ H ∞ ( D d i , w i ) holds true for i = 4 ,
5. In thefollowing, we adopt ε = 10 − .For the functions f and f , we compare Formulas (I), (II), and the sinc interpolationformula (1.3): f ( x ) ≈ N + X k = − N − f ( kh ) sinc( x/h − k ) . (5.12)We need to determine the parameters N ± and h in this formula. Since the weights w i areeven, we consider odd n as the numbers of the sampling points and set N − = N + = ( n − / h > − πd i /h )) ( h →
0) and the15 ocations of sampling points -5 0 5 I nd i ce s Sampling points for weight 2 (n = 101) (a) Sampling points { ˜ a ∗ j } x -8 -6 -4 -2 0 2 4 6 8 F un c t i o n va l u e s Function U nD (x) + Q(x) for weight 2 (n = 101) Function F K,QD (n)/n (b) Function U D n (˜ a ∗ ; x ) + Q ( x ). x -8 -6 -4 -2 0 2 4 6 8 F un c t i o n va l u e s "Old" function U nD (x)+Q(x) for the weight 2 (n = 101) "Old" function F K,QD (n)/n (c) Function U D n (˜ b ∗ ; x ) + Q ( x ) for thesampling points { ˜ b ∗ j } by the method in [17]Figure 3: Results for the sampling points for weight 2 ( w ) in Table 1 and n = 101.latter is estimated depending on the weight w i as follows: X | k | > ( n − / | f i ( kh ) | = ( O(exp( − nh/ i = 4) , O(exp( − ( π/
4) exp( nh/ i = 5) , ( n → ∞ ) . Then, we set h = ( (4 π ( π − ε ) /n ) / ( i = 4) , n − log(( π − ε ) n ) ( i = 5) . We choose the evaluation points x ℓ in a similar manner to that of Section 5.1. We find avalue of x ≤ w i ( x ) ≤ ( − ( i = 4) , − ( i = 5) , ocations of sampling points -2 -1.5 -1 -0.5 0 0.5 1 1.5 2 I nd i ce s Sampling points for weight 3 (n = 101) (a) Sampling points { ˜ a ∗ j } x -3 -2 -1 0 1 2 3 F un c t i o n va l u e s Function U nD (x) + Q(x) for weight 3 (n = 101) Function F K,QD (n)/n (b) Function U D n (˜ a ∗ ; x ) + Q ( x ) x -3 -2 -1 0 1 2 3 F un c t i o n va l u e s "Old" function U nD (x)+Q(x) for the weight 3 (n = 101) "Old" function F K,QD (n)/n (c) Function U D n (˜ b ∗ ; x ) + Q ( x ) for thesampling points { ˜ b ∗ j } by the method in [17]Figure 4: Results for the sampling points for weight 3 ( w ) in Table 1 and n = 101.and determine the points x ℓ by x = − x and (5.4). We adopt x = −
100 and − for f_4 and f_5, respectively.

We show the errors of Formulas (I), (II) and the sinc formula for f_4 and f_5 in Figures 8 and 9, respectively. We can observe that Formulas (I) and (II) have approximately the same accuracy and that they outperform the sinc formula in each case.

Finally, we approximate functions with uneven weights, to which the method in [17] cannot be applied. To this end, we consider the function g given by

    g(t) = (1 − t)^{1/2} (1 + t)^{3/2} / (1 + t²)   (5.13)
Figure 5: Errors for function 1 (f_1) in Table 1. “Old” formula refers to that in [17].
Figure 6: Errors for function 2 (f_2) in Table 1. “Old” formula refers to that in [17].
Figure 7: Errors for function 3 (f_3) in Table 1. “Old” formula refers to that in [17].

and its approximation for t ∈ (−1, 1). The transformed functions are

    f_6(x) = g(tanh(x/2)) = w_6(x) · (4/(1 + tanh²(x/2))),   (5.14)
    f_7(x) = g(tanh((π/2) sinh x)) = w_7(x) · (4/(1 + tanh²((π/2) sinh x)))   (5.15)

for x ∈ R, where

    w_6(x) = 1/((1 + e^x)^{1/2} (1 + e^{−x})^{3/2}),   (5.16)
    w_7(x) = 1/((1 + e^{π sinh x})^{1/2} (1 + e^{−π sinh x})^{3/2}).   (5.17)
Figure 8: Errors for function 4 (f_4).
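For reference, the benchmark sinc interpolation (5.12) compared in these figures takes only a few lines to implement. The following is a minimal sketch; the test function f(x) = sech(x), the strip half-width d = π/2, and the mesh choice h = (πd/N)^{1/2} (a standard balancing for a single-exponentially decaying function) are illustrative assumptions of this sketch, not the settings of Table 1.

```python
import numpy as np

def sinc_interp(f, h, n_minus, n_plus):
    """Sinc interpolant x -> sum_{k=-N_-}^{N_+} f(kh) sinc(x/h - k), cf. (5.12)."""
    k = np.arange(-n_minus, n_plus + 1)
    fk = f(k * h)

    def approx(x):
        x = np.asarray(x, dtype=float)
        # np.sinc(t) = sin(pi t) / (pi t), which matches sinc(x/h - k)
        return np.sum(fk * np.sinc(x[..., None] / h - k), axis=-1)

    return approx

# Illustrative example: f(x) = sech(x) is analytic on |Im z| < pi/2
# and decays single-exponentially on the real axis.
f = lambda x: 1.0 / np.cosh(x)
N = 16                         # N_- = N_+ = N, i.e. n = 2N + 1 sampling points
d = np.pi / 2
h = np.sqrt(np.pi * d / N)     # balances O(e^{-pi d/h}) against O(e^{-c N h})
F = sinc_interp(f, h, N, N)

x = np.linspace(-5.0, 5.0, 201)
max_err = np.max(np.abs(F(x) - f(x)))   # small for this smooth, decaying f
```

With these parameters the maximum error on [−5, 5] is already quite small; the proposed formulas are designed to improve on this benchmark.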
Figure 9: Errors for function 5 (f_5).

By letting

    d_6 = π − ε,   d_7 = π/2 − ε   (5.18)

with 0 < ε ≪
1, for i = 6 ,
7, we can confirm that the weight function w_i satisfies Assumptions 1 and 2 for d = d_i. Furthermore, the assertion f_i ∈ H^∞(D_{d_i}, w_i) holds true for i = 6,
7. In the following, we adopt ε = 10^{−}.

For the functions f_6 and f_7, we also compare Formulas (I), (II) and the sinc interpolation formula in (5.12). In these cases, we need to take the unevenness of the weights into account in determining the parameters N_± and h in (5.12). Since

    w_6(x) = O(e^{(3/2)x})    (x → −∞),      w_6(x) = O(e^{−(1/2)x})   (x → +∞),
    w_7(x) = O(exp(−(3π/4) e^{−x}))   (x → −∞),      w_7(x) = O(exp(−(π/2) e^{x}))   (x → +∞),

we adopt

    h = (πd_6/n)^{1/2},   N_− = ⌊n/4⌋,   N_+ = −N_− + n − 1   for w_6,

and

    h = (2/n) log(d_7 n ⋯),   N_− = ⌊(n − 1)/2 − (1/(2h)) log(3/2)⌋,   N_+ = −N_− + n − 1   for w_7.

We choose the evaluation points x_ℓ in a similar manner to that of Section 5.1. We find values of x_0 and x_n such that w_i(x_0), w_i(x_n) ≤ 10^{−} (i = 6), 10^{−} (i = 7), and determine the points x_ℓ by (5.4). We adopt (x_0, x_n) = (−
5) for the computations of f_6 and f_7, respectively.

We show the sampling points and the functions U_n^D(ã*; x) + Q(x) for these cases in Figures 10–13. Furthermore, we show the errors of Formulas (I), (II) and the sinc formula for f_6 and f_7 in Figures 14 and 15, respectively. We can observe that Formulas (I) and (II) have approximately the same accuracy and that they outperform the sinc formula in each case.
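The parameter choices above follow a general balancing principle for sinc interpolation with an unevenly decaying weight: split the n − 1 offsets so that the two one-sided truncation errors match, then balance them against the discretization error exp(−πd/h). A minimal sketch for single-exponential one-sided decay rates α_− (left) and α_+ (right); the concrete rates and the value of d below are illustrative assumptions, not the paper's derived constants.

```python
import math

def sinc_params_uneven(n, d, alpha_minus, alpha_plus):
    # Split n - 1 offsets so the one-sided truncation errors
    # exp(-alpha_- N_- h) and exp(-alpha_+ N_+ h) match:
    #   alpha_- N_- = alpha_+ N_+  with  N_- + N_+ = n - 1.
    n_minus = round(alpha_plus * (n - 1) / (alpha_minus + alpha_plus))
    n_plus = (n - 1) - n_minus
    # Balance the truncation error against the discretization error
    # exp(-pi d / h):  pi d / h = alpha_- N_- h.
    h = math.sqrt(math.pi * d * (alpha_minus + alpha_plus)
                  / (alpha_minus * alpha_plus * (n - 1)))
    return h, n_minus, n_plus

# Illustrative rates 3/2 (left) and 1/2 (right), and d = pi (assumptions)
h, N_minus, N_plus = sinc_params_uneven(101, math.pi, 1.5, 0.5)
```

For these rates the split is N_− : N_+ = 1 : 3, i.e. N_− = 25 and N_+ = 75 when n = 101, and h satisfies πd/h = α_− N_− h exactly.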
Figure 10: Sampling points for d_6 and w_6 (n = 101).
Figure 11: Function U_n^D(ã*; x) + Q(x) for d_6 and w_6 (n = 101), with the constant F^D_{K,Q}(n)/n.
Figure 12: Sampling points for d_7 and w_7 (n = 101).
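The sampling points shown in Figures 10 and 12 are obtained by minimizing the discrete energy I_n^D. As a minimal sketch of this step, the following uses the kernel K(x) = −log|tanh(πx/(4d))| (cf. (A.11)) together with an illustrative external field Q(x) = |x| (i.e., w(x) = e^{−|x|}) and a general-purpose optimizer; it is not the authors' implementation, which is the Matlab code of [15].

```python
import numpy as np
from scipy.optimize import minimize

d = np.pi / 2                    # strip half-width (illustrative assumption)
Q = lambda x: np.abs(x)          # external field Q = -log w for w(x) = e^{-|x|}

def K(x):
    # kernel K(x) = -log|tanh(pi x / (4 d))|
    return -np.log(np.abs(np.tanh(np.pi * x / (4 * d))))

def I_D(a):
    # discrete energy I_n^D(a) = sum_{i != j} K(a_i - a_j) + 2 sum_i Q(a_i)
    diffs = a[:, None] - a[None, :]
    iu = np.triu_indices(len(a), k=1)
    if np.any(diffs[iu] == 0.0):
        return np.inf            # the energy blows up at coincident points
    return 2.0 * np.sum(K(diffs[iu])) + 2.0 * np.sum(Q(a))

n = 11
a0 = np.linspace(-2.0, 2.0, n)   # distinct, ordered initial points
res = minimize(I_D, a0, method="Nelder-Mead",
               options={"maxiter": 50000, "maxfev": 50000,
                        "xatol": 1e-8, "fatol": 1e-10})
a_opt = np.sort(res.x)           # approximately optimal sampling points
```

Since I_n^D is strictly convex on the ordered region under Assumptions 1 and 2, a local descent from distinct starting points approaches the unique minimizer a*; in practice a dedicated convex-optimization routine is preferable to Nelder–Mead.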
Figure 13: Function U_n^D(ã*; x) + Q(x) for d_7 and w_7 (n = 101), with the constant F^D_{K,Q}(n)/n.

Figure 14: Errors for function 6 (f_6).

Figure 15: Errors for function 7 (f_7).

In this paper, we proposed a method for designing accurate approximation formulas for functions in the spaces H^∞(D_d, w) by minimizing the discrete energy I_n^D in (3.1) on R̃^n ⊂ R^n. Under Assumptions 1 and 2, we proved that I_n^D is strictly convex on R̃^n, and hence we showed that we can obtain the optimal solution a* ∈ R̃^n approximately by standard techniques in convex optimization. Then, by using a* as a set of sampling points, we designed the approximation formula L_n[a*; f] in (4.1) and gave an upper bound of its error for each space H^∞(D_d, w). The numerical experiments showed that the formula is accurate.

Major themes for future work are finding the precise orders of the errors of the proposed formulas with respect to n, and investigating whether or not the proposed formulas are asymptotically optimal. Other themes are their applications to various numerical methods, such as numerical integration and the solution of differential/integral equations. In fact, in the paper [18], we have considered the application of the previous formulas in [17] to numerical integration.

Funding
K. Tanaka is supported by a Grant-in-Aid of the Japan Society for the Promotion of Science (KAKENHI Grant Number 17K14241).
Acknowledgement
The authors would like to thank Dr. Kuan Xu for his valuable comments on Remark 5.1.
References

[1] J.-P. Berrut and L. N. Trefethen, Barycentric Lagrange interpolation, SIAM Rev. (2004), pp. 501–517.
[2] J. P. Boyd, Chebyshev and Fourier Spectral Methods, 2nd ed., Dover, New York, 2001.
[3] J. S. Brauchart and P. J. Grabner, Distributing many points on spheres: minimal energy and designs, J. Complexity (2015), pp. 293–326.
[4] P. L. Duren, Theory of H^p Spaces, Academic Press, London, 1970.
[5] S. Haber, The tanh rule for numerical integration, SIAM J. Numer. Anal., pp. 668–685.
[6] R. A. Horn and C. R. Johnson, Matrix Analysis, Cambridge University Press, 1990.
[7] A. L. Levin and D. S. Lubinsky, Green equilibrium measures and representations of an external field, J. Approx. Theory (2001), pp. 298–323.
[8] E. B. Saff and V. Totik, Logarithmic Potentials with External Fields, Springer, Berlin Heidelberg, 1997.
[9] C. Schwartz, Numerical integration of analytic functions, J. Comput. Phys. (1969), pp. 19–29.
[10] F. Stenger, Numerical Methods Based on Sinc and Analytic Functions, Springer, New York, 1993.
[11] F. Stenger, Handbook of Sinc Numerical Methods, CRC Press, Boca Raton, 2011.
[12] M. Sugihara, Near optimality of the sinc approximation, Math. Comp. (2003), pp. 767–786.
[13] H. Takahasi and M. Mori, Quadrature formulas obtained by variable transformation, Numer. Math. (1973), pp. 206–219.
[14] H. Takahasi and M. Mori, Double exponential formulas for numerical integration, Publ. RIMS Kyoto Univ. (1974), pp. 721–741.
[15] K. Tanaka, Matlab programs for design of approximation formulas by discrete energy minimization, https://github.com/KeTanakaN/mat_disc_ener_opt_approx (last accessed on January 6, 2018).
[16] K. Tanaka, T. Okayama, and M. Sugihara, An optimal approximation formula for functions with singularities, arXiv:1610.06844 (2016).
[17] K. Tanaka, T. Okayama, and M. Sugihara, Potential theoretic approach to design of accurate formulas for function approximation in symmetric weighted Hardy spaces, IMA Journal of Numerical Analysis (2017), pp.
861–904 (doi:10.1093/imanum/drw022).
[18] K. Tanaka, T. Okayama, and M. Sugihara, Potential theoretic approach to design of accurate numerical integration formulas in weighted Hardy spaces, in: G. E. Fasshauer and L. L. Schumaker (eds.), Approximation Theory XV: San Antonio 2016, Springer Proceedings in Mathematics & Statistics 201 (2017), pp. 335–360 (doi:10.1007/978-3-319-59912-0_17).
[19] K. Tanaka, M. Sugihara, and K. Murota, Function classes for successful DE-Sinc approximations, Math. Comp. (2009), pp. 1553–1571.
[20] K. Tanaka, M. Sugihara, K. Murota, and M. Mori, Function classes for double exponential integration formulas, Numer. Math. (2009), pp. 631–655.
[21] L. N. Trefethen, Approximation Theory and Approximation Practice, SIAM, Philadelphia, 2013.

A Estimate of the difference F^C_{K,Q}(n) − F^D_{K,Q}(n)

We provide an upper bound of the difference F^C_{K,Q}(n) − F^D_{K,Q}(n). More precisely, on the assumption that

    max_{1≤i≤n−1} |a*_{i+1} − a*_i| ≤ 1   (A.1)

for the minimizer a* ∈ R̃^n of I_n^D, we show that

    F^C_{K,Q}(n) − F^D_{K,Q}(n) ≤ −(3n + 1) log h_{a*} + C_n + e*_n(Q),   (A.2)

where

    h_{a*} = min_{1≤i≤n−1} |a*_{i+1} − a*_i|   (A.3)

is the separation distance of a*, the value C_n is independent of a* with C_n = O(n) (n → ∞), and e*_n(Q) is a sum of the differences of some integrals of Q determined by a*. A concrete expression of the upper bound is given by Proposition A.4 below. We prove it after showing several lemmas.

The first lemma shows that I_n^C(μ*_n) is monotonically increasing with respect to n.

Lemma A.1.
For an integer n ≥
1, the inequality I_n^C(μ*_n) ≤ I_{n+1}^C(μ*_{n+1}) holds true.

Proof.
Let η*_i be the copy of μ*_{n+1}/(n + 1) for i = 1, ..., n + 1. Then, for k = 1, ..., n + 1, we have

    I_{n+1}^C(μ*_{n+1}) = I_{n+1}^C( Σ_{i=1}^{n+1} η*_i )
      = Σ_{i=1}^{n+1} Σ_{j=1}^{n+1} ∫_R ∫_R K(x − y) dη*_i(x) dη*_j(y) + 2 Σ_{i=1}^{n+1} ∫_R Q(x) dη*_i(x)
      ≥ Σ_{i≠k} Σ_{j≠k} ∫_R ∫_R K(x − y) dη*_i(x) dη*_j(y) + 2 Σ_{i≠k} ∫_R Q(x) dη*_i(x)
          + Σ_{i=1}^{n+1} ∫_R ∫_R K(x − y) dη*_i(x) dη*_k(y) + 2 ∫_R Q(x) dη*_k(x)
      = I_n^C( Σ_{i≠k} η*_i ) + Σ_{i=1}^{n+1} ∫_R ∫_R K(x − y) dη*_i(x) dη*_k(y) + 2 ∫_R Q(x) dη*_k(x)
      ≥ I_n^C(μ*_n) + Σ_{i=1}^{n+1} ∫_R ∫_R K(x − y) dη*_i(x) dη*_k(y) + 2 ∫_R Q(x) dη*_k(x).

Then, by summing up both sides of this inequality for k = 1, ..., n + 1, we have

    (n + 1) I_{n+1}^C(μ*_{n+1}) ≥ (n + 1) I_n^C(μ*_n) + I_{n+1}^C(μ*_{n+1})
      ⇔ I_{n+1}^C(μ*_{n+1})/(n + 1) ≥ I_n^C(μ*_n)/n.

Hence I_n^C(μ*_n)/n is monotonically increasing, and so is I_n^C(μ*_n).

As an approximation of the measure Σ_{i=1}^n δ_{a_i} ∈ M_c(R, n) for a = (a_1, ..., a_n) ∈ R̃^n, we consider the Borel measure ν_â ∈ M_c(R, n + 1) defined by

    ν_â(Z) = Σ_{i=0}^{n} (1/(a_{i+1} − a_i)) ∫_{Z ∩ [a_i, a_{i+1}]} dy

for â = (a_0, a_1, ..., a_n, a_{n+1}) ∈ R̃^{n+2}. Then, we define

    S_i(â) = ∫_{a_{i−1}}^{a_{i+1}} K(a_i − y) dν_â(y),   (A.4)
    T_i(â) = ∫_{a_i}^{a_{i+1}} ∫_{a_i}^{a_{i+1}} K(x − y) dν_â(y) dν_â(x),   (A.5)
    e^{(1)}_n(Q; â) = ∫_{a_0}^{a_{n+1}} Q(x) dν_â(x) − Σ_{i=1}^{n} Q(a_i),   (A.6)
    e^{(2)}_n(Q; â) = ∫_{a_0}^{a_{n+1}} Q(x) dν_â(x) − ∫_R Q(x) dμ*_n(x)   (A.7)

for â. By using these expressions, we can give a preliminary upper bound of F^C_{K,Q}(n) − F^D_{K,Q}(n) as shown in the following lemma.

Lemma A.2.
Let a* = (a*_1, ..., a*_n) ∈ R̃^n be the minimizer of I_n^D, and choose a*_0 and a*_{n+1} such that â* = (a*_0, a*_1, ..., a*_n, a*_{n+1}) ∈ R̃^{n+2}. Then, we have

    F^C_{K,Q}(n) − F^D_{K,Q}(n) ≤ Σ_{i=1}^n S_i(â*) + Σ_{i=0}^n T_i(â*) + e^{(1)}_n(Q; â*) + e^{(2)}_n(Q; â*).   (A.8)

Proof.
We consider a general element â = (a_0, a_1, ..., a_n, a_{n+1}) ∈ R̃^{n+2}. By noting the convexity of K, for x ∈ (a_i, a_{i+1}) we have

    ( ∫_{a_0}^{a_{n+1}} − ∫_{a_i}^{a_{i+1}} ) K(x − y) dν_â(y) ≤ Σ_{j=1}^n K(x − a_j)
      ⇔ ∫_{a_0}^{a_{n+1}} K(x − y) dν_â(y) ≤ Σ_{j=1}^n K(x − a_j) + ∫_{a_i}^{a_{i+1}} K(x − y) dν_â(y).   (A.9)

In a similar manner, we have

    ∫_{a_0}^{a_{n+1}} K(a_i − y) dν_â(y) ≤ Σ_{j=1, j≠i}^n K(a_i − a_j) + ∫_{a_{i−1}}^{a_{i+1}} K(a_i − y) dν_â(y).   (A.10)

Then, by using Lemma A.1, the fact that ν_â ∈ M_c(R, n + 1), and Inequality (A.9), we can bound the optimal value I_n^C(μ*_n) from above as follows:

    I_n^C(μ*_n) ≤ I_{n+1}^C(μ*_{n+1}) ≤ I_{n+1}^C(ν_â)
      = ∫_{a_0}^{a_{n+1}} ∫_{a_0}^{a_{n+1}} K(x − y) dν_â(y) dν_â(x) + 2 ∫_{a_0}^{a_{n+1}} Q(x) dν_â(x)
      ≤ Σ_{i=1}^n R_i(â) + Σ_{i=0}^n T_i(â) + 2 ∫_{a_0}^{a_{n+1}} Q(x) dν_â(x),

where

    R_i(â) = ∫_{a_0}^{a_{n+1}} K(x − a_i) dν_â(x).

Furthermore, by using Inequality (A.10), we have

    R_i(â) ≤ Σ_{j=1, j≠i}^n K(a_i − a_j) + ∫_{a_{i−1}}^{a_{i+1}} K(a_i − y) dν_â(y).

Therefore, for a = (a_1, ..., a_n) ∈ R̃^n we have

    I_n^C(μ*_n) ≤ Σ_{i=1}^n Σ_{j=1, j≠i}^n K(a_i − a_j) + Σ_{i=1}^n S_i(â) + Σ_{i=0}^n T_i(â) + 2 ∫_{a_0}^{a_{n+1}} Q(x) dν_â(x)
      ≤ I_n^D(a) + Σ_{i=1}^n S_i(â) + Σ_{i=0}^n T_i(â) + 2 e^{(1)}_n(Q; â).

Finally, by letting a = a* and choosing a*_0 and a*_{n+1} such that â* = (a*_0, a*_1, ...
, a*_n, a*_{n+1}) ∈ R̃^{n+2}, we have

    F^C_{K,Q}(n) ≤ I_n^D(a*) + Σ_{i=1}^n S_i(â*) + Σ_{i=0}^n T_i(â*) + 2 e^{(1)}_n(Q; â*) − ∫_R Q(x) dμ*_n(x)
      = F^D_{K,Q}(n) + Σ_{i=1}^n S_i(â*) + Σ_{i=0}^n T_i(â*) + e^{(1)}_n(Q; â*) + e^{(2)}_n(Q; â*),

which is Inequality (A.8).

In order to bound S_i(â*) and T_i(â*) from above, we use the fact that the function K given by (2.13) satisfies

    K(x) ≤ −log |(tanh(π/(4d))) x| ≤ −log|x| + c_d   for x with |x| ≤ 1,   (A.11)

where the constant c_d > 0 is given by

    c_d = −log(tanh(π/(4d))).   (A.12)

Lemma A.3.
Let a* = (a*_1, ..., a*_n) ∈ R̃^n be the minimizer of I_n^D satisfying (A.1), and let a*_0 and a*_{n+1} be chosen such that â* = (a*_0, a*_1, ..., a*_n, a*_{n+1}) ∈ R̃^{n+2} and h_{a*} ≤ |a*_{j+1} − a*_j| ≤ 1 for j = 0, n, where h_{a*} is the separation distance given by (A.3). Then, for i = 1, ..., n, we have

    S_i(â*) ≤ −2 log h_{a*} + 2(1 + c_d),   (A.13)
    T_i(â*) ≤ −log h_{a*} + 1/2 + c_d.   (A.14)

Proof. We begin with S_i(â*). By using (A.11), we have

    S_i(â*) ≤ −(1/(a*_i − a*_{i−1})) ∫_{a*_{i−1}}^{a*_i} log|a*_i − y| dy
              − (1/(a*_{i+1} − a*_i)) ∫_{a*_i}^{a*_{i+1}} log|a*_i − y| dy + 2c_d
      = −log|a*_{i+1} − a*_i| − log|a*_i − a*_{i−1}| + 2(1 + c_d)
      ≤ −2 log h_{a*} + 2(1 + c_d).

Next, we bound T_i(â*) from above. For the inner integral in (A.5), we use (A.11) to obtain

    ∫_{a*_i}^{a*_{i+1}} K(x − y) dν_{â*}(y)
      ≤ −(1/(a*_{i+1} − a*_i)) [ (a*_{i+1} − x) log(a*_{i+1} − x) + (x − a*_i) log(x − a*_i) ] + c_d.

Therefore, we have

    T_i(â*) ≤ −(1/(a*_{i+1} − a*_i)²) [ (a*_{i+1} − a*_i)² log(a*_{i+1} − a*_i) −
(1/2)(a*_{i+1} − a*_i)² ] + c_d
      = −log(a*_{i+1} − a*_i) + 1/2 + c_d
      ≤ −log h_{a*} + 1/2 + c_d.

Here we are in a position to provide an upper bound of F^C_{K,Q}(n) − F^D_{K,Q}(n).

Proposition A.4.
Let a* = (a*_1, ..., a*_n) ∈ R̃^n be the minimizer of I_n^D satisfying (A.1), and let a*_0 and a*_{n+1} be chosen such that â* = (a*_0, a*_1, ..., a*_n, a*_{n+1}) ∈ R̃^{n+2} and h_{a*} ≤ |a*_{j+1} − a*_j| ≤ 1 for j = 0, n, where h_{a*} is the separation distance given by (A.3). Then, we have

    F^C_{K,Q}(n) − F^D_{K,Q}(n) ≤ −(3n + 1) log h_{a*} + C_n + e^{(1)}_n(Q; â*) + e^{(2)}_n(Q; â*),   (A.15)

where C_n = (
5/2 + 3c_d) n + 1/2 + c_d.

Proof.
By combining Inequalities (A.8), (A.13) and (A.14), we have (A.15).

In addition, as a corollary of Proposition A.4, we can provide another error estimate of the proposed formula in terms of F^C_{K,Q}(n).

Corollary A.5.
On the same assumption as Proposition A.4, we have

    sup_{f ∈ H^∞(D_d, w), ||f|| ≤ 1} ( sup_{x ∈ R} |f(x) − L_n[a*; f](x)| )
      ≤ Ĉ_n D̂_n (h_{a*})^{−(3+1/n)} exp(−F^C_{K,Q}(n)/n),   (A.16)

where Ĉ_n = exp(C_n/n) and D̂_n = exp[ (e^{(1)}_n(Q; â*) + e^{(2)}_n(Q; â*))/n ].

Proof.
The conclusion follows from Inequalities (4.2) and (A.15).

Remark A.1.
We have not succeeded in estimating the separation distance h_{a*} yet. However, from numerical experiments, we observed that

    h_{a*} ∼ n^{−1/2}      (Q(x) ≈ β|x|),
    h_{a*} ∼ (log n)/n     (Q(x) ≈ β exp(|x|)).   (A.17)

Furthermore, in [17, Section 5.2] we have had the rough estimates

    F^C_{K,Q}(n)/n ∼ n^{1/2}      (Q(x) ≈ β|x|),
    F^C_{K,Q}(n)/n ∼ n/log n      (Q(x) ≈ β exp(|x|)).   (A.18)

Therefore, we expect that

    Ĉ_n (h_{a*})^{−(3+1/n)} exp(−F^C_{K,Q}(n)/n)
      ∼ n^{(3+1/n)/2} exp(−c′ n^{1/2})           (Q(x) ≈ β|x|),
      ∼ (n/log n)^{3+1/n} exp(−c″ n/log n)       (Q(x) ≈ β exp(|x|)).

However, precise estimates such as (A.17) and (A.18) may be difficult. Therefore, these estimates, as well as the estimate of D̂_n