Algorithms for Nonnegative C^2(\mathbb{R}^2) Interpolation
arXiv preprint [math.CA]
Fushuai Jiang, Garving K. Luli

February 12, 2021
Abstract
Let E ⊂ R^2 be a finite set, and let f : E → [0,∞). In this paper, we address the algorithmic aspects of nonnegative C^2 interpolation in the plane. Specifically, we provide an efficient algorithm to compute a nonnegative C^2(R^2) extension of f with norm within a universal constant factor of the least possible. We also provide an efficient algorithm to approximate the trace norm.

For integers m ≥ 0, n ≥ 1, we write C^m(R^n) to denote the Banach space of m-times continuously differentiable real-valued functions such that the following norm is finite:

‖F‖_{C^m(R^n)} := sup_{x ∈ R^n} max_{|α| ≤ m} |∂^α F(x)|.

We write C^m_+(R^n) to denote the collection of nonnegative functions in C^m(R^n). Let E ⊂ R^n be finite. We write C^m_+(E) to denote the collection of functions f : E → [0,∞). If S is a finite set, we write #(S) to denote the number of elements in S. We use C to denote constants that depend only on m and n.

In this paper, we provide algorithmic solutions to the following problems for m = n = 2. These algorithms were announced in [11, 12].

Problem 1.
Let E ⊂ R^n be a finite set. Let f : E → [0,∞). Compute the order of magnitude of

(1.1)  ‖f‖_{C^m_+(E)} := inf { ‖F‖_{C^m(R^n)} : F|_E = f and F ≥ 0 }.

Problem 2.
Let E ⊂ R^n be a finite set. Let f : E → [0,∞). Compute a nonnegative function F ∈ C^m(R^n) such that F|_E = f and ‖F‖_{C^m(R^n)} ≤ C‖f‖_{C^m_+(E)}.

By "order of magnitude" we mean the following: Two quantities M and M̃ determined by E, f, m, n are said to have the same order of magnitude provided that C^{-1}M ≤ M̃ ≤ CM, with C depending only on m and n. To compute the order of magnitude of M̃ is to compute a number M such that M and M̃ have the same order of magnitude.

By "computing a function F" from (E, f), we mean the following: After processing the input (E, f), we are able to accept queries consisting of a point x ∈ R^n, and produce a list of numbers (f_α(x) : |α| ≤ m). The algorithm "computes the function F" if for each x ∈ R^n, we have ∂^α F(x) = f_α(x) for |α| ≤ m.

Problem 2 is an open problem posed in [7], and Problem 1 is closely related to Problem 2. The theoretical aspects of the problems for m = n = 2 were addressed in [11, 12]. We refer the readers to [11, 12] for a more thorough discussion of the problems.

In this paper, we content ourselves with an idealized computer with standard von Neumann architecture that is able to process exact real numbers. We refer the readers to [10] for a discussion of finite-precision computing.

In [11], we proved the following.

Theorem 1.
Let E ⊂ R^2 be a finite set. There exist (universal) constants C, D, and a map E : C^2_+(E) × [0,∞) → C^2_+(R^2) such that the following hold.

(A) Let M ≥ 0. Then for all f ∈ C^2_+(E) with ‖f‖_{C^2_+(E)} ≤ M, we have E(f, M) = f on E and ‖E(f, M)‖_{C^2(R^2)} ≤ CM.

(B) For each x ∈ R^2, there exists a set S(x) ⊂ E with #(S(x)) ≤ D such that for all M ≥ 0 and f, g ∈ C^2_+(E) with ‖f‖_{C^2_+(E)}, ‖g‖_{C^2_+(E)} ≤ M and f|_{S(x)} = g|_{S(x)}, we have ∂^α E(f, M)(x) = ∂^α E(g, M)(x) for |α| ≤ 2.

A few remarks on Theorem 1 are in order. First of all, in [12], we showed that the extension operator E cannot be linear in general. The constant D appearing in Theorem 1 is called the depth of the extension operator E. This generalizes the notion of the depth of a linear extension operator first studied by C. Fefferman in [4, 5] (for further discussion on the depth of linear extension operators, see also G. K. Luli [13]). The depth of an extension operator (both linear and nonlinear) measures the computational complexity of the extension. The existence of a linear extension operator of bounded depth is one of the main ingredients for the Fefferman-Klartag [9, 10] and Fefferman [6] algorithms for solving the interpolation problems without the nonnegativity constraint; the algorithms in [6, 9, 10] are likely essentially the best possible.

In this paper, we will provide another proof of Theorem 1, but with algorithmic complexity in mind. This is the content of Theorem 2.

We start with a definition.

Definition 1.1.
Let N̄ ≥ 1 be an integer. Let B = {ξ_1, ..., ξ_N̄} be a basis of R^N̄. Let Ω ⊂ R^N̄ be a subset. Let X be a set. Let Ξ : Ω → X be a map.

• We say Ξ has depth D if there exists a D-dimensional subspace V = span(ξ_{i_1}, ..., ξ_{i_D}), with ξ_{i_1}, ..., ξ_{i_D} ∈ B, such that for all z_1, z_2 ∈ Ω with π_V(z_1) = π_V(z_2), we have Ξ(z_1) = Ξ(z_2). Here, π_V : R^N̄ → V is the natural projection.

• Suppose Ξ has depth D. Let V = span(ξ_{i_1}, ..., ξ_{i_D}) be as above. By an efficient representation of Ξ, we mean a specification of the index set {i_1, ..., i_D} ⊂ {1, ..., N̄} and an algorithm to compute a map Ξ̃ : Ω ∩ V → X in C_D operations; i.e., given an input ω ∈ Ω ∩ V, we can compute Ξ̃(ω) in C_D operations. Here, the map Ξ̃ agrees with Ξ on Ω ∩ V, and C_D is a constant depending only on D.

Note that in general, the set Ω may have complicated geometry. For the purpose of this paper, we will only consider the cases where Ω is some Euclidean space or the first quadrant of some Euclidean space.

Remark 1.1. Suppose Ξ : R^N̄ → R is a linear functional. Recall from [10] that a "compact representation" of a linear functional Ξ : R^N̄ → R consists of a list of indices {i_1, ..., i_D} ⊂ {1, ..., N̄} and a list of coefficients χ_{i_1}, ..., χ_{i_D}, so that the action of Ξ is characterized by

Ξ : (ξ_1, ..., ξ_N̄) ↦ Σ_{Δ=1}^{D} χ_{i_Δ} · ξ_{i_Δ}.

Therefore, given v ∈ span(ξ_{i_1}, ..., ξ_{i_D}), we can compute Ξ(v) by computing the dot product of two vectors of length D, which requires CD operations. The present notion of "efficient representation" is a natural generalization adapted to the nonlinear nature of nonnegative interpolation (see [11, 12]). Since a nonlinear map in general does not admit a simple representation, we emphasize the complexity of an extension operator rather than its structure.

We think of C^2_+(E) ≅ [0,∞)^N.
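For illustration, the compact representation described in Remark 1.1 can be sketched as follows (a minimal sketch; the function name and the 0-based indexing are our own):

```python
def make_compact_rep(indices, coeffs):
    """Store a depth-D linear functional Xi on R^Nbar by the index list
    {i_1, ..., i_D} and the coefficients chi_{i_1}, ..., chi_{i_D} alone."""
    assert len(indices) == len(coeffs)
    def evaluate(xi):
        # The action of Xi is a dot product of two length-D vectors,
        # so a query costs O(D) operations regardless of Nbar.
        return sum(c * xi[i] for i, c in zip(indices, coeffs))
    return evaluate

# Xi(xi) = 2*xi_1 + 5*xi_4 on R^5, depth D = 2 (0-based indices 0 and 3)
Xi = make_compact_rep([0, 3], [2.0, 5.0])
print(Xi([1.0, 9.0, 9.0, 1.0, 9.0]))  # -> 7.0
```

An efficient representation in the sense of Definition 1.1 plays the same role for nonlinear maps: only the D relevant coordinates of the input are ever consulted.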
We use the standard orthonormal frame of R^N as a basis for the purpose of defining finite depth. We write P^+ to denote the vector space of polynomials with degree no greater than two, and we write J^+_x F to denote the two-jet of F at x.

The main theorem of the paper is the following.

Theorem 2.
Suppose we are given a finite set E ⊂ R^2 with #(E) = N. Then there exists a collection of maps {Ξ_x : x ∈ R^2}, where Ξ_x : C^2_+(E) × [0,∞) → P^+ for each x ∈ R^2, such that the following hold.

(A) There exists a universal constant D such that for each x ∈ R^2, the map Ξ_x(·, ·) : C^2_+(E) × [0,∞) → P^+ is of depth D.

(B) Suppose we are given (f, M) ∈ C^2_+(E) × [0,∞) with ‖f‖_{C^2_+(E)} ≤ M. Then there exists a function F ∈ C^2_+(R^2) such that

J^+_x F = Ξ_x(f, M) for all x ∈ R^2,  ‖F‖_{C^2(R^2)} ≤ CM,  and F(x) = f(x) for x ∈ E.

(C) There is an algorithm that takes the given data set E, performs one-time work, and then responds to queries. A query consists of a point x ∈ R^2, and the response to the query is the depth-D map Ξ_x, given in its efficient representation (see Definition 1.1). The one-time work takes CN log N operations and CN storage. The work to answer a query is C log N.

Remark 1.2. Theorem 2(C) implies that for each x ∈ R^2, there exists a set S(x) ⊂ E with #(S(x)) ≤ D such that for all (f, M), (g, M) ∈ C^2_+(E) × [0,∞) with ‖f‖_{C^2_+(E)}, ‖g‖_{C^2_+(E)} ≤ M and f|_{S(x)} = g|_{S(x)}, we have Ξ_x(f, M) = Ξ_x(g, M). Moreover, after one-time work using at most CN log N operations and CN storage, we can perform the following task: given x ∈ R^2, we can produce the set S(x) using no more than C log N operations.

Using Theorem 2, we obtain an algorithmic version of the Sharp Finiteness Principle (see Theorem 5 in [12]):

Theorem 3 (Algorithmic Sharp Finiteness Principle). Let E ⊂ R^2 with #(E) = N < ∞. Then there exist universal constants C_1, C_2, C_3, C_4, C_5 and a list of subsets S_1, S_2, ..., S_L ⊂ E satisfying the following.
(A) We can compute the list {S_ℓ : ℓ = 1, ..., L} from E, using one-time work of at most C_1 N log N operations and using storage at most C_2 N.

(B) #(S_ℓ) ≤ C_3 for each ℓ = 1, ..., L.

(C) L ≤ C_4 N.

(D) Given any f : E → [0,∞), we have

max_{ℓ=1,...,L} ‖f‖_{C^2_+(S_ℓ)} ≤ ‖f‖_{C^2_+(E)} ≤ C_5 max_{ℓ=1,...,L} ‖f‖_{C^2_+(S_ℓ)}.

Theorem 3 without condition (A) is the same as Theorem 5 in [12]. In this paper, we will prove Theorem 3 via Theorem 2. Our approach yields an alternate proof of Theorem 5 in [12]. The list of subsets {S_ℓ : ℓ = 1, ..., L} that arises in this paper may be different from that in Theorem 5 of [12]; it would be interesting to understand the relationship between them.

Using Theorem 3, we can produce Algorithm 1, solving Problem 1.

Algorithm 1
Nonnegative C^2(R^2) Interpolation Algorithm - Trace Norm
DATA: E ⊂ R^2 finite with #(E) = N.

QUERY: f : E → [0,∞).

RESULT: The order of magnitude of ‖f‖_{C^2_+(E)}. More precisely, the algorithm outputs a number M ≥ 0 such that both of the following hold.

– We guarantee the existence of a function F ∈ C^2_+(R^2) such that F|_E = f and ‖F‖_{C^2(R^2)} ≤ CM.

– We guarantee there exists no F ∈ C^2_+(R^2) with norm at most M satisfying F|_E = f.

COMPLEXITY:
– Preprocessing E: at most CN log N operations and CN storage.

– Answer query: at most CN operations.

Using Theorem 2, we can produce Algorithm 2, solving Problem 2.

Algorithm 2

Nonnegative C^2(R^2) Interpolation Algorithm - Interpolant
DATA: E ⊂ R^2 finite with #(E) = N; f : E → [0,∞); M ≥ 0.

ORACLE: ‖f‖_{C^2_+(E)} ≤ M.

RESULT: A query function that accepts a point x ∈ R^2 and produces a list of numbers (f_α(x) : |α| ≤ 2) with the following guarantee: there exists a function F ∈ C^2_+(R^2) with ‖F‖_{C^2(R^2)} ≤ CM and F|_E = f, such that ∂^α F(x) = f_α(x) for |α| ≤ 2. The function F is independent of the query point x, and is uniquely determined by (E, f, M).

COMPLEXITY:
– Preprocessing E: at most CN log N operations and CN storage.

– Answer query: at most C log N operations.

Theorem 2 also yields Algorithm 3 for computing the representative sets S_ℓ in Theorem 3.

Algorithm 3
Nonnegative C^2(R^2) Interpolation Algorithm - Representative Sets
DATA: E ⊂ R^2 finite with #(E) = N.

RESULT: A query (set-valued) function that accepts a point x ∈ R^2 and produces a subset S(x) ⊂ E, where S(x) agrees with that in Remark 1.2.

COMPLEXITY:
– Preprocessing E: at most CN log N operations and CN storage.

– Answer query: at most C log N operations.

To see how to produce Algorithm 3 from Theorem 2, we simply note that each map Ξ_x in Theorem 2 is stored in its efficient representation (see Definition 1.1). Thus, the set S(x) is given by the corresponding set of indices in the efficient representation of Ξ_x.

Acknowledgment.
We are indebted to Jesús A. De Loera, Charles Fefferman, Kevin O'Neill, Naoki Saito, and Pavel Shvartsman for their valuable comments. We also thank all the participants of the 11th Whitney workshop for fruitful discussions, and Trinity College Dublin for hosting the workshop.

This project is supported by NSF Grant DMS-1554733. The first author is supported by the UC Davis Summer Graduate Student Researcher Award and the Alice Leung Scholarship in Mathematics. The second author is supported by the UC Davis Chancellor's Fellowship.
We use c_*, C_*, C′, etc., to denote universal constants. They may be different quantities in different occurrences. We will label them to avoid confusion when necessary.

We assume that we are given an ordered orthogonal coordinate system on R^2, specified by a pair of unit vectors [e_1, e_2]. We use |·| to denote the Euclidean distance. We use B(x, r) to denote the disk of radius r centered at x. For X, Y ⊂ R^2, we write dist(X, Y) := inf_{x∈X, y∈Y} |x − y|.

We use α = (α_1, α_2), β = (β_1, β_2) ∈ N_0^2, etc., to denote multi-indices. We write ∂^α to denote ∂^{α_1}_{e_1}∂^{α_2}_{e_2}. We adopt the partial ordering α ≤ β if and only if α_i ≤ β_i for i =
1, 2.

By a square, we mean a set of the form Q = [a, a + δ) × [b, b + δ) for some a, b ∈ R and δ > 0. If Q is a square, we write δ_Q to denote the sidelength of Q. For λ > 0, we use λQ to denote the square whose center is that of Q and whose sidelength is λδ_Q. Given two squares Q, Q′, we write Q ↔ Q′ if closure(Q) ∩ closure(Q′) ≠ ∅.

A dyadic square is a square of the form Q = [2^k · i, 2^k · (i + 1)) × [2^k · j, 2^k · (j + 1)) for some i, j, k ∈ Z. Each dyadic square Q is contained in a unique dyadic square with sidelength 2δ_Q, denoted by Q^+.

Let Ω ⊂ R^n be a set with nonempty interior Ω° such that Ω ⊂ closure(Ω°). For nonnegative integers m, n, we use C^m(Ω) to denote the vector space of m-times continuously differentiable real-valued functions up to the closure of Ω whose derivatives up to order m are bounded. For F ∈ C^m(Ω), we define ‖F‖_{C^m(Ω)} := sup_{x∈Ω} max_{|α|≤m} |∂^α F(x)|. We write C^m_+(Ω) to denote the collection of functions F ∈ C^m(Ω) such that F ≥ 0 on Ω.

Let E ⊂ R^n be finite. We define the following:

C^m(E) := {f : E → R} ≅ R^{#(E)} and ‖f‖_{C^m(E)} := inf{‖F‖_{C^m(R^n)} : F|_E = f};

C^m_+(E) := {f : E → [0,∞)} ≅ [0,∞)^{#(E)} and ‖f‖_{C^m_+(E)} := inf{‖F‖_{C^m(R^n)} : F|_E = f and F ≥ 0}.

We write P to denote the vector space of affine polynomials on R^2; it is a three-dimensional vector space. We use P^+ to denote the vector space of polynomials on R^2 with degree no greater than two; it is a six-dimensional vector space.

For x ∈ R^2 and a function F twice continuously differentiable at x, we write J_x F, J^+_x F to denote the one-jet, two-jet of F at x, respectively, which we identify with the degree-one, degree-two Taylor polynomials, respectively:

(2.1)  J_x F(y) := Σ_{|α|≤1} (∂^α F(x)/α!)(y − x)^α, and J^+_x F(y) := Σ_{|α|≤2} (∂^α F(x)/α!)(y − x)^α.

We use R_x, R^+_x to denote the rings of one-jets, two-jets at x, respectively.
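As a small illustration of the dyadic structure just described, the parent square Q^+ can be read off from integer coordinates (a sketch; the (i, j, k) encoding of a dyadic square is our own):

```python
def dyadic_parent(i, j, k):
    """For the dyadic square Q = [2^k i, 2^k (i+1)) x [2^k j, 2^k (j+1)),
    return the coordinates of the unique dyadic square Q^+ with sidelength
    2 * delta_Q that contains Q."""
    # Halving the integer coordinates (floor division) moves one level up.
    return (i // 2, j // 2, k + 1)

# Q = [0.25, 0.5) x [0.75, 1.0) is (i, j, k) = (1, 3, -2);
# its parent is (0, 1, -1), i.e., [0, 0.5) x [0.5, 1.0).
print(dyadic_parent(1, 3, -2))  # -> (0, 1, -1)
```

Floor division handles negative coordinates correctly in Python, so squares in all four quadrants are covered by the same formula.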
The multiplications on R_x and R^+_x are defined in the following way:

P ⊙_x R := J_x(PR) and P^+ ⊙^+_x R^+ := J^+_x(P^+ R^+), for P, R ∈ R_x and P^+, R^+ ∈ R^+_x.

Let S ⊂ R^n be a nonempty finite set. A Whitney field on S is an array of polynomials ~P := (P_x)_{x∈S}, where P_x ∈ R_x for each x ∈ S.
Given ~P = (P_x)_{x∈S}, we sometimes use the notation (~P, x) := P_x for x ∈ S.
We write W(S) to denote the vector space of all Whitney fields on S. For ~P = (P_x)_{x∈S} ∈ W(S), we define

‖~P‖_{W(S)} := max_{x∈S, |α|≤1} |∂^α P_x(x)| + max_{x,y∈S, x≠y, |α|≤1} |∂^α(P_x − P_y)(x)| · |x − y|^{|α|−2}.

We note that ‖·‖_{W(S)} is a norm on W(S).

We write W_+(S) to denote the subcollection of W(S) such that ~P ∈ W_+(S) if and only if for each x ∈ S, there exists some M_x ≥ 0 such that

(2.2)  (~P, x)(y) + M_x|y − x|^2 ≥ 0 for all y ∈ R^2.

For ~P ∈ W_+(S), we define

‖~P‖_{W_+(S)} := ‖~P‖_{W(S)} + max_{x∈S} ( inf { M_x ≥ 0 : (~P, x)(y) + M_x|y − x|^2 ≥ 0 for all y ∈ R^2 } ).

The next lemma is a Taylor-Whitney correspondence for C^2_+(R^2). (A) is simply Taylor's theorem. See [8, 12] for a proof of (B).

Lemma 2.1.
There exists a universal constant C_w such that the following holds. Let E ⊂ R^2 be a finite set.

(A) Let F ∈ C^2_+(R^2). Let ~P := (J_x F)_{x∈E}. Then ~P ∈ W_+(E) and ‖~P‖_{W_+(E)} ≤ C_w‖F‖_{C^2(R^2)}.

(B) There exists a map T^E_w : W_+(E) → C^2_+(R^2) such that ‖T^E_w(~P)‖_{C^2(R^2)} ≤ C_w‖~P‖_{W_+(E)} and J_x T^E_w(~P) = (~P, x) for each x ∈ E.

Let S ⊂ R^2 be a finite set. We define the following two functions:

(2.3)  Q = Q_S : W(S) → [0,∞), ~P = (P_x)_{x∈S} ↦ Σ_{x∈S, |α|≤1} |∂^α P_x(x)| + Σ_{x,y∈S, x≠y, |α|≤1} |∂^α(P_x − P_y)(x)| · |x − y|^{|α|−2},

and

(2.4)  M = M_S : W(S) → [0,∞], ~P = (P_x)_{x∈S} ↦ Σ_{x∈S} |∇P_x|^2 / P_x(x) if P_x(x) ≥ 0 for each x ∈ S, and M_S(~P) := ∞ if there exists x ∈ S such that P_x(x) < 0.

In (2.4), we use the conventions that 0/0 = 0 and a/0 = ∞ for a > 0.

Lemma 2.2.
Let S ⊂ R^2 be a finite set with #(S) ≤ N_0 for some universal constant N_0. Let Q and M be as in (2.3) and (2.4). Then there exists a universal constant C such that

(2.5)  C^{-1}‖~P‖_{W_+(S)} ≤ (Q + M)(~P) ≤ C‖~P‖_{W_+(S)} for all ~P ∈ W_+(S).

Moreover, ~P ∈ W(S) \ W_+(S) if and only if M(~P) = ∞.

Proof. We write
C, C′, etc., to denote universal constants. Fix ~P = (P_x)_{x∈S} ∈ W_+(S).

Suppose (Q + M)(~P) ≤ M. We want to show that

(2.6)  ‖~P‖_{W_+(S)} ≤ CM.
Since each summand in the definition of Q in (2.3) is nonnegative, we have

(2.7)  max_{x∈S, |α|≤1} |∂^α P_x(x)| ≤ CM and max_{x,y∈S, x≠y, |α|≤1} |∂^α(P_x − P_y)(x)| · |x − y|^{|α|−2} ≤ CM.
Since M(~P) ≤ M, we have

(2.8)  |∇P_x|^2 ≤ M P_x(x) for x ∈ S.

Therefore, we have

(2.9)  P_x(y) + (M/4)|y − x|^2 ≥ 0 for all y ∈ R^2, x ∈ S.

By the definition of ‖·‖_{W_+(S)}, we see that (2.6) follows from (2.7) and (2.9).

Suppose ‖~P‖_{W_+(S)} ≤ M. We want to show that

(2.10)  (Q + M)(~P) ≤ CM.
By the definition of ‖·‖_{W_+(S)}, we know that

(2.11)  max_{x∈S, |α|≤1} |∂^α P_x(x)| ≤ M, max_{x,y∈S, x≠y, |α|≤1} |∂^α(P_x − P_y)(x)| · |x − y|^{|α|−2} ≤ M,

and

(2.12)  P_x(y) + M|y − x|^2 ≥ 0 for all y ∈ R^2, x ∈ S.
It follows from (2.11) that

(2.13)  Q(~P) ≤ CM.

For each x ∈ S, restricting P_x to each line in R^2 passing through x and computing the discriminant, we can conclude from (2.12) that

(2.14)  |∇P_x|^2 ≤ CM P_x(x) for x ∈ S.

It follows from (2.14) that

(2.15)  M(~P) ≤ CM.

(Recall that we use the convention 0/0 = 0.) (2.10) then follows from (2.13) and (2.15). This proves (2.5).

Now we turn to the second statement. Suppose ~P ∈ W(S) is such that M(~P) = ∞. Then at least one of the following holds:

• P_x(x) < 0 for some x ∈ S, in which case condition (2.2) fails for such P_x, so ~P ∉ W_+(S).

• There exists x ∈ S such that P_x(x) = 0 but ∇P_x ≠ 0, in which case condition (2.2) fails for such P_x, so ~P ∉ W_+(S).

In conclusion, we have ~P ∉ W_+(S).

Conversely, suppose ~P ∉ W_+(S). Then there exists x ∈ S such that condition (2.2) fails for P_x. This means that either P_x(x) < 0, or P_x(x) = 0 but ∇P_x ≠ 0. In either case, we have M(~P) = ∞. Lemma 2.2 is proved.

For the rest of the subsection, we fix a finite set S ⊂ R^2 with #(S) ≤ N_0, where N_0 is a universal constant. We also fix a function f : S → [0,∞). We explain how to compute the order of magnitude of ‖f‖_{C^2_+(S)}.

We adopt the following notation: for A, B ≥ 0, we write A ≈ B if there exists a universal constant C such that C^{-1}A ≤ B ≤ CA.

We define an affine subspace A_f ⊂ W(S) by

A_f := { ~P = (P_x)_{x∈S} ∈ W(S) : P_x(x) = f(x), and f(x) = 0 ⟹ ∇P_x = 0, for x ∈ S }
    = { ~P = (P_x)_{x∈S} ∈ W_+(S) : P_x(x) = f(x) for x ∈ S }.

Note that A_f has dimension 2 · (#(S) − #(f^{-1}(0))).

Let Q and M be as in (2.3) and (2.4). By Lemma 2.1 and Lemma 2.2,

(2.16)  ‖f‖_{C^2_+(S)} ≈ inf { (Q + M)(~P) : ~P ∈ A_f }.

Let d := dim W(S) = #(S) · dim P = 3#(S). We identify W(S) ≅ R^d via (P_x)_{x∈S} ↦ (P_x(x), ∂_{e_1}P_x, ∂_{e_2}P_x)_{x∈S}.
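For intuition, the quantities (2.3) and (2.4) and the minimization in (2.16) can be evaluated numerically for a tiny S. The sketch below is our own illustration, not the exact KKT-based routine developed below: it stores a one-jet P_x as a pair (P_x(x), ∇P_x) and minimizes (Q + M) over A_f by cyclic coordinate descent with ternary search on each coordinate (valid since each one-variable restriction of the convex objective is convex); all step ranges and iteration counts are ad hoc.

```python
import math

def Q_S(jets):
    """Q of (2.3); `jets` maps a point x to (P_x(x), (d/de1 P_x, d/de2 P_x))."""
    pts = list(jets)
    total = sum(abs(v) + abs(g[0]) + abs(g[1]) for v, g in jets.values())
    for x in pts:
        for y in pts:
            if x == y:
                continue
            vx, gx = jets[x]
            vy, gy = jets[y]
            r = math.hypot(x[0] - y[0], x[1] - y[1])
            # (P_x - P_y)(x), with P_y(x) = P_y(y) + grad P_y . (x - y)
            diff0 = vx - (vy + gy[0] * (x[0] - y[0]) + gy[1] * (x[1] - y[1]))
            total += abs(diff0) * r ** (-2)                            # alpha = 0
            total += (abs(gx[0] - gy[0]) + abs(gx[1] - gy[1])) * r ** (-1)
    return total

def M_S(jets):
    """M of (2.4), with the conventions 0/0 = 0 and a/0 = infinity (a > 0)."""
    total = 0.0
    for v, g in jets.values():
        if v < 0:
            return math.inf
        gsq = g[0] ** 2 + g[1] ** 2
        if v == 0:
            if gsq > 0:
                return math.inf
        else:
            total += gsq / v
    return total

def trace_norm_estimate(S, f, sweeps=30):
    """Minimize (Q + M) over A_f as in (2.16): the free variables are the
    gradients at points where f > 0 (gradients must vanish where f = 0)."""
    free = [x for x in S if f[x] > 0]
    v = [0.0] * (2 * len(free))
    def obj(w):
        jets = {x: (f[x], (0.0, 0.0)) for x in S}
        for i, x in enumerate(free):
            jets[x] = (f[x], (w[2 * i], w[2 * i + 1]))
        return Q_S(jets) + M_S(jets)
    for _ in range(sweeps):
        for i in range(len(v)):
            lo, hi = -10.0, 10.0
            for _ in range(60):  # ternary search on a convex 1-D section
                m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
                w1 = v[:]; w1[i] = m1
                w2 = v[:]; w2[i] = m2
                if obj(w1) < obj(w2):
                    hi = m2
                else:
                    lo = m1
            v[i] = (lo + hi) / 2
    return obj(v)

# f = 1 at both points: the optimal jets have zero gradients and Q + M = 2
print(trace_norm_estimate([(0.0, 0.0), (1.0, 0.0)],
                          {(0.0, 0.0): 1.0, (1.0, 0.0): 1.0}))  # -> ~2.0
```

Since #(S) is universally bounded, such a local computation costs O(1) operations per subset, which is what Algorithm 1 relies on.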
We define the ℓ^1- and ℓ^2-norms, respectively, on R^d by the formulae

‖v‖_{ℓ^1} := Σ_{i=1}^d |v_i| and ‖v‖_{ℓ^2} := (Σ_{i=1}^d |v_i|^2)^{1/2} for v = (v_1, ..., v_d) ∈ R^d.

Consider the following objects.

• Let L_w : W(S) → R^d be a linear isomorphism that maps ~P ∈ W(S) to the vector in R^d with components

∂^α(P_y − P_z)(y) · |y − z|^{|α|−2} and ∂^α P_{x_S}(x_S), |α| ≤ 1,

for suitable x_S, y, z ∈ S in some order, so that

(2.17)  ‖L_w(~P)‖_{ℓ^1(R^d)} ≈ Q(~P) for ~P ∈ W(S).

The construction of such an L_w is based on the technique of "clustering" introduced in [1]; see Remark 3.3 of [1]. Since #(S) is universally bounded, we can compute L_w from S using at most C operations.

• Let V_f ⊂ W(S) be the subspace defined by

V_f := { (P_x)_{x∈S} : P_x(x) = 0 for x ∈ S \ f^{-1}(0), and P_x ≡ 0 for x ∈ f^{-1}(0) }.

Let Π_f = (Π^x_f)_{x∈S} : W(S) → V_f be the natural projection defined by Π^x_f(P_x) = (0, ∂_{e_1}P_x, ∂_{e_2}P_x). Let ~P_f ∈ W(S) denote the vector ~P_f := (f(x), 0, 0)_{x∈S}. It is clear that A_f = ~P_f + V_f.

• Let L_f = (L^x_f)_{x∈S} : W(S) → W(S) be the linear endomorphism defined by L^x_f(P_x) = P_x/√(f(x)) for x ∈ S \ f^{-1}(0) and L^x_f ≡ 0 for x ∈ f^{-1}(0). We see that

(2.18)  M(~P) ≈ ‖L_f Π_f(~P)‖²_{ℓ^2(R^d)} for ~P ∈ A_f.

Combining (2.17) and (2.18), we see that

(2.19)  (Q + M)(~P) ≈ ‖L_f Π_f(~P)‖²_{ℓ^2(R^d)} + ‖L_w(~P)‖_{ℓ^1(R^d)} for ~P ∈ A_f = ~P_f + V_f.

Let β := L_w(~P) and A := (L_f Π_f)^T(L_f Π_f). We see from (2.16) and (2.19) that computing the order of magnitude of ‖f‖_{C^2_+(S)} amounts to solving the following optimization problem:

(2.20)  minimize β^T Aβ + ‖β‖_{ℓ^1(R^d)} subject to L_w^{-1}β ∈ ~P_f + V_f.

We note that (2.20) is a convex quadratic programming problem with affine constraints. We can find the exact solution to (2.20) by solving the associated Karush-Kuhn-Tucker conditions, which consist of a bounded system of linear equalities and inequalities [2]. Thus, we can compute the order of magnitude of ‖f‖_{C^2_+(S)} in C operations. See Appendix A for details.

2.3 Essential convex sets

Definition 2.1.
Let E ⊂ R^2 be a finite set.

• For x ∈ R^2, S ⊂ E, and k ≥ 1, we define

σ(x, S) := { J_x φ : φ ∈ C^2(R^2), φ|_S = 0, and ‖φ‖_{C^2(R^2)} ≤ 1 }, and

(2.21)  σ^♯(x, k) := ⋂_{S ⊂ E, #(S) ≤ k} σ(x, S).

• Let f : E → [0,∞) be given. For x ∈ R^2, S ⊂ E, k ≥ 1, and M ≥ 0, we define

Γ_+(x, S, M, f) := { J_x F : F ∈ C^2_+(R^2), F|_S = f, and ‖F‖_{C^2(R^2)} ≤ M }, and

(2.22)  Γ^♯_+(x, k, M, f) := ⋂_{S ⊂ E, #(S) ≤ k} Γ_+(x, S, M, f).

Adapting the proof of the Finiteness Principle for nonnegative C^2(R^2) interpolation (Theorem 4 of [12]), we have the following.

Lemma 2.3.
There exists a universal constant C such that the following holds. Let E ⊂ R^2 be a finite set. Let σ and σ^♯ be as in Definition 2.1. Then for any x ∈ R^2,

C^{-1} · σ^♯(x, 16) ⊂ σ(x, E) ⊂ C · σ^♯(x, 16).

We will use the data structure introduced by Callahan and Kosaraju [3].
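For illustration, a decomposition with the properties of the next lemma can be produced by a fair-split recursion. The sketch below is our own simplified variant: it certifies well-separation through bounding boxes, assumes the input points are distinct, and makes no attempt at the O(N log N) work bound of [3].

```python
import math

def _bbox(pts):
    xs = [p[0] for p in pts]; ys = [p[1] for p in pts]
    return (min(xs), min(ys), max(xs), max(ys))

def _diag(b):  # upper bound for the diameter of the points in the box
    return math.hypot(b[2] - b[0], b[3] - b[1])

def _box_dist(a, b):  # lower bound for the distance between the point sets
    dx = max(a[0] - b[2], b[0] - a[2], 0.0)
    dy = max(a[1] - b[3], b[1] - a[3], 0.0)
    return math.hypot(dx, dy)

def _split(pts):
    # fair split: cut the bounding box in half along its longer side
    b = _bbox(pts)
    axis = 0 if b[2] - b[0] >= b[3] - b[1] else 1
    mid = (b[axis] + b[axis + 2]) / 2.0
    return ([p for p in pts if p[axis] <= mid],
            [p for p in pts if p[axis] > mid])

def wspd(pts, kappa):
    """List pairs (A, B) with diam A, diam B <= kappa * dist(A, B), covering
    every unordered pair of distinct points exactly once."""
    pairs = []
    def find(A, B):
        bA, bB = _bbox(A), _bbox(B)
        if max(_diag(bA), _diag(bB)) <= kappa * _box_dist(bA, bB):
            pairs.append((A, B))
        elif _diag(bA) >= _diag(bB):
            A1, A2 = _split(A); find(A1, B); find(A2, B)
        else:
            B1, B2 = _split(B); find(A, B1); find(A, B2)
    def build(P):
        if len(P) > 1:
            L, R = _split(P)
            find(L, R); build(L); build(R)
    build(pts)
    return pairs

pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (5.0, 5.0), (6.0, 5.0), (10.0, 0.0)]
pairs = wspd(pts, 0.5)
# each unordered pair of distinct points is covered exactly once:
print(sum(len(A) * len(B) for A, B in pairs) == 15)  # -> True
```

Splitting each emitted pair across both orders recovers the partition of E × E \ diagonal(E) in the sense of the lemma below; the recursion always terminates because singleton pairs are automatically well separated.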
Lemma 2.4 (Callahan-Kosaraju decomposition). Let E ⊂ R^n with #(E) = N < ∞. Let κ > 0. We can partition E × E \ diagonal(E) into subsets E′_1 × E′′_1, ..., E′_L × E′′_L satisfying the following.

(A) L ≤ C(κ, n)N.

(B) For each ℓ = 1, ..., L, we have diam E′_ℓ, diam E′′_ℓ ≤ κ · dist(E′_ℓ, E′′_ℓ).

(C) Moreover, we may pick x′_ℓ ∈ E′_ℓ and x′′_ℓ ∈ E′′_ℓ for each ℓ = 1, ..., L, such that the x′_ℓ, x′′_ℓ for ℓ = 1, ..., L can all be computed using at most C(κ, n)N log N operations and C(κ, n)N storage.

Here, C(κ, n) is a constant that depends only on κ and n.

Algorithm 1: Estimation of trace norm
In this section, we prove Theorem 3 assuming Theorem 2, whose proof will appear in Sections 5-7. With a slight tweak, the argument in the proof of Lemma 3.1 in [6] yields the following.
Lemma 3.1.
Let E ⊂ R^2 be a finite set. Let κ_0 > 0 be a constant that is sufficiently small. Let E′_ℓ, E′′_ℓ be as in Lemma 2.4 with κ = κ_0. Suppose ~P = (P_x)_{x∈E} ∈ W_+(E) satisfies the following.

(A) P_x ∈ Γ_+(x, ∅, M, f) for each x ∈ E, with Γ_+ as in (2.22).

(B) |∂^α(P_{x′_ℓ} − P_{x′′_ℓ})(x′′_ℓ)| ≤ M|x′_ℓ − x′′_ℓ|^{2−|α|} for |α| ≤ 1, ℓ = 1, ..., L.

Then ‖~P‖_{W_+(E)} ≤ CM.

Recall Lemma 3.2 of [6].
Lemma 3.2.
Let E ⊂ R^2 be a finite set. Let E′_ℓ and E′′_ℓ, ℓ = 1, ..., L, be as in Lemma 2.4. Then every x ∈ E arises as an x′_ℓ for some ℓ ∈ {1, ..., L}.

We now have all the ingredients for the proof of Theorem 3.
Proof of Theorem 3, assuming Theorem 2.
Let E ⊂ R^2 be a finite set. Let {Ξ_x : x ∈ R^2} be as in Theorem 2. For each x ∈ E, let S(x) be as in Remark 1.2. Let κ_0 be as in Lemma 3.1. Let (x′_ℓ, x′′_ℓ) ∈ E × E, ℓ = 1, ..., L, be as in Lemma 2.4 with κ = κ_0. We set

(3.1)  S_ℓ := {x′_ℓ, x′′_ℓ} ∪ S(x′_ℓ) ∪ S(x′′_ℓ), ℓ = 1, ..., L.

Conclusion (A) follows from Theorem 2(C), Remark 1.2, and Lemma 2.4. Conclusion (B) follows from Theorem 2(C) and Remark 1.2. Conclusion (C) follows from Lemma 2.4(C).

Now we verify conclusion (D). We modify the argument in [6]. Fix f : E → [0,∞). Set

(3.2)  M := max_{ℓ=1,...,L} ‖f‖_{C^2_+(S_ℓ)}.

Thanks to (3.2), we see that ‖f‖_{C^2_+(S_ℓ)} ≤ M for ℓ = 1, ..., L. Thus, for each ℓ = 1, ..., L, there exists F_ℓ ∈ C^2_+(R^2) such that

(3.3)  ‖F_ℓ‖_{C^2(R^2)} ≤ 2M and F_ℓ(x) = f(x) for x ∈ S_ℓ.

Fix such F_ℓ. For ℓ = 1, ..., L, we define

(3.4)  f_ℓ : E → [0,∞) by f_ℓ(x) := F_ℓ(x) for x ∈ E.

From (3.3) and (3.4), we see that

(3.5)  ‖f_ℓ‖_{C^2_+(E)} ≤ 2M for ℓ = 1, ..., L.

For ℓ = 1, ..., L, we define

(3.6)  P′_ℓ := J_{x′_ℓ}(Ξ_{x′_ℓ}(f_ℓ, 2M)) and P′′_ℓ := J_{x′′_ℓ}(Ξ_{x′′_ℓ}(f_ℓ, 2M)).

We will show that the assignment (3.6) unambiguously defines a Whitney field over E.

Claim 3.1.
Let ℓ_1, ℓ_2 ∈ {1, ..., L}.

(a) Suppose x′_{ℓ_1} = x′_{ℓ_2}. Then P′_{ℓ_1} = P′_{ℓ_2}.

(b) Suppose x′′_{ℓ_1} = x′′_{ℓ_2}. Then P′′_{ℓ_1} = P′′_{ℓ_2}.

(c) Suppose x′_{ℓ_1} = x′′_{ℓ_2}. Then P′_{ℓ_1} = P′′_{ℓ_2}.

Proof of Claim 3.1. We prove (a); the proofs of (b) and (c) are similar. Suppose x′_{ℓ_1} = x′_{ℓ_2} =: x_0. Let S(x_0) be as in Remark 1.2. By (3.1), we see that S(x_0) ⊂ S_{ℓ_1} ∩ S_{ℓ_2}. Therefore, we have f_{ℓ_1}(x) = f_{ℓ_2}(x) for x ∈ S(x_0). Thanks to Theorem 2(A), Remark 1.2, and (3.5), we see that Ξ_{x_0}(f_{ℓ_1}, 2M) = Ξ_{x_0}(f_{ℓ_2}, 2M). By (3.6), we see that P′_{ℓ_1} = P′_{ℓ_2}. This proves (a).

By Lemma 3.2, there exists a pair of maps:

(3.7)  a surjection π : {1, ..., L} → E such that π(ℓ) = x′_ℓ for ℓ = 1, ..., L, and an injection ρ : E → {1, ..., L} such that x′_{ρ(x)} = x for x ∈ E, i.e., π ∘ ρ = id_E.

The surjection π is determined by the Callahan-Kosaraju decomposition (Lemma 2.4), but the choice of ρ is not necessarily unique. Thanks to Claim 3.1 and the fact that E′_ℓ × E′′_ℓ ⊂ E × E \ diagonal(E), assignment (3.6) produces for each x ∈ E a uniquely defined polynomial

(3.8)  P_x = J_x(Ξ_x(f_{ρ(x)}, 2M)),

with Ξ_x as in Theorem 2 and ρ(x) as in (3.7). Note that, as shown in Claim 3.1, the polynomial P_x in (3.8) is independent of the choice of ρ as a right-inverse of π in (3.7).

Thanks to Theorem 2(B) and (3.5)-(3.8), for each ℓ = 1, ..., L, there exists a function F̃_ℓ ∈ C^2(R^2) such that

(3.9)  ‖F̃_ℓ‖_{C^2(R^2)} ≤ CM and F̃_ℓ ≥ 0 on R^2;

(3.10)  F̃_ℓ(x) = f_ℓ(x) = f(x) for x ∈ S_ℓ; and

(3.11)  J_{x′_ℓ}F̃_ℓ = P_{x′_ℓ} = J_{x′_ℓ}(Ξ_{x′_ℓ}(f_ℓ, 2M)), and J_{x′′_ℓ}F̃_ℓ = P_{x′′_ℓ} = J_{x′′_ℓ}(Ξ_{x′′_ℓ}(f_ℓ, 2M)).

Thanks to (3.9) and (3.10), we have

(3.12)  P_{x′_ℓ} ∈ Γ_+(x′_ℓ, {x′_ℓ}, CM, f) for ℓ = 1, ..., L.

Thanks to (3.9) and (3.11), we have

(3.13)  |∂^α(P_{x′_ℓ} − P_{x′′_ℓ})(x′′_ℓ)| ≤ CM|x′_ℓ − x′′_ℓ|^{2−|α|} for |α| ≤ 1, ℓ = 1, ..., L.

Therefore, by Lemma 3.1, (3.12), and (3.13), the Whitney field ~P = (P_x)_{x∈E}, with P_x as in (3.8), satisfies

~P ∈ W_+(E), P_x(x) = f(x) for x ∈ E, and ‖~P‖_{W_+(E)} ≤ CM.
By Lemma 2.1(B), there exists a function F ∈ C^2_+(R^2) such that ‖F‖_{C^2(R^2)} ≤ CM and J_x F = P_x for each x ∈ E. In particular, F(x) = P_x(x) = f(x) for each x ∈ E. Thus, ‖f‖_{C^2_+(E)} ≤ CM. This proves conclusion (D). Theorem 3 is proved.

Below are the steps of Algorithm 1.

Step 1. Compute S_1, ..., S_L from E as in Theorem 3.

Step 2. Read f : E → [0,∞).

Step 3. For ℓ = 1, ..., L, compute a number M_ℓ such that M_ℓ has the same order of magnitude as ‖f‖_{C^2_+(S_ℓ)}.

Step 4. Return M := max{M_ℓ : ℓ = 1, ..., L}.

The number M produced in Step 4 has the same order of magnitude as ‖f‖_{C^2_+(E)}, thanks to Theorem 3 and Lemma 2.1. Therefore, Algorithm 1 accomplishes what we claim it does.

We now analyze the complexity of Algorithm 1. By Theorem 3, Step 1 requires no more than CN log N operations and CN storage. Step 3 requires no more than CN operations. Indeed, on one hand, computing each M_ℓ requires no more than C operations, thanks to the discussion in Section 2.2; on the other hand, we need to carry out L computations, with L ≤ CN. Finally, Step 4 requires no more than CN operations. This concludes our discussion of Algorithm 1.

Computing σ^♯

This and the next sections will be devoted to the proof of Theorem 2. To prepare the way, in this section, we introduce the relevant objects and show how they can be computed efficiently. We begin by reviewing some key objects introduced in [9, 10], which we will use to effectively approximate the shapes of σ^♯(x, 16) for x ∈ E. We will be working with C^2(R^2) functions instead of C^2_+(R^2) functions.

4.1 Parameterized approximate linear algebra problems (PALP)

Let N̄ ≥ 1. Let {ξ_1, ..., ξ_N̄} be the standard basis for R^N̄. We recall the following definition from Section 6 of [10].

Definition 4.1.
A parameterized approximate linear algebra problem (PALP for short) is an object of the form

(4.1)  A = [(λ_1, ..., λ_{i_max}), (b_1, ..., b_{i_max}), (ε_1, ..., ε_{i_max})],

where

• each λ_i is a linear functional on P, which we will refer to as a "linear functional";

• each b_i is a linear functional on C^2(E), which we will refer to as a "target functional"; and

• each ε_i ∈ [0,∞), which we will refer to as a "tolerance".

Given a PALP A in the form (4.1), we introduce the following terminology:

• we call i_max the length of A;

• we say A has depth D if each of the target functionals b_i on R^N̄ has depth D with respect to the basis {ξ_1, ..., ξ_N̄} (see Definition 1.1).

Recall Definition 1.1. We assume that every PALP is "efficiently stored," namely, each of the target functionals is stored in its efficient representation. In particular, given a PALP A of the form (4.1) and a target functional b_i of A, we have access to a set of indices {i_1, ..., i_D} ⊂ {1, ..., N} such that b_i is completely determined by its action on {ξ_{i_1}, ..., ξ_{i_D}} ⊂ {ξ_1, ..., ξ_N}. Here, D = depth(b_i). We define

(4.2)  S(b_i) := {x_{i_1}, ..., x_{i_D}} ⊂ E.

Given a PALP of the form (4.1), we define

(4.3)  S(A) := ⋃_{i=1}^{i_max} S(b_i) ⊂ E,

with S(b_i) as in (4.2).

Definition 4.2.
A blob in P is a family ~K = (K_M)_{M≥0} of (possibly empty) convex subsets K_M ⊂ P parameterized by M ∈ [0,∞), such that M < M′ implies K_M ⊆ K_{M′}. We say two blobs ~K = (K_M)_{M≥0} and ~K′ = (K′_M)_{M≥0} are C-equivalent if K_{C^{-1}M} ⊂ K′_M ⊂ K_{CM} for each M ∈ [0,∞).

Let A be a PALP of the form (4.1). For each φ ∈ C^2(E) ≅ R^{#(E)}, we have a blob defined by ~K_φ(A) = (K_φ(A, M))_{M≥0}, where

(4.4)  K_φ(A, M) := {P ∈ P : |λ_i(P) − b_i(φ)| ≤ Mε_i for i = 1, ..., i_max} ⊂ P.

In this paper, we will be mostly interested in the centrally symmetric (called "homogeneous" in [10]) polytope defined by setting φ ≡ 0:

(4.5)  σ(A) := K_0(A, 1).

Note that σ(A) is never empty, since it contains the zero polynomial.

Let E ⊂ R^2 be a finite set with #(E) = N. We assume that E is labeled: E = {x_1, ..., x_N}. We identify C^2(E) ≅ R^N with respect to the standard basis {ξ_1, ..., ξ_N} for R^N.

Definition 4.3.
For each x ∈ R^2 and φ ∈ C^2(E), we define a blob ~Σ_φ(x) = (Σ_φ(x, M))_{M≥0}, where

(4.6)  Σ_φ(x, M) := {P ∈ P : there exists G ∈ C^2(R^2) with ‖G‖_{C^2(R^2)} ≤ M, G|_E = φ, and J_x G = P}.

It is clear from the definition of σ in (2.21) that σ(x, E) = Σ_0(x, 1). Therefore, thanks to Lemma 2.3, we have

(4.7)  C^{-1} · σ^♯(x, 16) ⊂ Σ_0(x, 1) ⊂ C · σ^♯(x, 16) for x ∈ E,

for some universal constant C. We summarize some relevant results from [10].

Lemma 4.1.
Let E ⊂ R be finite. Using at most CN log N operations and CN storage, we cancompute a list of PALPs { A ( x ) : x ∈ E } such that the following hold.(A) There exists a universal constant D such that for each x ∈ E , A ( x ) has length no greater than = dim P and has depth D .(B) For each given x ∈ R and ϕ ∈ C ( E ) , the blobs ~ K ϕ ( A ( x )) as in (4.4) and ~ Σ ϕ ( x ) as in (4.6) are C -equivalent. See Section 11 of [10] for Lemma 4.1(A), and Sections 10, 11, and Lemma 34.3 of [10] for Lemma4.1(B).The main lemma of this section is the following.
Lemma 4.2.
Let E ⊂ R² be given. Let {A(x) : x ∈ E} be as in Lemma 4.1. Recall the definitions of σ and S(A(x)) as in (2.21) and (4.3). Then there exists a universal constant C such that, for each x ∈ E,

C⁻¹ · σ(x, S(A(x))) ⊂ σ♯(x, 16) ⊂ C · σ(x, S(A(x))).

Proof. For centrally symmetric σ, σ′ ⊂ P, we write σ ≈ σ′ if there exists a universal constant C such that C⁻¹ · σ ⊂ σ′ ⊂ C · σ. Thus, we need to show σ(x, S(A(x))) ≈ σ♯(x, 16) for x ∈ E.

Thanks to Lemma 2.3, Lemma 4.1(B) (applied to ϕ ≡ 0), (4.5), and (4.7), we have

(4.8) σ♯(x, 16) ≈ σ(x, E) ≈ K_0(A(x), 1) = σ(A(x)) for x ∈ E.

Therefore, it suffices to show that σ(x, S(A(x))) ≈ σ(A(x)) for x ∈ E. From (4.8) and the definition of σ in (2.21), we see that σ(A(x)) ⊂ C · σ(x, E) ⊂ C · σ(x, S(A(x))). It remains to show that σ(x, S(A(x))) ⊂ C · σ(A(x)).

Let x ∈ E and let P ∈ σ(x, S(A(x))). Then there exists ϕ ∈ C²(R²) such that ‖ϕ‖_{C²(R²)} ≤ 1, ϕ ≡ 0 on S(A(x)), and J_x(ϕ) = P. Note that ϕ|_E ∈ C²(E). We abuse notation and write ϕ in place of ϕ|_E when there is no possibility of confusion.

It is clear from the definition of Σ_ϕ(x, M) in (4.6) that P ∈ Σ_ϕ(x, 1). By Lemma 4.1(B), we have P ∈ K_ϕ(A(x), C) with K_ϕ(A(x), C) as in (4.4). In particular, we have

(4.9) |λ_i(P) − b_i(ϕ)| ≤ Cε_i for i = 1, · · · , L = length(A(x)).

Here, the λ_1, · · · , λ_L, b_1, · · · , b_L, and ε_1, · · · , ε_L, respectively, are the linear functionals, target functionals, and tolerances of A(x). However, by the definition of S(A(x)) in (4.3) and the fact that ϕ ≡ 0 on S(A(x)), we see that (4.9) simplifies to |λ_i(P)| ≤ Cε_i for i = 1, · · · , L = length(A(x)). This is equivalent to the statement P ∈ K_0(A(x), C) = C · σ(A(x)). Lemma 4.2 is proved.

5 C-optimal interpolant

Let E ⊂ R² be a finite set.
We fix E throughout the rest of the paper.

5.1 Calderón-Zygmund squares

Let σ̃ ⊂ R² be a symmetric convex set. We define

(5.1) diam σ̃ := 2 · sup_{u ∈ R², |u| = 1} p_σ̃(u),

where p_σ̃(u) is a gauge function given by

(5.2) p_σ̃(u) := sup{r ≥ 0 : ru ∈ σ̃}.

Let {A(x) : x ∈ E} be as in Lemma 4.1, and let σ(A(x)) ⊂ P be as in (4.5). Note that for each x ∈ E, σ(A(x)) ⊂ P is at most two-dimensional. Indeed, thanks to Lemma 4.1(B) (with ϕ ≡ 0), any P ∈ σ(A(x)), x ∈ E, must have P(x) = 0. Thus, for each x ∈ E, we can identify σ(A(x)) as a subset of R² via the map

(5.3) σ(A(x)) ∋ P ↦ (∇P · e_1, ∇P · e_2),

where {e_1, e_2} is the chosen orthonormal system.

Let A_1, A_2 > 0 be sufficiently large dyadic numbers. Let {A(x) : x ∈ E} be as in Lemma 4.1. We say a dyadic square Q is OK if the following hold.
• Either #(E ∩ 5Q) ≤ 1, or diam σ(A(x)) ≥ A_1 δ_Q for all x ∈ E ∩ 5Q. Here and below, diam(σ(A(x))) is defined using the formula (5.1) via the identification (5.3).
• δ_Q ≤ A_2⁻¹. Definition 5.1.
We write Λ to denote the collection of dyadic squares Q such that both of thefollowing hold.(A) Q is OK (see above).(B) Suppose δ Q < A − , then Q + is not OK. Remark . Note that there are two differences in the definition of Λ than those in [11, 12].• We use in the definition of Λ instead of using . This has the advantage that + ⊂ Q = .• We do not require diam σ ( A ( x )) ≥ A δ Q for x ∈ E ∩ when ( E ∩ ) = .We will provide explanation when these differences change the structure of the analysis. Otherwise,we will simply add the word “variant” to our reference to results in [11, 12]. Lemma 5.1. Λ enjoys the following properties.(A) Λ forms a cover of R with good geometry:(A1) R = S Q ∈ Λ Q ; A2) If
Q, Q′ ∈ Λ with (1 + c_G)Q ∩ (1 + c_G)Q′ ≠ ∅, then C⁻¹ δ_Q ≤ δ_{Q′} ≤ C δ_Q; and as a consequence, for each Q ∈ Λ, #{Q′ ∈ Λ : (1 + c_G)Q′ ∩ (1 + c_G)Q ≠ ∅} ≤ C′. Here,
C, C ′ are universal constants, and c G is a sufficiently small constant, say .(B) Let Q ∈ Λ . Then there exists ϕ ∈ C ( R ) such that (5.4) ρ ( E ∩ ) ⊂ { ( t, ϕ ( t )) : t ∈ R } , where ρ is some rotation about the origin depending only on Q . Moreover, ϕ satisfies theestimates (5.5) (cid:12)(cid:12)(cid:12)(cid:12) d m dt m ϕ ( t ) (cid:12)(cid:12)(cid:12)(cid:12) ≤ CA − δ − mQ for m =
1, 2, with A as in Definition 5.1. Furthermore, suppose for some x ∈ E ∩ and a unit vector u ,we have diam σ ( A ( x )) = p σ ( A ( x )) ( u ) with diam σ ( A ( x )) and p σ ( A ( x )) ( u ) as in (5.1) and (5.2) . Then we can take ϕ to satisfy thefollowing property:(B1) We can take ρ in (5.4) to be the rotation specified by u → e ;(B2) We can take x = (
0, ϕ ( )) .As a consequence, there exists a C -diffeomorphism Φ : R → R defined by Φ ◦ ρ ( t , t ) = ( t , t − ϕ ( t )) where ρ is the rotation as in (5.4) ,such that Φ ( E ∩ ) ⊂ R × { t = } and | ∇ m Φ | , (cid:12)(cid:12) ∇ m Φ − (cid:12)(cid:12) ≤ CA − δ − mQ for m =
1, 2 , with A as in Definition 5.1.Remark . Lemma 5.1(A) can be found in Section 21 of [10]. See also Lemma 5.1 of [12]. Lemma5.1(B) follows from the proofs of Lemma 5.4 and 5.5 with a minor modification: For Q ∈ Λ with ( E ∩ ) ≤ , we can simply take ϕ to be a suitable constant function on R .We recall the following results from [10]. Lemma 5.2.
After one-time work using at most CN log N operations and CN storage, we can perform each of the following tasks using at most C log N operations.

(A) (Section 26 of [10]) Given a point x ∈ R², we compute a list Λ(x) := {Q ∈ Λ : (1 + c_G)Q ∋ x}.

(B) (Section 27 of [10]) Given a dyadic square Q ⊂ R², we can compute Empty(Q), with Empty(Q) = True if E ∩ = ∅, and Empty(Q) = False otherwise.

(C) (Section 27 of [10]) Given a dyadic square Q ⊂ R² with E ∩ ≠ ∅, we can compute Rep(Q) ∈ E ∩ , with the property that Rep(Q) ∈ E ∩ 5Q if E ∩ 5Q ≠ ∅. Definition 5.2.
We define the following subcollections of Λ:

(5.6) Λ♯♯ := {Q ∈ Λ♯ : E ∩ (1 + c_G)Q ≠ ∅};
(5.7) Λ♯ := {Q ∈ Λ : E ∩ 5Q ≠ ∅};
(5.8) Λ_empty := {Q ∈ Λ \ Λ♯ : δ_Q < A_2⁻¹}

with A_2 as in Definition 5.1. We can think of Λ♯♯ as the collection of squares with the most "concentrated" information, Λ♯ as the largest collection of squares that contain information while still having good local geometry, and Λ_empty as the collection of squares that do not contain information in their five-time dilation, but are sufficiently small to detect nearby accumulation of points in E. We begin with the analysis of Λ_empty and Λ♯. Lemma 5.3.
After one-time work using at most CN log N operations and CN storage, we canperform the following task using at most C log N operations: Given Q ∈ Λ , we can decide if Q ∈ Λ ♯ , Q ∈ Λ empty , or Q ∈ Λ \ ( Λ ♯ ∪ Λ empty ) .Proof. This is a direct application of Lemma 5.2(B,C) to Q .The next lemma tells us how to relay information to squares in Λ empty . Lemma 5.4.
We can compute a map

(5.9) µ : Λ_empty → Λ♯

that satisfies

(5.10) (1 + c_G)µ(Q) ∩ ≠ ∅ for Q ∈ Λ_empty.

The one-time work uses at most CN log N operations and CN storage. After that, we can answer queries using at most C log N operations. A query consists of a square Q ∈ Λ_empty, and the response to the query is another square µ(Q) that satisfies (5.10).

Proof. Suppose Q ∈ Λ_empty. Then we have E ∩ 5Q⁺ ≠ ∅, where Q⁺ is the dyadic parent of Q. By the geometry of Λ, this set is contained in the dilation of Q appearing in Lemma 5.2(C); hence that dilation meets E. Therefore, the map Rep in Lemma 5.2(C) is defined for Q. We set

(5.11) x := Rep(Q) ∈ E,

with Rep as in Lemma 5.2. Note that x ∉ 5Q, since Q ∈ Λ_empty.

Let Λ(x) ⊂ Λ be as in Lemma 5.2(A). Let Q′ ∈ Λ(x). By the defining property of Λ(x) and the fact that x ∈ E, we have Q′ ∈ Λ♯. Set µ(Q) := Q′ ∈ Λ♯. By the previous comment, we have

(5.12) (1 + c_G)µ(Q) ∋ x.

Hence (1 + c_G)µ(Q) ∩ ≠ ∅, and (5.10) is satisfied.

By Lemma 5.2(A,C), the tasks Λ(·) and Rep(·) require at most C log N operations, after one-time work using at most CN log N operations and CN storage. Therefore, computing µ(Q) requires at most C log N operations, after one-time work using at most CN log N operations and CN storage. This proves Lemma 5.4. Lemma 5.5.
After one-time work using at most CN log N operations and CN storage, we can perform the following task using at most C log N operations: Given Q ∈ Λ♯, compute a pair of unit vectors u_Q, u⊥_Q ∈ R², such that the following hold.

(A) u_Q is orthogonal to u⊥_Q, and the orthogonal system [u⊥_Q, u_Q] has the same orientation as [e_1, e_2].

(B) Let ρ be the rotation about the origin specified by u_Q → e_2; then there exists a function ϕ ∈ C²(R) that satisfies (5.4) and (5.5) with this particular ρ.

Proof. Fix Q ∈ Λ♯. This means that E ∩ 5Q ≠ ∅. In particular, Rep(Q) is defined, and by Lemma 5.2(C), x := Rep(Q) ∈ E ∩ 5Q. Computing x requires at most C log N operations, after one-time work using at most CN log N operations and CN storage.

Let A(x) be as in Lemma 4.1, and let σ(A(x)) be as in (4.5). By Lemma 4.1(B) (with ϕ ≡ 0), any P ∈ σ(A(x)) must satisfy P(x) = 0. By Lemma 4.1(A) and the definitions (4.4), (4.5) of σ(A(x)), we see that σ(A(x)) is a two-dimensional parallelogram in P centered at the zero polynomial. Therefore, we have diam σ(A(x)) = length(∆), where diam is defined in (5.1) and ∆ is the longer diagonal of σ(A(x)).

Set u_Q to be a unit vector parallel to ∆. Lemma 5.5(B) then follows from Lemma 5.1(B). We compute another vector u⊥_Q such that {u_Q, u⊥_Q} satisfies Lemma 5.5(A). Computing {u_Q, u⊥_Q} from σ(A(x)) uses elementary linear algebra, and requires at most C operations. Lemma 5.5 is proved. Lemma 5.6.
After one-time work using at most CN log N operations and CN storage, we canperform the following task using at most C log N operations: Given Q ∈ Λ , we can compute a point x ♯ Q ∈ Q such that (5.13) dist (cid:16) x ♯ Q , E (cid:17) ≥ c δ Q for some universal constant c ≥ .Proof. Let Q ∈ Λ be given.Suppose Empty ( Q ) = True , with
Empty(·) as in Lemma 5.2(B). We set x♯_Q := center(Q). It is clear that x♯_Q ∈ Q and (5.13) holds.

Suppose Empty(Q) = False. Let x := Rep(Q) ∈ E. Suppose x ∉ 5Q; then E ∩ 5Q = ∅ by Lemma 5.2(C). Again, we set x♯_Q := center(Q). It is clear that x♯_Q ∈ Q and (5.13) holds.

Suppose x ∈ 5Q. This means that Q ∈ Λ♯ with Λ♯ as in (5.7). Let u_Q be as in Lemma 5.5. By Lemma 5.1(B), we have E ∩ 5Q ⊂ {(t, ϕ(t)) : t ∈ R} up to the rotation u_Q → e_2, and the function ϕ satisfies |d^m/dt^m ϕ(t)| ≤ C A⁻¹ δ_Q^(1−m) for m =
1, 2 , with A as in Definition 5.1. Therefore,by the defining property of u Q in Lemma 5.5, we have E ∩ ⊂ (cid:10) y ∈ R : | ( y − x ) · u Q | ≤ CA − | y − x | (cid:11) =: Z ( x ) . Suppose dist ( center ( Q ) , Z ( x )) ≥ δ Q /1024 . We set x ♯ Q := center ( Q ) . In this case, it is clear that x ♯ Q ∈ Q and (5.13) holds.Suppose dist ( center ( Q ) , Z ( x )) < δ Q /1024 . We set x ♯ Q := center ( Q ) + δ Q · u Q . It is clear that x ♯ Q ∈ Q . For sufficiently large A , we also have dist (cid:16) x ♯ Q , Z ( x ) (cid:17) ≥ cδ Q for someconstant c depending only on A . Thus, (5.13) holds.After one-time work using at most CN log N operations and CN storage, the procedure Empty ( Q ) requires at most C log N operations by Lemma 5.2(B); the procedure Rep ( Q ) requires at most C log N operations by Lemma 5.2(C); computing the vector u Q requires at most C log N operations;and computing the distance between center ( Q ) and Z ( x ) is a routine linear algebra problem, andrequires at most C operations.Lemma 5.6 is proved.We now turn our attention to Λ ♯♯ as in (5.6). Lemma 5.7.
Using at most CN log N operations and CN storage, we can compute the list Λ ♯♯ asin (5.6) .Proof. This is a direct application of Lemma 5.2(A) to each x ∈ E .The next lemma states that we can efficiently sort the data contained in squares in Λ ♯♯ . Lemma 5.8.
Using at most CN log N operations and CN storage, we can compute the following. For each Q ∈ Λ♯♯ with Λ♯♯ as in (5.6), we can compute a sorted list of numbers

Proj_{u⊥_Q}(E ∩ (1 + c_G)Q − Rep(Q)) ⊂ R,

where u⊥_Q is as in Lemma 5.5, Proj_{u⊥_Q} is the orthogonal projection onto R·u⊥_Q, and Rep(Q) is as in Lemma 5.2(C).

Proof. By the bounded intersection property in Lemma 5.1(A), we have

(5.14) #(Λ♯♯) ≤ CN.
From the definitions of Λ ♯♯ and Λ ♯ in (5.6) and (5.7), we see that Λ ♯♯ ⊂ Λ ♯ . Therefore, we cancompute Rep ( Q ) and u ⊥ Q for each Q ∈ Λ ♯♯ using at most C log N operations, by Lemma 5.2(B) andLemma 5.5.Recall from Lemma 5.7 that we can compute the list Λ ♯♯ by computing each Λ ( x ) for x ∈ E ,with Λ ( x ) as in Lemma 5.2(A). During this procedure, we can store the information ( + c G ) Q ∋ x for Q ∈ Λ ( x ) .By the bounded intersection property in Lemma 5.1(A), we have(5.15) X Q ∈ Λ ♯♯ ( E ∩ ( + c G ) Q ) ≤ CN.
By Lemma 5.2(A) and (5.15), we can compute the list {E ∩ (1 + c_G)Q : Q ∈ Λ♯♯} using at most CN log N operations and CN storage. Then, by Lemma 5.2(C), Lemma 5.5, and (5.14), we can compute the unsorted list

(5.16) Proj_{u⊥_Q}(E ∩ (1 + c_G)Q − Rep(Q)) for each Q ∈ Λ♯♯

using at most CN log N operations and CN storage. For each Q ∈ Λ♯♯, we can sort the list Proj_{u⊥_Q}(E ∩ (1 + c_G)Q − Rep(Q)) using at most CN_Q log N_Q operations, where N_Q := #(E ∩ (1 + c_G)Q). By (5.15), we can sort all the lists of the form (5.16) associated with each Q ∈ Λ♯♯ using at most CN log N operations. Lemma 5.8 is proved.

The next lemma shows how to relay local information to the point x♯_Q. Lemma 5.9.
Let Q ∈ Λ ♯ . Let x ♯ Q be as in Lemma 5.6. Let x ∈ E ∩ . Let A ( x ) be as in Lemma4.1. Let S ( A ( x )) be as in (4.3) . Then (5.17) σ ( x ♯ Q , S ( A ( x ))) ⊂ C · σ ♯ ( x ♯ Q , 16 ) . Proof.
Fix x as in the hypothesis. By our choice of x ♯ Q in Lemma 5.6, we have(5.18) (cid:12)(cid:12)(cid:12) x ♯ Q − x (cid:12)(cid:12)(cid:12) ≥ Cδ Q . Let P ∈ σ ( x ♯ Q , S ( A ( x ))) . By the definition of σ , there exists ϕ ∈ C ( R ) with k ϕ k C ( R ) ≤ , ϕ | S ( A ( x )) = , and J x ♯ Q ϕ = P . Set P := J x ϕ . Then P ∈ σ ( x, S ( A ( x ))) . x ∈ E , by Lemma 4.2, we have P ∈ σ ♯ ( x, 16 ) . Let S ⊂ E with ( S ) ≤ . By the definition of σ ♯ in (2.21) and Taylor’s theorem, there existsa Whitney field ~ P = ( P, ( P y ) y ∈ S ) ∈ W ( S ∪ { x } ) , with k ~ P k W ( S ∪ { x } ) ≤ C and P y ( y ) = for y ∈ S .Consider another Whitney field ~ P = ( P , ( P y ) y ∈ S ) ∈ W ( S ∪ { x ♯ Q } ) defined by replacing P by P in ~ P . By the classical Whitney Extension Theorem for finite sets, it suffices to show that ~ P satisfies(5.19) P y ( y ) = for y ∈ S, and(5.20) k ~ P k W ( S ∪ { x ♯ Q } ) ≤ C. Note that (5.19) is obvious by construction.We turn to (5.20).Since P = J x ♯ Q ϕ and P = J x ϕ , Taylor’s theorem implies(5.21) (cid:12)(cid:12)(cid:12) ∂ α ( P − P )( x ♯ Q ) (cid:12)(cid:12)(cid:12) , | ∂ α ( P − P )( x ) | ≤ C (cid:12)(cid:12)(cid:12) x − x ♯ Q (cid:12)(cid:12)(cid:12) − | α | for | α | ≤ Since the Whitney field ~ P = ( P, ( P y ) y ∈ S ) satisfies k ~ P k W ( S ∪ { x } ) ≤ C , we have(5.22) k ( P y ) y ∈ S k W ( S ) ≤ C, and(5.23) | ∂ α ( P − P y )( x ) | , | ∂ α ( P − P y )( y ) | ≤ C | x − y | − | α | for | α | ≤
2, y ∈ S.

Applying the triangle inequality to (5.21) and (5.23), and using (5.18), we see that

(5.24) |∂^α(P − P_y)(x♯_Q)|, |∂^α(P − P_y)(y)| ≤ C|x♯_Q − y|^(2−|α|) for |α| ≤ 2.

Moreover, since P ∈ σ(x♯_Q, S(A(x))), we have

(5.25) |∂^α P(x♯_Q)| ≤ 1 for |α| ≤ 2.

Then, (5.20) follows from (5.22), (5.24), and (5.25). Lemma 5.9 is proved.

Let Q ∈ Λ♯ with Λ♯ as in (5.7). Let A(x), x ∈ E, be as in Lemma 4.1. Let S(A(x)) be as in (4.3). Let Rep(Q) be as in Lemma 5.2(C). Let x♯_Q be as in Lemma 5.6. We set

(5.26) S♯(Q) := S(A(Rep(Q))) ∪ {Rep(Q)} ∪ {x♯_Q}.

Note that x♯_Q is not a point in E.

5.3 Transition jets

In this section, we want to construct a map T_Q : C²_+(E) × [0, ∞) → P of bounded depth, such that T_Q(f, M) ∈ Γ♯_+(x♯_Q, 16, CM, f) for all (f, M) ∈ C²_+(E) × [0, ∞) with ‖f‖_{C²_+(E)} ≤ M. We will explain the importance of Γ♯_+(x♯_Q, 16, CM, f) in Remark 5.5 towards the end of the section.

Let S ⊂ E. As in (2.3) and (2.4), we consider the following functions, depending on the choice of S:

(5.27) Q♯ : W²(S) → [0, ∞), ~P = (P_x)_{x∈S} ↦ Σ_{x∈S, |α|≤2} |∂^α P_x(x)|² + Σ_{x,y∈S, x≠y, |α|≤2} |∂^α(P_x − P_y)(x)|² |x − y|^(2(|α|−2)),

and

(5.28) M♯ : W²_+(S) → [0, ∞], (P_x)_{x∈S} ↦ Σ_{x∈S} |∇P_x|²/P_x(x) if P_x(x) ≥ 0 for each x ∈ S, and ∞ if there exists x ∈ S such that P_x(x) < 0.

We adopt the conventions that 0/0 = 0 and a/0 = ∞ for a > 0.

For the rest of the section, we fix Q ∈ Λ♯, with Λ♯ as in (5.7). Let x♯_Q be as in Lemma 5.6. Let S♯(Q) be as in (5.26). Recall from (5.26) that Rep(Q) ∈ S♯(Q), with Rep as in Lemma 5.2(C). Let f ∈ C²_+(E) be given.
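The objective Q♯ + M♯ is concrete enough to evaluate directly for a candidate Whitney field on a small set S. The sketch below is ours, not the paper's: the jet encoding, all function names, the squared terms, and the weight |x − y|^(2(|α|−2)) are assumptions about the garbled normalization of (5.27)–(5.28).

```python
import math

# A two-jet is a dict of Taylor coefficients at its base point x:
# jet[(a, b)] = ∂^(a,b) P(x) for a + b <= 2; missing keys mean 0.
ALPHAS = [(a, b) for a in range(3) for b in range(3) if a + b <= 2]

def deriv_at(jet, base, alpha, z):
    """∂^α P(z) for the quadratic P whose two-jet at `base` is `jet`."""
    a, b = alpha
    total = 0.0
    for (c, d), coeff in jet.items():
        if c >= a and d >= b:
            e, f = c - a, d - b
            total += (coeff * (z[0] - base[0])**e * (z[1] - base[1])**f
                      / (math.factorial(e) * math.factorial(f)))
    return total

def Q_sharp(field):
    """Quadratic part (5.27): pointwise jet sizes plus pairwise
    incompatibilities, with the weight |x-y|^(2(|α|-2)) assumed."""
    total = 0.0
    for x, Px in field.items():
        for alpha in ALPHAS:
            total += Px.get(alpha, 0.0) ** 2
    for x, Px in field.items():
        for y, Py in field.items():
            if x == y:
                continue
            d = math.hypot(x[0] - y[0], x[1] - y[1])
            for alpha in ALPHAS:
                diff = Px.get(alpha, 0.0) - deriv_at(Py, y, alpha, x)
                total += diff ** 2 * d ** (2 * (sum(alpha) - 2))
    return total

def M_sharp(field):
    """Nonnegativity penalty (5.28): sum of |∇P_x|²/P_x(x), with the
    conventions 0/0 = 0 and a/0 = ∞ for a > 0; ∞ if some P_x(x) < 0."""
    total = 0.0
    for x, Px in field.items():
        v = Px.get((0, 0), 0.0)
        g = Px.get((1, 0), 0.0) ** 2 + Px.get((0, 1), 0.0) ** 2
        if v < 0.0:
            return math.inf
        if v == 0.0:
            if g > 0.0:
                return math.inf
            continue
        total += g / v
    return total
```

Minimizing `Q_sharp(field) + M_sharp(field)` over the affine constraints is the convex program referred to in (M0)/(M1).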
We define

(5.29) A⁰_f := {~P ∈ W²_+(S♯(Q)) : (~P, x♯_Q) ≡ 0 and (~P, x)(x) = f(x) for x ∈ S♯(Q) ∩ E}, and
A¹_f := {~P ∈ W²_+(S♯(Q) ∩ E) : (~P, x)(x) = f(x) for x ∈ S♯(Q) ∩ E}.

We note that A⁰_f and A¹_f are affine subspaces of W²(S♯(Q)) and W²(S♯(Q) ∩ E), respectively. They depend only on f|_{S♯(Q) ∩ E}.

Consider the following minimization problems.

(M0) Let S = S♯(Q) in (5.27) and (5.28). Minimize Q♯ + M♯ over A⁰_f.
(M1) Let S = S♯(Q) ∩ E in (5.27) and (5.28). Minimize Q♯ + M♯ over A¹_f.

For ⋆ =
0, 1 , we say a Whitney field ~ P ∈ A ⋆ f is an approximate minimizer of (M ⋆ ) if• ( Q ♯ + M ♯ )( ~ P ) ≤ C · inf (cid:10) ( Q ♯ + M ♯ )( ~ P ′ ) : ~ P ′ ∈ A ⋆ f (cid:11) for some universal constant C . Remark . Recall from Section 2.2 that both (M0) and (M1) can be reformulated as convexquadratic programming problems with affine constraint, and are efficiently solvable [2]. Thus, wecan solve for an approximate minimizer of (M ⋆ ), ⋆ =
0, 1, using at most C operations, since #(S♯(Q)) is universally bounded. We call the approximate minimizers for (M0) and (M1) obtained this way ~P♯_0 and ~P♯_1. Note that ~P♯_0 and ~P♯_1, respectively, are uniquely determined by A⁰_f and A¹_f.

Lemma 5.10. Let Q ∈ Λ♯. Let x♯_Q be as in Lemma 5.6. Let (f, M) ∈ C²_+(E) × [0, ∞) with ‖f‖_{C²_+(E)} ≤ M. Let ~P = (P_x)_{x ∈ S♯(Q) ∩ E} be an approximate minimizer of (M1) above. Let P_{Rep(Q)} be the polynomial associated with the point Rep(Q), i.e., P_{Rep(Q)} = (~P, Rep(Q)), with Rep as in Lemma 5.2(C). Let T_w^{Rep(Q)} be the Whitney extension operator associated with the singleton {Rep(Q)} as in Lemma 2.1(B). Then

J_{x♯_Q} ∘ T_w^{Rep(Q)}(P_{Rep(Q)}) ∈ Γ_+(x♯_Q, S♯(Q) ∩ E, CM, f).

Proof.
Let ~ P be as in the hypothesis. Let P := J x ♯ Q ◦ T Rep ( Q ) w ( P Rep ( Q ) ) . We adjoin P to ~ P to form ~ P := (cid:16) P , ( P x ) x ∈ S ♯ ( Q ) ∩ E (cid:17) ∈ W ( S ♯ ( Q )) . Thanks to Lemma 2.1, it suffices to show that ~ P ∈ W + ( S ♯ ( Q )) and k ~ P k W + ( S ♯ ( Q )) ≤ CM .By Lemma 2.1(B), we see that T Rep ( Q ) w ( P Rep ( Q ) ) ∈ C + ( R ) with norm k T Rep ( Q ) w ( P Rep ( Q ) ) k C ( R ) ≤ CM . Therefore,(5.30) (cid:12)(cid:12)(cid:12) ∂ α P ( x ♯ Q ) (cid:12)(cid:12)(cid:12) ≤ CM for | α | ≤ and | ∇ P | ≤ q CMP ( x ♯ Q ) . Thus, ~ P ∈ W + ( S ♯ ( Q )) .Since ~ P is an approximate minimizer of (M1) and k f k C + ( E ) ≤ M , we have(5.31) k ~ P k W + ( S ♯ ( Q ) ∩ E ) ≤ CM.
For x ∈ S ♯ ( Q ) ∩ E , we have | ∂ α ( P x − P )( x ) | ≤ (cid:12)(cid:12)(cid:12) ∂ α ( P x − P Rep ( Q ) )( x ) (cid:12)(cid:12)(cid:12) + (cid:12)(cid:12)(cid:12)(cid:12) ∂ α ( P Rep ( Q ) − J x ♯ Q ◦ T Rep ( Q ) w ( P Rep ( Q ) ))( x ) (cid:12)(cid:12)(cid:12)(cid:12) . Using (5.31) to estimate the first term and Taylor’s theorem to estimate the second, we have(5.32) | ∂ α ( P x − P )( x ) | ≤ CM (cid:16) | x − Rep ( Q ) | + (cid:12)(cid:12)(cid:12) x ♯ Q − Rep ( Q ) (cid:12)(cid:12)(cid:12)(cid:17) − | α | ≤ CM (cid:12)(cid:12)(cid:12) x − x ♯ Q (cid:12)(cid:12)(cid:12) − | α | . For the last inequality, we use the fact that dist (cid:16) x ♯ Q , E (cid:17) ≥ cδ Q , thanks to Lemma 5.6.Applying Taylor’s theorem to (5.32), we have(5.33) (cid:12)(cid:12)(cid:12) ∂ α ( P x − P )( x ♯ Q ) (cid:12)(cid:12)(cid:12) ≤ CM (cid:12)(cid:12)(cid:12) x − x ♯ Q (cid:12)(cid:12)(cid:12) − | α | . Combining (5.30)–(5.33), we see that k ~ P k W + ( S ♯ ( Q )) ≤ CM . Lemma 5.10 is proved. Definition 5.3.
Let Q ∈ Λ♯. Let x♯_Q be as in Lemma 5.6. We define T_Q : C²_+(E) × [0, ∞) → P by the following rule. Let (f, M) ∈ C²_+(E) × [0, ∞) be given, and let (M0) and (M1) be as above. Let ~P♯_0 and ~P♯_1 be as in Remark 5.3.

(TQ-0) Suppose ~P♯_0 satisfies (Q♯ + M♯)(~P♯_0) ≤ C_T M², for some large universal constant C_T. Then we set T_Q(f, M) ≡ 0.

(TQ-1) Otherwise, we set T_Q(f, M) := J_{x♯_Q} ∘ T_w^{Rep(Q)}(P). Here, P is the polynomial in ~P♯_1 associated with the point Rep(Q), i.e., P := (~P♯_1, Rep(Q)); and T_w^{Rep(Q)} is the Whitney extension operator associated with the singleton {Rep(Q)} as in Lemma 2.1(B).

It is clear that T_Q has bounded depth, since ~P♯_0 and ~P♯_1 depend only on f|_{S♯(Q) ∩ E}.

Remark 5.4. Given Q ∈ Λ♯ with Λ♯ as in (5.7), x♯_Q as in Lemma 5.6, S♯(Q) as in (5.26), and (f, M) ∈ C²_+(E) × [0, ∞) with ‖f‖_{C²_+(E)} ≤ M, computing T_Q(f, M) from the data above amounts to solving for approximate minimizers of (M0) and (M1). Thus, by Remark 5.3, we can compute T_Q(f, M) from the data above using at most C operations.

Recall the following perturbation lemma from [12].

Lemma 5.11 (variant of Lemmas 5.7 and 7.3 of [12]). Let E ⊂ R² be finite. Let Q ∈ Λ♯. Let x♯_Q be as in Lemma 5.6. Let f ∈ C²_+(E) be given. Suppose Γ♯_+(x♯_Q, 16, M, f) ≠ ∅. The following are true.

(A) There exists a number B > 0 exceeding a large universal constant such that the following holds. Suppose f(x) ≥ B M δ_Q² for each x ∈ E ∩ 5Q. Then

Γ♯_+(x♯_Q, 16, M, f) + M · σ♯(x♯_Q, 16) ⊂ Γ♯_+(x♯_Q, 16, CM, f),

for some universal constant C.

(B) Let A > 0. Suppose f(x) ≤ A M δ_Q² for some x ∈ E ∩ 5Q. Then 0 ∈ Γ♯_+(x♯_Q, 16, A′M, f). Here, A′ depends only on A.

The main lemma of this section is the following.
Lemma 5.12.
Let Q ∈ Λ ♯ with Λ ♯ as in (5.7) . Let x ♯ Q be as in Lemma 5.6. Let T Q be as inDefinition 5.3. Let ( f, M ) ∈ C + ( E ) × [ ∞ ) with k f k C + ( E ) ≤ M . Then T Q ( f, M ) ∈ Γ ♯ + ( x ♯ Q , 16, CM, f ) . Proof.
Since ‖f‖_{C²_+(E)} ≤ M, we have Γ♯_+(x♯_Q, 16, CM, f) ≠ ∅. Therefore, the hypotheses of Lemma 5.11 are satisfied. Recall Definition 5.3.

Suppose T_Q(f, M) is defined in terms of (TQ-0). By Lemma 2.1, there exists F ∈ C²_+(R²) with ‖F‖_{C²(R²)} ≤ CM, F|_{S♯(Q) ∩ E} = f, and J_{x♯_Q} F ≡ 0. Recall from Lemma 5.2(C) and (5.26) that Rep(Q) ∈ S♯(Q) ∩ 5Q. Therefore, by Taylor's theorem, we have f(Rep(Q)) = F(Rep(Q)) ≤ CMδ_Q².
By Lemma 5.11(B), we have T_Q(f, M) ≡ 0 ∈ Γ♯_+(x♯_Q, 16, CM, f).

Suppose T_Q(f, M) is defined in terms of (TQ-1). For sufficiently large C_T, Taylor's theorem implies, with B as in Lemma 5.11, f(x) ≥ B M δ_Q² for x ∈ E ∩ 5Q. Thus, the hypothesis of Lemma 5.11(A) is satisfied.

Since ‖f‖_{C²_+(E)} ≤ M, there exists F̂ ∈ C²_+(R²) with ‖F̂‖_{C²(R²)} ≤ CM, F̂|_E = f, and J_{x♯_Q} F̂ ∈ Γ_+(x♯_Q, E, CM, f). By Lemma 5.10, we have T_Q(f, M) ∈ Γ_+(x♯_Q, S♯(Q) ∩ E, CM, f). Therefore, by Lemma 5.9, the definition of S♯(Q) in (5.26), and the definition of σ in (2.21), we have

J_{x♯_Q} F̂ − T_Q(f, M) ∈ CM · σ(x♯_Q, S♯(Q) ∩ E) ⊂ C′M · σ♯(x♯_Q, 16).

Thus, by Lemma 5.11(A) and the trivial inclusion Γ_+(x♯_Q, E, M, f) ⊂ Γ♯_+(x♯_Q, 16, M, f), we have

T_Q(f, M) ∈ J_{x♯_Q} F̂ + CM · σ♯(x♯_Q, 16) ⊂ Γ♯_+(x♯_Q, 16, CM, f) + CM · σ♯(x♯_Q, 16) ⊂ Γ♯_+(x♯_Q, 16, C′M, f).

Lemma 5.12 is proved.
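The two-case rule (TQ-0)/(TQ-1) of Definition 5.3 is straightforward to express in code. In the sketch below every name is ours, the threshold is supplied by the caller, and a plain Taylor re-expansion about x♯_Q stands in for J_{x♯_Q} ∘ T_w^{Rep(Q)} (it ignores the nonnegativity correction built into Lemma 2.1(B)).

```python
import math

# Two-jets are dicts alpha -> coefficient, alpha = (a, b), a + b <= 2.
ALPHAS = [(a, b) for a in range(3) for b in range(3) if a + b <= 2]

def taylor_shift(jet, old_base, new_base):
    """Re-expand a quadratic two-jet about a new base point."""
    dx = new_base[0] - old_base[0]
    dy = new_base[1] - old_base[1]
    out = {}
    for (a, b) in ALPHAS:
        s = 0.0
        for (c, d), coeff in jet.items():
            if c >= a and d >= b:
                e, f = c - a, d - b
                s += coeff * dx**e * dy**f / (math.factorial(e) * math.factorial(f))
        out[(a, b)] = s
    return out

def transition_jet(score_m0, threshold, jet_at_rep, rep, x_sharp):
    # (TQ-0): the fully pinned problem (M0) already has a small value,
    # so the zero two-jet is an admissible transition jet.
    if score_m0 <= threshold:
        return {alpha: 0.0 for alpha in ALPHAS}
    # (TQ-1): otherwise transport the (M1) minimizer's jet at Rep(Q)
    # to x♯_Q; a plain Taylor shift stands in for the Whitney operator.
    return taylor_shift(jet_at_rep, rep, x_sharp)
```

The bounded depth of T_Q is visible here: the output depends only on the one input jet and the scalar score.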
Remark . We will not use Lemma 5.12 explicitly in this paper. However, jets in Γ ♯ + ( x ♯ Q , 16, M, f ) are crucial for the following reason:(5.34) (Lemma 5.3 of [12]) Suppose Q, Q ′ ∈ Λ , x ♯ Q and x ♯ Q ′ as in Lemma 5.6, P ∈ Γ ♯ + ( x ♯ Q , 16, M, f ) and P ′ ∈ Γ ♯ + ( x ♯ Q ′ , 16, M, f ) , then (cid:12)(cid:12)(cid:12) ∂ α ( P − P ′ )( x ♯ Q ) (cid:12)(cid:12)(cid:12) , (cid:12)(cid:12)(cid:12) ∂ α ( P − P ′ )( x ♯ Q ′ ) (cid:12)(cid:12)(cid:12) ≤ CM (cid:16) δ Q + δ Q ′ + (cid:12)(cid:12)(cid:12) x ♯ Q − x ♯ Q ′ (cid:12)(cid:12)(cid:12)(cid:17) − | α | for | α | ≤ We can then use (5.34) to control the derivatives when we patch together local extensions. See theproof of Theorem 1 in [11].
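The estimate (5.34) can be read as a numerical compatibility check between candidate jets at two points: patching local extensions is safe when the score below stays bounded by CM. The function and the restriction to |α| ≤ 1 are our assumptions about the garbled display, purely for illustration.

```python
import math

def deriv_at(jet, base, alpha, z):
    """∂^α P(z) for the quadratic P whose two-jet at `base` is `jet`."""
    a, b = alpha
    total = 0.0
    for (c, d), coeff in jet.items():
        if c >= a and d >= b:
            e, f = c - a, d - b
            total += (coeff * (z[0] - base[0])**e * (z[1] - base[1])**f
                      / (math.factorial(e) * math.factorial(f)))
    return total

def compat_score(jet1, x1, delta1, jet2, x2, delta2):
    """Largest ratio |∂^α(P − P′)(x)| / (δ_Q + δ_Q′ + |x1 − x2|)^(2−|α|)
    over |α| <= 1 and both base points; (5.34) bounds this by CM for
    jets taken from Γ♯_+ at the respective points."""
    gap = delta1 + delta2 + math.hypot(x1[0] - x2[0], x1[1] - x2[1])
    score = 0.0
    for alpha in [(0, 0), (1, 0), (0, 1)]:
        denom = gap ** (2 - sum(alpha))
        for z in (x1, x2):
            diff = abs(deriv_at(jet1, x1, alpha, z) - deriv_at(jet2, x2, alpha, z))
            score = max(score, diff / denom)
    return score
```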
We write P, P⁺, respectively, to denote the collections of single-variable polynomials of degree no greater than one, two. We write J_t, J⁺_t, respectively, to denote the one-jet, two-jet, of a single-variable function at t ∈ R.

We recall the following results proven in [12].

Theorem 4.A. Let E ⊂ R be a finite set with #(E) = N. We think of C²_+(E) ≈ [0, ∞)^N. Then there exists a collection of maps {Ξ^t_+ : t ∈ R}, where Ξ^t_+ : C²_+(E) → P⁺ for each t ∈ R, such that the following hold.

(A) There exists a universal constant D such that for each t ∈ R, the map Ξ^t_+ : C²_+(E) → P⁺ is of depth D.

(B) Let f ∈ C²_+(E) be given. Then there exists a function F ∈ C²_+(R) such that J⁺_t F = Ξ^t_+(f) for all t ∈ R, ‖F‖_{C²(R)} ≤ C‖f‖_{C²_+(E)}, and F(t) = f(t) for t ∈ E.

(C) There is an algorithm, that takes the given data, performs one-time work, and then responds to queries. A query consists of a point t ∈ R, and the response to the query is the depth-D map Ξ^t_+, given in its efficient representation. The one-time work takes CN log N operations and CN storage. The time to answer a query is C log N.

Theorem 4.B.
Let E ⊂ R be a finite set with ( E ) = N . We think of C ( E ) ≈ R N . Thenthere exists a collection of maps (cid:10) Ξ t ± : t ∈ R (cid:11) , where Ξ t ± : C ( E ) → P + for each t ∈ R , such thatthe following hold.(A) There exists a universal constant D such that for each t ∈ R , the map Ξ t ± : C ( E ) → P + islinear and of depth D .(B) Let f ∈ C ( E ) be given. Then there exists a function F ∈ C ( R ) such that J + t F = Ξ t ± ( f ) for all t ∈ R , k F k C ( R ) ≤ C k f k C ( E ) , and F ( t ) = f ( t ) for t ∈ E. (C) There is an algorithm, that takes the given data, performs one-time work, and then respondsto queries.A query consists of a point t ∈ R , and the response to the query is the depth- D map Ξ t ± , givenits efficient representation.The one-time work takes CN log N operations and CN storage. The time to answer a query is C log N . The explanation for Theorems 4.A and 4.B without the complexity statements were given in [12].We repeat the explanations for completeness, and further elaborate on the complexity.Using at most CN log N operations and CN storage, we can sort E = { t , · · · , t N } with t < · · · < t N . Let us begin with Theorem 4.A.Suppose ( E ) ≤ . Let Q and M be as in (2.3) and (2.4), but with P instead of P . Let f : E → [ ∞ ) . Let ~ P be a section of E × P (i.e., a Whitney field in one-dimension) that29inimizes ( Q + M ) subject to the constraint ( ~ P , t )( t ) = f ( t ) for t ∈ E (see Section 2.2). Let T W be the one-dimensional counterpart of the operator in Lemma 2.1(B). Then F := T W ( ~ P ) ∈ C ( R ) with F ( t ) = f ( t ) and F ( t ) ≥ on R , thanks to Lemma 2.1(B). By the one-dimensional counterpartof Lemma 2.2, we have k F k C ( R ) ≤ C k f k C + ( E ) . Thus, we have constructed a bounded nonnegativeextension operator E : C + ( E ) → C + ( R ) if ( E ) ≤ . We can simply take the map Ξ t ( · ) in Theorem4.A(B) to be J + t ◦ E ( · ) .We have shown in Theorem 2.A. 
of [12] that there exists a bounded nonnegative extensionoperator E : C + ( E ) → C + ( R ) of bounded-depth in the form(5.35) E ( f )( t ) = N − X i = θ i ( t ) · E i ( f )( t ) , where• E ( i ) = { t i , t i + , t i + } ,• E i ( · ) : C + ( E ( i ) ) → C + ( R ) is the bounded nonnegative extension operator constructed in theprevious step, and• θ , θ , · · · , θ N − , θ N − form a nonnegative C partition of unity subordinate to the cover (− ∞ , t ) , ( t , t ) , · · · , ( t N − , t N − ) , ( t N − , ∞ ) , such that (cid:12)(cid:12)(cid:12)(cid:12) d m dt m θ i (^ t ) (cid:12)(cid:12)(cid:12)(cid:12) ≤ (cid:14) C | t i + − t i | − m if ^ t ∈ ( t i , t i + ) C | t i + − t i + | − m if ^ t ∈ ( t i + , t i + ) , for i = · · · , N − Given t ∈ R and i ∈ { · · · , N − } , we can compute J + t θ i using at most C log N operations.Let t ∈ R be given. Note that t is supported by at most two of the θ i ’s. In C log N operations,we can find all i ′ , i ′′ ∈ { · · · , N − } (possibly i ′ = i ′′ ) such that t ∈ supp ( θ i ′ ) ∪ supp ( θ i ′′ ) . Itis a standard search algorithm and requires at most C log N operations, since E has been sorted.Finally, we simply set Ξ t ( · ) := J + t ◦ X i ∈ { i ′ ,i ′′ } θ i · E i ( · ) . It is clear from construction that Ξ t + ( · ) depends only on f | S ( t ) , where(5.36) S ( t ) := E if ( E ) ≤ three closest points in E closest to t if ( E ) > 3 and t / ∈ [ t , t N ] { t , t , t } if t ∈ [ t , t ] { t N − , t N − , t N } if t ∈ [ t N − , t N ] { t ′ , t ′ , t ′ , t ′ } ⊂ E with t ′ < t ′ ≤ t ≤ t ′ < t ′ otherwise . Theorem 4.A(A) then follows.Theorem 4.A(B) follows from the fact that the operator E in (5.35) is a bounded nonnegativeextension operator on C + ( E ) . 
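The blended operator (5.35) is easy to prototype. The sketch below is a toy stand-in, not the operator of the paper: a quintic smoothstep provides the C² transitions θ_i, and plain three-point Lagrange interpolation replaces the local nonnegative operators E_i (so nonnegativity of the blend is not enforced); all names are ours.

```python
def smoothstep(u):
    """C^2 ramp: 0 for u <= 0, 1 for u >= 1, 6u^5 - 15u^4 + 10u^3 between."""
    if u <= 0.0:
        return 0.0
    if u >= 1.0:
        return 1.0
    return u * u * u * (10.0 + u * (-15.0 + 6.0 * u))

def local_quadratic(ts, fs, t):
    """Lagrange interpolation through three points (stand-in for E_i)."""
    total = 0.0
    for i in range(3):
        w = fs[i]
        for j in range(3):
            if j != i:
                w *= (t - ts[j]) / (ts[i] - ts[j])
        total += w
    return total

def extend(ts, fs, t):
    """E(f)(t) = sum_i theta_i(t) * E_i(f)(t) as in (5.35), with the
    theta_i built from telescoped smoothstep ramps (so they sum to 1)."""
    n = len(ts)
    assert n >= 3 and all(ts[i] < ts[i + 1] for i in range(n - 1))
    K = n - 2                      # windows E(i) = {t_i, t_{i+1}, t_{i+2}}
    # ramp r_j rises from 0 to 1 across [ts[j+1], ts[j+2]], j = 0..K-2
    r = [smoothstep((t - ts[j + 1]) / (ts[j + 2] - ts[j + 1]))
         for j in range(K - 1)]
    val = 0.0
    for j in range(K):
        lo = r[j - 1] if j > 0 else 1.0
        hi = r[j] if j < K - 1 else 0.0
        theta = lo - hi            # telescoping: the thetas sum to 1
        if theta != 0.0:
            val += theta * local_quadratic(ts[j:j + 3], fs[j:j + 3], t)
    return val
```

At each node exactly one θ_j equals 1 and its window contains that node, so the blend interpolates the data; the telescoping also makes the blend reproduce constants exactly.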
Theorem 4.A(C) follows from the discussions above on complexity. We have finished explaining Theorem 4.A.

The explanation for Theorem 4.B is almost identical with some simplification, which we explain below. When constructing a bounded extension operator for C²(E) with #(E) ≤ 3, we use
• the natural quadratic form associated with W²(E) instead of (Q + M); and
• the classical Whitney extension operator instead of T_w in Lemma 2.1(B).
See [4, 9, 10] for details and further discussion on linear extension operators without the nonnegative constraint. This concludes the explanation for Theorem 4.B.

The main lemma of the section is the following.
Lemma 5.13.
Let Q ∈ Λ ♯♯ with Λ ♯♯ as in (5.6) . There exists a collection of maps { Ξ x,Q : x ∈ ( + c G ) Q } where Ξ x,Q : C + ( E ) × [ ∞ ) → P + for each x ∈ ( + c G ) Q , such that the following hold.(A) There exists a universal constant D such that for each x ∈ ( + c G ) Q , the map Ξ x,Q ( · , · ) : C + ( E ) → P + is of depth D .(B) Suppose we are given ( f, M ) ∈ C + ( E ) × [ ∞ ) with k f k C + ( E ) ≤ M . Then there exists a function F Q ∈ C + (( + c G ) Q ) such that(B1) J + x F Q = Ξ x,Q ( f, M ) for all x ∈ ( + c G ) Q ;(B2) k F Q k C (( + c G ) Q ) ≤ CM ;(B3) F Q ( x ) = f ( x ) for x ∈ E ∩ ( + c G ) Q ; and(B4) J x ♯ Q F Q ∈ Γ ♯ + ( x ♯ Q , 16, CM, f ) , with x ♯ Q as in Lemma 5.6 and Γ ♯ + as in (2.22) .(C) There is an algorithm, that takes ( E, f, M, Q ) as input, performs one-time work, and thenresponds to queries.A query consists of a point x ∈ ( + c G ) Q , and the response to the query is the depth- D map Ξ x,Q , given its efficient representation.The one-time work takes CN log N operations and CN storage. The time to answer a query is C log N .Proof. 
Repeating the argument of Lemma 3.8 of [11], we can show that there exists a map E Q : C + ( E ) × [ ∞ ) → C + (( + c G ) Q ) such that the following hold.(5.37) Given ( f, M ) ∈ C + ( E ) × [ ∞ ) with k f k C + ( E ) ≤ M , we have(a) E Q ( f, M ) ≥ on ( + c G ) Q ; 31b) E Q ( f, M )( x ) = f ( x ) for x ∈ E ∩ ( + c G ) Q ;(c) kE Q ( f, M ) k C (( + c G ) Q ) ≤ CM ; and(d) J x ♯ Q E Q ( f, M ) = T Q ( f, M ) , with x ♯ Q as in Lemma 5.6, T Q as in Definition 5.3.(5.38) For each x ∈ ( + c G ) Q , there exists a set S Q ( x ) ⊂ E with ( S Q ( x )) ≤ D for some universalconstant D , such that the following holds: Given ( f, M ) , ( g, M ) ∈ C + ( E ) × [ ∞ ) with k f k C + ( E ) , k g k C + ( E ) ≤ M and f | S Q ( x ) = g | S Q ( x ) , we have J + x E Q ( f, M ) = J + x E Q ( g, M ) .To prove Lemma 5.13, we need to dissect the operator E Q and analyze its complexity.As in Lemma 3.8 of [11], the operator E Q takes the following form: E Q ( f, M ) := T Q ( f, M ) + ( − ψ ) · e E Q ( f, M ) , where e E Q ( f, M ) := (cid:18) vertical extension z }| { V ◦ (cid:20) (cid:16) ∆ Qf,M E ± + ( − ∆ Qf,M ) E (cid:17) (cid:16) ( f − T Q ( f, M ) (cid:12)(cid:12) E ) ◦ Φ − (cid:12)(cid:12) R × { } (cid:17) | {z } straightening local data (cid:21) | {z } one-dimensional extension (cid:19) ◦ Φ . (5.39)Here, in the order of appearance in (5.39),• T Q is as in Definition 5.3;• ψ ∈ C + ( R ) with ψ ≡ near x ♯ Q (see Lemma 5.6), supp ( ψ ) ⊂ B ( x ♯ Q , c δ Q ) with c as inLemma 5.6, and | ∂ α ψ | ≤ Cδ − | α | Q ;• V is the vertical extension map V ( g )( t , t ) := g ( t ) , for g defined on a subset of R ;• ∆ Qf,M is an indicator function defined by ∆ Qf,M := (cid:14) if T Q ( f, M ) is not the zero polynomial otherwise ;• E and E ± , respectively, are the one-dimensional extension operators associated with Theorem4.A and Theorem 4.B (see also Theorems 2.A and 2.B of [12]);• Φ is the diffeomorphisms in Lemma 5.1(B).We bring ourselves back to the setting of Lemma 5.13. 
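The composition (5.39) is mostly bookkeeping: straighten the data onto the axis, extend in one variable, extend vertically, and precompose with Φ. The toy sketch below stubs out the hard parts: T_Q ≡ 0 and ψ ≡ 0, the rotation ρ is the identity, and piecewise-linear interpolation replaces the one-dimensional operators; every name is an assumption.

```python
import bisect

def make_EQ(pts, vals, phi):
    """Toy version of (5.39): the data sits on the graph y = phi(x), and
    Phi(x, y) = (x, y - phi(x)) flattens it onto the axis {t2 = 0}."""
    order = sorted(range(len(pts)), key=lambda i: pts[i][0])
    ts = [pts[i][0] for i in order]          # straightened 1D data sites
    gs = [vals[i] for i in order]
    def one_d(t):                            # stand-in for E / E_(+/-)
        if t <= ts[0]:
            return gs[0]
        if t >= ts[-1]:
            return gs[-1]
        i = bisect.bisect_right(ts, t) - 1
        if ts[i] == t:
            return gs[i]
        w = (t - ts[i]) / (ts[i + 1] - ts[i])
        return (1.0 - w) * gs[i] + w * gs[i + 1]
    def EQ(x, y):
        t1, t2 = x, y - phi(x)               # apply Phi
        return one_d(t1)                     # vertical extension V ignores t2
    return EQ

phi = lambda x: 0.1 * x * x                  # the flattening curve
pts = [(0.0, phi(0.0)), (1.0, phi(1.0)), (2.0, phi(2.0))]
EQ = make_EQ(pts, [3.0, 5.0, 4.0], phi)
```

Because the vertical extension is constant in t2, the result agrees with the data on the curve and is insensitive to the second coordinate, which is exactly the role of V in (5.39).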
Recall the definition of J^+_x as in (2.1). We want to define the maps {Ξ_{x,Q} : x ∈ (1 + c_G)Q} by

(5.40) Ξ_{x,Q} := J^+_x ∘ E_Q for x ∈ (1 + c_G)Q.

Lemma 5.13(A) follows from (5.38). Lemma 5.13(B) follows from (5.37).

It remains to examine Lemma 5.13(C). Suppose we have performed the necessary one-time work using at most CN log N operations and CN storage. Let x ∈ (1 + c_G)Q be given.

Step 1. We compute t_x := Proj_{u^⊥_Q}(x − Rep(Q)). Here,
• Proj_{u^⊥_Q} denotes the orthogonal projection onto ℝu^⊥_Q;
• the pair ⟨u_Q, u^⊥_Q⟩ is as in Lemma 5.5; and
• Rep(Q) is as in Lemma 5.2(C).

All the procedures involved in this step require at most C log N operations, thanks to Lemma 5.2(B) and Lemma 5.5.

Step 2. Let ρ be the rotation about the origin specified by e₁ ↦ u_Q. We can compute J^+_x ρ.

Step 3. Let u^⊥_Q and Proj_{u^⊥_Q} be as in Step 1. We set

E_Q := Proj_{u^⊥_Q}(E ∩ (1 + c_G)Q − Rep(Q)) ⊂ ℝ.

Recall from Lemma 5.8 that we can compute the sorted list E_Q for each Q ∈ Λ^♯♯ using at most CN log N operations and CN storage. Let C^2_+(E_Q) and C^2(E_Q) be the one-dimensional trace spaces. Note that we have sorted E_Q. Let Ξ^{t_x}_+ and Ξ^{t_x}_±, respectively, be the maps associated with C^2_+(E_Q) and C^2(E_Q), as in Theorems 4.A and 4.B.

Step 4. Recall from Lemma 5.1(B) that the diffeomorphism Φ is defined in terms of a function ϕ satisfying (5.4) and (5.5). We compute J^+_{t_x}ϕ, where J^+_{t_x} denotes the single-variable two-jet at t_x. We can accomplish this by simply setting J^+_{t_x}ϕ := Ξ^{t_x}_±(ϕ|_{E_Q}), with Ξ^{t_x}_± as in Theorem 4.B. Since we have already sorted the set E_Q in Step 3, computing Ξ^{t_x}_±(ϕ|_{E_Q}) requires at most C log N operations.

Step 5. Similar to Step 4, the query time for J^+_{t_x} ∘ E(·)∗ and J^+_{t_x} ∘ E_±(·) is C log N, since the set E_Q has been sorted in Step 3.

Step 6. By Lemma 5.1(B), the diffeomorphism Φ = (Φ₁, Φ₂) and its inverse Φ^{−1} = (Ψ₁, Ψ₂) are given by

Φ ∘ ρ(t₁, t₂) = (t₁, t₂ − ϕ(t₁)) and ρ^{−1} ∘ Φ^{−1}(t′₁, t′₂) = (t′₁, t′₂ + ϕ(t′₁)).

Therefore, we can compute J^+_x Φ_i and J^+_x Ψ_i, i = 1, 2, from the (single-variable) two-jet of ϕ.

Step 7. We compute T_Q(f, M), as in Definition 5.3. Computing S^♯(Q) as in (5.26) requires at most C log N operations, by Lemma 5.2(C) and Lemma 5.6. After that, we can compute T_Q(f, M) in C operations. See Remark 5.4.

(∗ Note that E(·) is only defined for f̃ : E_Q → [0, ∞).)

Combining Steps 1–7, we can compute the map Ξ_{x,Q} in (5.40) via formula (5.39) using at most C log N operations. After that, given (f, M) ∈ C^2_+(E) × [0, ∞), we can compute Ξ_{x,Q}(f, M) in C operations. This proves Lemma 5.13.

Recall the definition of J^+_x as in (2.1). We can construct a partition of unity {θ_Q : Q ∈ Λ} that satisfies the following properties:

• θ_Q ≥ 0;
• Σ_{Q∈Λ} θ_Q ≡ 1;
• supp(θ_Q) ⊂ (1 + c_G/2)Q for each Q ∈ Λ;
• for each Q ∈ Λ, |∂^α θ_Q| ≤ C δ_Q^{−|α|} for |α| ≤ 2;
• after one-time work using at most CN log N operations and CN storage, we can answer queries as follows: given x ∈ ℝ² and Q ∈ Λ, we return J^+_x θ_Q. The time to answer a query is C log N.

See Section 28 of [10] for details.

Proof of Theorem 2.
Slightly modifying the proof of Theorem 1 of [11], we can show that there exists a map

(5.41) E : C^2_+(E) × [0, ∞) → C^2_+(ℝ²)

such that the following hold.

(5.42) Given (f, M) ∈ C^2_+(E) × [0, ∞) with ‖f‖_{C^2_+(E)} ≤ M, we have

(a) E(f, M) ≥ 0 on ℝ²;
(b) E(f, M)(x) = f(x) for x ∈ E; and
(c) ‖E(f, M)‖_{C^2(ℝ²)} ≤ CM.

(5.43) For each x ∈ ℝ², there exists a set S(x) ⊂ E with #(S(x)) ≤ D for some universal constant D, such that the following holds: given (f, M), (g, M) ∈ C^2_+(E) × [0, ∞) with ‖f‖_{C^2_+(E)}, ‖g‖_{C^2_+(E)} ≤ M and f|_{S(x)} = g|_{S(x)}, we have J^+_x E(f, M) = J^+_x E(g, M).

Moreover, E takes the form

(5.44) E(f, M)(x) := Σ_{Q∈Λ} θ_Q(x) · E^♯_Q(f, M)(x) = Σ_{Q∈Λ(x)} θ_Q(x) · E^♯_Q(f, M)(x),

where

• {θ_Q : Q ∈ Λ} is the partition of unity constructed in Section 5.6;
• Λ(x) is the set in Lemma 5.2(A); and
• E^♯_Q is defined by the following rule.
  – Suppose Q ∈ Λ^♯♯. Then E^♯_Q(f, M) := E_Q(f, M), with E_Q as in Lemma 5.13.
  – Suppose Q ∈ Λ^♯ \ Λ^♯♯. Then E^♯_Q := T^{x^♯_Q}_w ∘ T_Q, with T_Q as in Definition 5.3, x^♯_Q as in Lemma 5.6, and T^{x^♯_Q}_w as in Lemma 2.1(B) (associated with the singleton {x^♯_Q}).
  – Suppose Q ∈ Λ_empty. Then E^♯_Q := T^{x^♯_{μ(Q)}}_w ∘ T_{μ(Q)}, with μ as in Lemma 5.4, x^♯_{μ(Q)} as in Lemma 5.6, T_{μ(Q)} as in Definition 5.3, and T^{x^♯_{μ(Q)}}_w as in Lemma 2.1(B) (associated with the singleton {x^♯_{μ(Q)}}).
  – Suppose Q ∈ Λ \ (Λ^♯ ∪ Λ_empty). Then E^♯_Q :≡ 0.

We set

(5.45) Ξ_x(f, M) := J^+_x ∘ E(f, M) = Σ_{Q∈Λ(x)} J^+_x θ_Q ⊙^+_x (J^+_x ∘ E^♯_Q(f, M)) for x ∈ ℝ².

Theorem 2(A) follows from (5.42), and Theorem 2(B) follows from (5.43). We now turn to Theorem 2(C).
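Before the complexity analysis, the patching formula (5.44) can be illustrated by a one-dimensional toy example in Python. The cutoffs theta1, theta2 and the local pieces F1, F2 below are hypothetical stand-ins for θ_Q and E^♯_Q; the point is only that summing local extensions against a partition of unity preserves the interpolation conditions wherever a single cutoff equals 1, and in the overlaps stays between the local pieces (hence, in particular, nonnegative whenever they are).

```python
# Toy 1-D partition of unity subordinate to (-inf, 1) and (0, +inf):
# theta1 + theta2 = 1, with theta1 = 1 for t <= 0 and theta2 = 1 for t >= 1.
def smoothstep(t):
    # C^1 ramp from 0 at t = 0 to 1 at t = 1 (a stand-in for the C^2 cutoffs).
    t = max(0.0, min(1.0, t))
    return t * t * (3.0 - 2.0 * t)

theta2 = smoothstep
theta1 = lambda t: 1.0 - smoothstep(t)

# Two hypothetical local "extensions", each correct on its own patch.
F1 = lambda t: 2.0         # matches the data for t <= 0
F2 = lambda t: 2.0 + t     # matches the data for t >= 1

# Glued extension, as in (5.44): F = sum_Q theta_Q * F_Q.
F = lambda t: theta1(t) * F1(t) + theta2(t) * F2(t)

# Where a single cutoff equals 1, F coincides with that local piece;
# in the overlap, F is a convex combination, hence stays between them.
assert F(-3.0) == F1(-3.0)
assert F(2.0) == F2(2.0)
assert min(F1(0.5), F2(0.5)) <= F(0.5) <= max(F1(0.5), F2(0.5))
```

The convex-combination property is what keeps the glued function nonnegative when every local piece is, which is the role the partition of unity plays in (5.44).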
Suppose we have performed the necessary one-time work using at most CN log N operations and CN storage.

By Lemma 5.2(A) and Section 5.6, we can compute Λ(x) and {J^+_x θ_Q : Q ∈ Λ(x)} using at most C log N operations.

By Lemma 5.13, we can compute ⟨J^+_x ∘ E_Q(f, M) : Q ∈ Λ^♯♯ ∩ Λ(x)⟩ using at most C log N operations, after computing Λ(x).

By Lemma 5.6 and Remark 5.12, we can compute ⟨J^+_x ∘ T^{x^♯_Q}_w ∘ T_Q(f, M) : Q ∈ Λ(x) ∩ (Λ^♯ \ Λ^♯♯)⟩ using at most C log N operations, after computing Λ(x).

By Lemma 5.4, Lemma 5.6, and Remark 5.4, we can compute ⟨J^+_x ∘ T^{x^♯_{μ(Q)}}_w ∘ T_{μ(Q)}(f, M) : Q ∈ Λ_empty ∩ Λ(x)⟩ using at most C log N operations, after computing Λ(x).

Therefore, we can compute Ξ_x in (5.45) using at most C log N operations. Given (f, M) ∈ C^2_+(E) × [0, ∞), we can compute Ξ_x(f, M) in C operations. Theorem 2(C) follows.

This proves Theorem 2.

A Convex quadratic programming problem with affine constraint
Let d ≥ 1 be an integer bounded by a universal constant. We use the standard dot product on ℝ^d and ℝ^{2d}. We use bold-faced letters to denote given quantities.

We consider a general form of the minimization problem (2.20):

(A.1) Minimize β^t A β + Σ_{i=1}^d |β_i| subject to Bβ = b.

Here, β = (β₁, ⋯, β_d)^t ∈ ℝ^d is the optimization variable, A ∈ M_{d×d} is a given positive semidefinite matrix, B is a given matrix of full rank, and b is a given vector.

We will solve (A.1) by first augmenting the system (A.1) to remove the absolute values in the objective function. For the augmented system, which is still convex, the solution can be found by solving a system of linear equalities and inequalities arising from its associated Karush–Kuhn–Tucker (KKT) conditions [2].

We begin with the augmentation. Decomposing β into its positive and negative parts, β = β₊ − β₋, i.e., β_{i,+} := ½(β_i + |β_i|) and β_{i,−} := β_{i,+} − β_i, we arrive at the system:

(A.2) Minimize (β₊; β₋)^t [ A −A ; −A A ] (β₊; β₋) + 𝟙^t (β₊; β₋), subject to (B −B)(β₊; β₋) = b and (β₊; β₋) ≥ 0.

Here 𝟙 := (1, ⋯, 1)^t ∈ ℝ^{2d}, since Σ_{i=1}^d |β_i| = Σ_{i=1}^d (β_{i,+} + β_{i,−}) under the sign constraint below. Note that in order for (A.1) and (A.2) to be equivalent, we have to include in (A.2) the additional sign constraint

(A.3) β_{i,+} β_{i,−} = 0 for i = 1, ⋯, d;

or equivalently, for some I ⊂ {1, ⋯, d},

(A.4) e_k^t β₊ = 0 for k ∈ I and e_k^t β₋ = 0 for k ∈ {1, ⋯, d} \ I.

Here, {e_k : k = 1, ⋯, d} is the standard basis for ℝ^d. For convenience, set

β̂ := (β₊; β₋), Â := [ A −A ; −A A ], and B̂ := (B −B), whose rows we denote by B̂₁, ⋯, B̂_{j_max}.
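The enumeration strategy described here (fix a sign pattern as in (A.4), solve the resulting KKT system, and compare minimizers over all patterns) can be sketched in Python for tiny d. The sketch below is a hypothetical illustration, not the paper's procedure verbatim: it enumerates, for each coordinate, whether it is forced to zero, positive, or negative (a slight repackaging of the subsets I in (A.4)), solves the equality-constrained KKT system by Gaussian elimination, discards sign-infeasible candidates, and returns the best one.

```python
from itertools import product

def solve_linear(M, v):
    """Solve M x = v by Gauss-Jordan elimination; None if (near-)singular."""
    n = len(M)
    M = [row[:] + [v[i]] for i, row in enumerate(M)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        if abs(M[p][c]) < 1e-12:
            return None
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * b for a, b in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def min_qp_l1(A, B, b):
    """Minimize beta^t A beta + sum_i |beta_i| subject to B beta = b,
    by enumerating a zero/positive/negative pattern per coordinate."""
    d, m = len(A), len(B)
    best = None
    for pat in product((0, 1, -1), repeat=d):
        R = [i for i in range(d) if pat[i] != 0]  # free coordinates
        k = len(R)
        # KKT system for min x^t A_RR x + s^t x subject to B_R x = b:
        #   2 A_RR x + s + B_R^t lam = 0,   B_R x = b.
        M = [[2 * A[R[i]][R[j]] for j in range(k)]
             + [B[r][R[i]] for r in range(m)] for i in range(k)]
        M += [[B[r][R[j]] for j in range(k)] + [0.0] * m for r in range(m)]
        rhs = [-float(pat[i]) for i in R] + list(b)
        sol = solve_linear(M, rhs)
        if sol is None:
            continue
        beta = [0.0] * d
        for i, r in enumerate(R):
            beta[r] = sol[i]
        # Reject candidates violating the assumed sign pattern.
        if any(pat[r] * beta[r] < -1e-9 for r in R):
            continue
        val = (sum(beta[i] * A[i][j] * beta[j]
                   for i in range(d) for j in range(d))
               + sum(abs(x) for x in beta))
        if best is None or val < best[0]:
            best = (val, beta)
    return best
```

For A the 2×2 identity, B = (1 1), b = 1, the minimizer of β^t A β + |β₁| + |β₂| on the line β₁ + β₂ = 1 is β = (1/2, 1/2) with value 3/2, which the sketch recovers. Like the procedure in the text, the cost is exponential in d, which is harmless here since d is universally bounded.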
Let {ê_i : i = 1, ⋯, 2d} be the standard basis for ℝ^{2d}.

The KKT conditions for (A.2) coupled with (A.4) for a fixed I ⊂ {1, ⋯, d} are given by

(A.5)
2Âβ̂ + 𝟙 − Σ_{i=1}^{2d} μ_i ê_i + Σ_{j=1}^{j_max} λ_j B̂_j^t + Σ_{k∈I} ν_k ê_k + Σ_{k∈{1,⋯,d}\I} ν_k ê_{k+d} = 0,
β̂ ≥ 0,
B̂β̂ − b = 0 ∈ ℝ^{j_max},
ê_k^t β̂ = 0 for k ∈ I,
ê_{k+d}^t β̂ = 0 for k ∈ {1, ⋯, d} \ I,
μ_i ≥ 0 for i = 1, ⋯, 2d,
Σ_{i=1}^{2d} μ_i (ê_i^t β̂) = 0.

In the above, μ₁, ⋯, μ_{2d}, λ₁, ⋯, λ_{j_max}, ν₁, ⋯, ν_d are multipliers, and β̂ is the primal optimization variable.

Since the matrix Â is positive semidefinite, the primal problem in (A.2) is convex. The KKT conditions are necessary and sufficient for the solutions to be primal and dual optimal [2]. Hence, solving (A.2) coupled with (A.4) for a fixed I ⊂ {1, ⋯, d} amounts to solving a bounded system (A.5) of linear equalities and inequalities. The latter can be achieved, for instance, by the simplex method or by elimination [2]. The number of operations involved is at most (doubly) exponential in the system size, which is universally bounded. Therefore, we can solve (A.2) coupled with (A.4) for a fixed I ⊂ {1, ⋯, d} using at most C operations.

Finally, we can solve (A.1) using at most C operations by solving (A.2) coupled with (A.4) for every I ⊂ {1, ⋯, d} and comparing the minimizers.

It is very likely that one can solve (A.1) more efficiently with advanced techniques. Here we content ourselves with the elementary exposition above. We refer the readers to [2] for a more detailed discussion of convex optimization.

References

[1] Edward Bierstone and Pierre D. Milman, C^m-norms on finite sets and C^m-extension criteria, Duke Mathematical Journal (2007), no. 1, 1–18.

[2] Stephen Boyd and Lieven Vandenberghe, Convex optimization, Cambridge University Press, 2004.

[3] Paul B. Callahan and S. Rao Kosaraju,
A decomposition of multidimensional point sets with applications to k-nearest-neighbors and n-body potential fields, J. ACM (1995), 67–90.

[4] Charles Fefferman, Interpolation and extrapolation of smooth functions by linear operators, Rev. Mat. Iberoam. (2005), no. 1, 313–348.

[5] Charles Fefferman, The structure of linear extension operators for C^m, Rev. Mat. Iberoam. (2007), no. 1, 269–280.

[6] Charles Fefferman, Fitting a C^m-smooth function to data III, Ann. of Math. (2) (2009), no. 1, 427–441.

[7] Charles Fefferman and Arie Israel, Fitting smooth functions to data, CBMS Regional Conference Series in Mathematics, American Mathematical Society, 2020.

[8] Charles Fefferman, Arie Israel, and Garving K. Luli,
Interpolation of data by smooth nonnegative functions, Rev. Mat. Iberoam. (2016), no. 1, 305–324.

[9] Charles Fefferman and Bo'az Klartag, Fitting a C^m-smooth function to data. I, Ann. of Math. (2) (2009), no. 1, 315–346.

[10] Charles Fefferman and Bo'az Klartag, Fitting a C^m-smooth function to data. II, Rev. Mat. Iberoam. (2009), no. 1, 49–273.

[11] Fushuai Jiang and Garving K. Luli, C^2(ℝ²) nonnegative interpolation by bounded-depth operators, Advances in Math.

[12] Fushuai Jiang and Garving K. Luli, Nonnegative C^2(ℝ²) interpolation, Advances in Math. (2020), 107364.

[13] Garving K. Luli, C^{m,ω} extension by bounded-depth linear operators, Advances in Math. 224.