Polyline Simplification has Cubic Complexity
Karl Bringmann*    Bhaskar Ray Chaudhury*†

Abstract
In the classic polyline simplification problem we want to replace a given polygonal curve P, consisting of n vertices, by a subsequence P' of k vertices from P such that the polygonal curves P and P' are as close as possible. Closeness is usually measured using the Hausdorff or Fréchet distance. These distance measures can be applied globally, i.e., to the whole curves P and P', or locally, i.e., to each simplified subcurve and the line segment that it was replaced with separately (and then taking the maximum). This gives rise to four problem variants: Global-Hausdorff (known to be NP-hard), Local-Hausdorff (in time O(n^3)), Global-Fréchet (in time O(k n^5)), and Local-Fréchet (in time O(n^3)).

Our contribution is as follows.

• Cubic time for all variants:
For Global-Fréchet we design an algorithm running in time O(n^3). This shows that all three problems (Local-Hausdorff, Local-Fréchet, and Global-Fréchet) can be solved in cubic time. All these algorithms work over a general metric space such as (R^d, L_p), but the hidden constant depends on p and (linearly) on d.

• Cubic conditional lower bound:
We provide evidence that in high dimensions cubic time is essentially optimal for all three problems (Local-Hausdorff, Local-Fréchet, and Global-Fréchet). Specifically, improving the cubic time to O(n^{3−ε} poly(d)) for polyline simplification over (R^d, L_p) for p = 1 would violate plausible conjectures. We obtain similar results for all p ∈ [1,∞), p ≠ 2.

In total, in high dimensions and over general L_p-norms we resolve the complexity of polyline simplification with respect to Local-Hausdorff, Local-Fréchet, and Global-Fréchet, by providing new algorithms and conditional lower bounds.

* Max Planck Institute for Informatics, Saarland Informatics Campus
† Saarbruecken Graduate School of Computer Science

1 Introduction
We revisit the classic problem of polygonal line simplification, which is fundamental to computational geometry, computer graphics, and geographic information systems. The most frequently implemented and cited algorithms for curve simplification go back to the 70s (Douglas and Peucker [12]) and 80s (Imai and Iri [19]). These algorithms use the following standard formalization of curve simplification. A polygonal curve or polyline is given by a sequence P = ⟨v_0, v_1, ..., v_n⟩ of points v_i ∈ R^d, and represents the continuous curve walking along the line segments v_i v_{i+1} in order. Given a polyline P = ⟨v_0, v_1, ..., v_n⟩ and a number δ >
0, we want to compute a subsequence P' = ⟨v_{i_0}, ..., v_{i_{k−1}}⟩, with 0 = i_0 < ... < i_{k−1} = n, of minimal length k such that P and P' have "distance" at most δ.

Several distance measures have been used for the curve simplification problem. The most generic distance measure on point sets A, B is the
Hausdorff distance. The (directed) Hausdorff distance from A to B is the maximum over all a ∈ A of the distance from a to its closest point in B. This is used on curves P, Q by applying it to the images of the curves in the ambient space, i.e., to the union of all line segments v_i v_{i+1}.

However, the most popular distance measure for curves in computational geometry is the Fréchet distance δ_F. This is the minimal length of a leash connecting a dog to its owner as they continuously walk along the two polylines without backtracking. In comparison to the Hausdorff distance, it takes the ordering of the vertices along the curves into account, and thus better captures an intuitive notion of distance among curves.

For both of these distance measures δ_* ∈ {δ_H, δ_F}, we can apply them locally or globally in order to measure the distance between the original curve P and its simplification P'. In the global variant, we simply consider the distance δ_*(P, P'), i.e., we use the Hausdorff or Fréchet distance of P and P'. In the local variant, we consider the distance max_{0 ≤ ℓ ≤ k−2} δ_*(P[i_ℓ .. i_{ℓ+1}], v_{i_ℓ} v_{i_{ℓ+1}}) (cf. Definition 2.4).

k-OV Hypothesis: Problem: Given sets A_1, ..., A_k ⊆ {0,1}^d of size n, determine whether there exist vectors a_1 ∈ A_1, ..., a_k ∈ A_k that are orthogonal, i.e., for each dimension j ∈ [d] there is an index i ∈ [k] with a_i[j] = 0. Hypothesis: For any k ≥ 2 and ε > 0, k-OV cannot be solved in time Ô(n^{k−ε}), where Ô(·) hides factors polynomial in d.

Naively, k-OV can be solved in time Ô(n^k), and the hypothesis asserts that no polynomial improvement is possible, at least not with polynomial dependence on d. See [2] for the fastest known algorithms for k-OV.

Buchin et al. [7] used the 2-OV hypothesis to rule out Ô(n^{2−ε})-time algorithms for Local-Hausdorff in the L_1, L_2, and L_∞ norms. This yields a tight bound for L_∞, since an Ô(n^2)-time algorithm is known [6]. However, for all other L_p-norms (p ∈ [1,∞)), the question remained open whether Ô(n^{3−ε})-time algorithms exist.
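For concreteness, the naive Ô(n^k)-time algorithm for k-OV mentioned above is plain exhaustive search. The following is an illustrative sketch (not from the paper); vectors are 0/1 lists of a common dimension d:

```python
from itertools import product

def k_ov_naive(sets):
    """Brute-force k-OV.  `sets` is a list of k lists of 0/1 vectors of a
    common dimension d.  Returns True iff some choice a_1 in A_1, ...,
    a_k in A_k is orthogonal, i.e., every dimension j has a_i[j] = 0 for
    at least one chosen vector.  Time O(n^k * k * d)."""
    d = len(sets[0][0])
    for choice in product(*sets):  # all n^k combinations
        if all(any(a[j] == 0 for a in choice) for j in range(d)):
            return True
    return False
```

For instance, with A_1 = {(1,1,0)} and A_2 = {(0,1,1)} the only pair is not orthogonal (both vectors have a 1 in dimension 1), so the check fails.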
To answer this question, one could try to generalize the conditional lower bound by Buchin et al. [7] to start from 3-OV. However, curve simplification problems seem to have the wrong "quantifier structure" for such a reduction; see Section 1.3 below for more intuition. For similar reasons, Abboud et al. [3] introduced the Hitting Set hypothesis, in which they essentially consider a variant of 2-OV where we have a universal quantifier over the first set of vectors and an existential quantifier over the second one (∀∃-OV). From their hypothesis, however, it is not known how to prove higher lower bounds than quadratic. We therefore consider the following natural extension of their hypothesis. This problem was studied in a more general context by Gao et al. [15].

∀∀∃-OV Hypothesis: Problem: Given sets A, B, C ⊆ {0,1}^d of size n, determine whether for all a ∈ A, b ∈ B there exists c ∈ C such that a, b, c are orthogonal. Hypothesis: For any ε > 0, ∀∀∃-OV cannot be solved in time Ô(n^{3−ε}).

(Footnote: The lower bound of Buchin et al. [7] is stated for Local-Hausdorff; their proof can be adapted to also work for Local-Fréchet and Global-Fréchet.)

No algorithm violating this hypothesis is known, and even for much stronger hypotheses on variants of k-OV and Satisfiability no such algorithms are known; see Section 5 for details. This shows that the hypothesis is plausible, in addition to being a natural generalization of the hypothesis of Abboud et al. [3]. We establish a ∀∀∃-OV-based lower bound for curve simplification.

Theorem 1.2 (Section 4). Over (R^d, L_p) for any p ∈ [1,∞) with p ≠ 2, Local-Hausdorff, Local-Fréchet, and Global-Fréchet simplification have no Ô(n^{3−ε})-time algorithm for any ε > 0, unless the ∀∀∃-OV hypothesis fails.

In particular, this rules out improving the 2^{O(d)} n^2-time algorithm for Local-Hausdorff over L_∞ [6] to a polynomial dependence on d. Note that the theorem statement excludes two interesting values for p, namely ∞ and 2.
For p = ∞, an Ô(n^2)-time algorithm is known for Local-Hausdorff [6], so proving the above theorem also for p = ∞ would immediately yield an algorithm breaking the ∀∀∃-OV hypothesis. For p = 2, we do not have such a strong reason why it is excluded; however, we argue in Section 1.3 that at least a significantly different proof would be necessary in this case. This leaves open the possibility of a faster curve simplification algorithm for L_2, but such a result would need to exploit the Euclidean norm very heavily.

Algorithm. We first sketch the algorithm by Imai and Iri [19] for Local-Hausdorff. Given a polyline P = ⟨v_0, ..., v_n⟩ and a distance threshold δ, for all i < i' we compute the Hausdorff distance δ_{i,i'} from the subpolyline P[i .. i'] to the line segment v_i v_{i'}. This takes total time O(n^3), since the Hausdorff distance between a polyline and a line segment can be computed in linear time. We build a directed graph on the vertices {0, 1, ..., n}, with a directed edge from i to i' if and only if δ_{i,i'} ≤ δ. We then determine the shortest path from 0 to n in this graph. This yields the simplification P' of smallest size with Local-Hausdorff distance at most δ. The running time is dominated by the first step, and is thus O(n^3). Replacing the Hausdorff by the Fréchet distance yields an O(n^3)-time algorithm for Local-Fréchet.

Note that these algorithms are simple dynamic programming solutions. For Global-Fréchet, our cubic time algorithm also uses dynamic programming, but is significantly more complicated. In our algorithm, we compute the same dynamic programming table as the previously best algorithm [22]. This is a table of size O(k* · n^2), where k* is the output size. Table entry DP(k,i,j) stores the earliest reachable point on the line segment v_j v_{j+1} with a size-k simplification of P[0 .. i].
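To make the classical baseline concrete, the Imai–Iri scheme just sketched might look as follows in the Euclidean plane (an illustrative sketch, not the paper's implementation). It uses the fact that the Hausdorff distance from a subpolyline to the shortcut segment is attained at a vertex, since point-to-segment distance is convex along each edge, and finds the minimum-size simplification via breadth-first search:

```python
import math
from collections import deque

def dist_point_segment(p, a, b):
    """Euclidean distance from point p to segment ab (2D)."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    L2 = dx * dx + dy * dy
    t = 0.0 if L2 == 0 else max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / L2))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def imai_iri(P, delta):
    """Minimum-vertex Local-Hausdorff simplification of polyline P (list of
    2D points).  Returns the indices of the kept vertices.  The Hausdorff
    distance from P[i..i'] to segment v_i v_i' is the maximum vertex-to-
    segment distance, so each shortcut is checked at the vertices only."""
    n = len(P) - 1
    # Shortcut i -> j is allowed iff all intermediate vertices are close.
    ok = [[all(dist_point_segment(P[m], P[i], P[j]) <= delta
               for m in range(i + 1, j))
           for j in range(n + 1)]
          for i in range(n + 1)]
    # BFS shortest path from 0 to n in the shortcut graph.
    pred = {0: None}
    queue = deque([0])
    while queue:
        u = queue.popleft()
        if u == n:
            break
        for v in range(u + 1, n + 1):
            if v not in pred and ok[u][v]:
                pred[v] = u
                queue.append(v)
    path, u = [], n
    while u is not None:
        path.append(u)
        u = pred[u]
    return path[::-1]
```

Building the table `ok` dominates and already costs cubic time, matching the O(n^3) bound stated above; the BFS adds only O(n^2).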
More precisely, DP(k,i,j) is the minimal t, with j ≤ t ≤ j+1, such that there is a size-k simplification P' of P[0 .. i] with δ_F(P', P[0 .. t]) ≤ δ. If such a point does not exist, we set DP(k,i,j) = ∞.

A simple algorithm computes a table entry in time O(n^3): We iterate over all possible second-to-last points v_{i'} of the simplification P', and over all possible previous line segments v_{j'} v_{j'+1}, and check whether from i' on P' and DP(k−1, i', j') on P we can "walk" to i on P' and some j ≤ t ≤ j+1 on P, always staying within the required distance. Moreover, we compute the earliest such t. This check can be done in time O(n), which in total yields time O(k* · n^5). This is the algorithm from [22].

In order to obtain a speedup, we split the above procedure into two types: j' = j, i.e., the walks "coming from the left", and j' < j, i.e., the walks "coming from the bottom". For the first type, it can be seen that the simple algorithm computes their contribution to the output in time O(k* · n^4). Moreover, it is easy to bring down this running time to O(1) per table entry, by maintaining a certain minimum.

We show how to handle the second type in total time O(n^3). This is the bulk of effort going into our new algorithm. Here, the main observation is that the particular values of DP(k−1, i', j') are irrelevant, and in particular we only need to store for each i', j' the smallest k' such that DP(k', i', j') ≠ ∞. Using this observation, and further massaging the problem, we arrive at the following subproblem that we call Cell Reachability: We are given n squares (or cells) numbered 1, ..., n and stacked on top of each other. Between cell j and cell j+1 there is a passage, which is an interval on their common boundary through which we can pass from j to j+1.
Finally, we are given an integral entry-cost λ_j for each cell j. The goal is to compute, for each cell j, its exit-cost μ_j, defined as the minimal entry-cost λ_{j'}, j' < j, such that we can walk from cell j' to cell j through the contiguous passages in a monotone fashion (i.e., the points at which we cross a passage are monotonically non-decreasing). See Figure 4 for an illustration of this problem.

To solve Cell Reachability, we determine for each cell j and cost k the leftmost point t_j(k) on the passage from cell j−1 to cell j at which we can arrive from some cell j' < j with entry-cost at most k (using a monotone path). Among the sequence t_j(1), t_j(2), ... we only need to store the break-points, i.e., the costs k with t_j(k) < t_j(k−1). This representation can be maintained in amortized time O(1) per cell j. This yields an O(n)-time solution to Cell Reachability, which translates to an O(n^3)-time solution to Global-Fréchet simplification.

Conditional lower bound. Let us first briefly sketch the previous conditional lower bound by Buchin et al. [7]. Given a 2-OV instance on vectors A, B ⊆ {0,1}^d, they construct corresponding point sets Ã, B̃ ⊂ R^{d'} (for some d' = O(d)), forming two clusters that are very far apart from each other. They also add a start- and an endpoint, which can be chosen far away from these clusters (in a new direction). Near the midpoint between Ã and B̃, another set of points C̃ is constructed. The final curve then starts in the startpoint, walks through all points in Ã, then through all points in C̃, then through all points in B̃, and ends in the endpoint. This setup ensures that any reasonable size-4 simplification must consist of the startpoint, one point ã ∈ Ã, one point b̃ ∈ B̃, and the endpoint. All points in Ã are close to ã, so they are immediately close to the simplification, and similarly for B̃. Thus, the constraints are in the points C̃. Buchin et al.
[7] construct C̃ such that it contains one point for each dimension ℓ ∈ [d], which "checks" that the vectors corresponding to the chosen points ã, b̃ are orthogonal in dimension ℓ, i.e., one of a or b has a 0 in dimension ℓ.

We instead want to reduce from ∀∀∃-OV, so we are given an instance A, B, C and want to know whether for all a ∈ A, b ∈ B there exists c ∈ C such that a, b, c are orthogonal. In our adapted setup, the set C̃ is in one-to-one correspondence with the set of vectors C. That is, choosing a size-4 simplification implements an existential quantifier over a ∈ A, b ∈ B, and the constraint that all c̃ ∈ C̃ are close to the line segment from ã to b̃ implements a universal quantifier over c ∈ C. Naturally, we want the distance from c̃ to the line segment ã b̃ to be large if a, b, c are orthogonal, and to be small otherwise. This simulates the negation of ∀∀∃-OV, so any curve simplification algorithm can be turned into an algorithm for ∀∀∃-OV.

The restriction p ∈ [1,∞) with p ≠ 2 in Theorem 1.2 already is a hint that the specific construction of points is subtle. Indeed, let us sketch one critical issue in the following. We want the points C̃ to lie in the middle between Ã and B̃, which essentially means that we want to consider the distance from (ã + b̃)/2 to c̃. Now consider just a single dimension. Then our task boils down to constructing points a_0, a_1 and b_0, b_1 and c_0, c_1, corresponding to the bits in this dimension, such that ‖(a_i + b_j)/2 − c_k‖_p = β_1 if i = j = k = 1 and β_2 otherwise, with β_1 < β_2.
Writing a'_i = a_i/2 and b'_j = b_j/2, for p = 2 we can simplify

  ‖a'_i + b'_j − c_k‖^2 = Σ_{ℓ=1}^{d'} (a'_i[ℓ] + b'_j[ℓ] − c_k[ℓ])^2
    = Σ_{ℓ=1}^{d'} ( (a'_i[ℓ] + b'_j[ℓ])^2 + (a'_i[ℓ] − c_k[ℓ])^2 + (b'_j[ℓ] − c_k[ℓ])^2 − a'_i[ℓ]^2 − b'_j[ℓ]^2 − c_k[ℓ]^2 )
    = ‖a'_i + b'_j‖^2 + ‖a'_i − c_k‖^2 + ‖b'_j − c_k‖^2 − ‖a'_i‖^2 − ‖b'_j‖^2 − ‖c_k‖^2
    = f_1(i,j) + f_2(j,k) + f_3(i,k),    (1)

for some functions f_1, f_2, f_3: {0,1} × {0,1} → R. Note that by assumption this is equal to β_1 if i = j = k = 1 and β_2 otherwise, with β_1 < β_2. After a linear transformation, we thus obtain a representation of the form (1) for the function f(i,j,k) = i · j · k for i,j,k ∈ {0,1}. However, it can be checked that such a representation is impossible. Therefore, for p = 2 our outlined reduction cannot work, provably!

We nevertheless make this reduction work in the cases p ∈ [1,∞), p ≠ 2. The above argument shows that the construction is necessarily subtle. Indeed, constructing the right points requires some technical effort, see Section 4.

Further related work. Curve simplification has been studied in a variety of different formulations and settings, and it is well beyond the scope of this paper to give an overview. To list some examples, it was shown that the classic heuristic algorithm by Douglas and Peucker [12] can be implemented in time O(n log n) [18], and that the classic O(n^3)-time algorithm for Local-Hausdorff simplification by Imai and Iri [19] can be implemented in time O(n^2) in two dimensions [9, 21].
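Returning to the p = 2 obstruction above: the claimed impossibility of representing i · j · k in the form (1) can be checked mechanically. The alternating-sum functional F ↦ Σ_{i,j,k ∈ {0,1}} (−1)^{i+j+k} F(i,j,k) vanishes on every summand of the form f_1(i,j), f_2(j,k), or f_3(i,k), but equals −1 on i · j · k. A self-contained check (an illustrative sketch, not from the paper):

```python
from itertools import product

def alt_sum(F):
    """Alternating sum: sum over i,j,k in {0,1} of (-1)^(i+j+k) * F(i,j,k)."""
    return sum((-1) ** (i + j + k) * F(i, j, k)
               for i, j, k in product((0, 1), repeat=3))

# The indicator functions of the 4 values of (i,j) span all f1(i,j)-type
# terms (and similarly for f2(j,k) and f3(i,k)).  The alternating sum
# kills every one of them, because summing (-1)^m over the unused third
# variable m cancels.  By linearity it therefore kills f1 + f2 + f3.
for a, b in product((0, 1), repeat=2):
    assert alt_sum(lambda i, j, k: int((i, j) == (a, b))) == 0
    assert alt_sum(lambda i, j, k: int((j, k) == (a, b))) == 0
    assert alt_sum(lambda i, j, k: int((i, k) == (a, b))) == 0

# The target function i*j*k of the attempted p = 2 reduction is not killed:
assert alt_sum(lambda i, j, k: i * j * k) == -1
# Hence i*j*k != f1(i,j) + f2(j,k) + f3(i,k) for all choices of f1, f2, f3.
```

This is the same conclusion as the explicit 8-equation, 12-unknown linear system mentioned in the footnotes, reached without a solver.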
Further topics include curve simplification without self-intersections [11], Local-Hausdorff simplification with additional constraints on the angles between consecutive line segments [10], approximation algorithms [4], streaming algorithms [1], and the use of curve simplification in subdivision algorithms [17, 13, 14].

In Section 2 we formally define the problems studied in this paper. In Section 3 we present our new algorithm for Global-Fréchet simplification, and in Section 4 we show our conditional lower bounds. We further discuss the used hypothesis in Section 5.

2 Preliminaries

Our ambient space is the metric space (R^d, L_p), where the distance between points x, y ∈ R^d is the L_p-norm of their difference, i.e., ‖x − y‖_p = (Σ_{i=1}^d |x[i] − y[i]|^p)^{1/p}. A polyline P of size n is given by a sequence of points ⟨v_0, v_1, ..., v_n⟩, where each v_i lies in the ambient space. We associate with P the continuous curve that starts in v_0, walks along the line segments v_i v_{i+1} for i = 0, ..., n−1, and ends in v_n. We also interpret P as a function P: [0,n] → R^d, where P[i + λ] = (1−λ) v_i + λ v_{i+1} for any λ ∈ [0,1] and i ∈ {0, ..., n−1}. We use the notation P[t_1 .. t_2] to represent the sub-polyline of P between P[t_1] and P[t_2]. Formally, for any integers 0 ≤ i ≤ j ≤ n and reals λ_1 ∈ [0,1) and λ_2 ∈ (0,1],

  P[i + λ_1 .. j + λ_2] = ⟨(1−λ_1) v_i + λ_1 v_{i+1}, v_{i+1}, ..., v_j, (1−λ_2) v_j + λ_2 v_{j+1}⟩.

(Footnote: This holds for f_1(i,j) := ‖a'_i + b'_j‖^2 − ‖a'_i‖^2, f_2(j,k) := ‖b'_j − c_k‖^2 − ‖b'_j‖^2, and f_3(i,k) := ‖a'_i − c_k‖^2 − ‖c_k‖^2.)
(Footnote: For instance, we can express this situation by a linear system of equations in 12 variables (the 4 image values for each function f_i) and 8 equations (for the values of f on i,j,k ∈ {0,1}) and verify that it has no solution.)

A simplification of P is a curve Q = ⟨v_{i_0}, v_{i_1}, ..., v_{i_m}⟩ with 0 = i_0 < i_1 < ... < i_m = n. The size of the simplification Q is m+1. Our goal is to determine a simplification of given size k that "very closely" represents P. To this end we define two popular measures of similarity between the curves, namely the Fréchet and Hausdorff distances.

Definition 2.1 (Fréchet distance). The (continuous) Fréchet distance δ_F(P_1, P_2) between two curves P_1 and P_2 of size n and m, respectively, is

  δ_F(P_1, P_2) = inf_f max_{t ∈ [0,n]} ‖P_1[t] − P_2[f(t)]‖_p,

where f: [0,n] → [0,m] is monotone with f(0) = 0 and f(n) = m.

Alt and Godau [5] gave the characterization of the Fréchet distance in terms of the so-called free-space diagram.

Definition 2.2 (Free-Space). Given two curves P_1, P_2 and δ ≥ 0, the free-space FS_δ(P_1, P_2) ⊆ R^2 is the set {(x,y) ∈ [0,n] × [0,m] : ‖P_1[x] − P_2[y]‖_p ≤ δ}.

Consider the following decision problem. Given two curves P_1, P_2 of size n and m, respectively, and given δ ≥ 0, decide whether δ_F(P_1, P_2) ≤ δ. The answer to this question is yes if and only if (n,m) is reachable from (0,0) by a monotone path through FS_δ(P_1, P_2). This "reachability" problem is known to be solvable by a dynamic programming algorithm in time O(nm), and the standard algorithm for computing the Fréchet distance is an adaptation of this decision algorithm [5].
In particular, if either P_1 or P_2 is a line segment, then the decision problem can be solved in linear time.

The Hausdorff distance between curves ignores the ordering of the points along the curve. Intuitively, if we remove the monotonicity condition from the function f in Definition 2.1, we obtain the directed Hausdorff distance between the curves. Formally, it is defined as follows.

Definition 2.3 (Hausdorff distance). The (directed) Hausdorff distance δ_H(P_1, P_2) between curves P_1 and P_2 of size n and m, respectively, is

  δ_H(P_1, P_2) = max_{t_1 ∈ [0,n]} min_{t_2 ∈ [0,m]} ‖P_1[t_1] − P_2[t_2]‖_p.

In order to measure the "closeness" between a curve and its simplification, the above similarity measures can be applied either globally to the whole curve and its simplification, or locally to each simplified subcurve P[i_ℓ .. i_{ℓ+1}] and the segment v_{i_ℓ} v_{i_{ℓ+1}} to which it was simplified (taking the maximum over all ℓ). This gives rise to the following measures for curve simplification.

Definition 2.4 (Similarity for Curve Simplification). Given a curve P = ⟨v_0, v_1, ..., v_n⟩ and a simplification Q = ⟨v_{i_0}, v_{i_1}, ..., v_{i_m}⟩ of P, we define their

• Global-Hausdorff distance as δ_H(P, Q),
• Global-Fréchet distance as δ_F(P, Q),
• Local-Hausdorff distance as max_{0 ≤ ℓ ≤ m−1} δ_H(P[i_ℓ .. i_{ℓ+1}], v_{i_ℓ} v_{i_{ℓ+1}}), and
• Local-Fréchet distance as max_{0 ≤ ℓ ≤ m−1} δ_F(P[i_ℓ .. i_{ℓ+1}], v_{i_ℓ} v_{i_{ℓ+1}}).

(Footnote: It can be checked that in this expression directed and undirected Hausdorff distance have the same value, and so for Local-Hausdorff we can without loss of generality use the directed Hausdorff distance. For Global-Hausdorff this choice makes a difference, but we do not consider this problem in this paper.)
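For illustration, the linear-time decision for the special case used throughout this paper, where one of the two curves is a single line segment, can be sketched as follows (Euclidean norm only; an illustrative sketch, not the paper's implementation). The free space is a single row of convex cells, so it suffices to sweep over the vertical cell boundaries and propagate the lowest reachable segment parameter:

```python
import math

def free_interval(p, a, b, delta):
    """Parameter interval {y in [0,1] : |p - (a + y*(b-a))| <= delta} of
    the ball around p on segment ab; None if empty (Euclidean norm)."""
    d = [bi - ai for ai, bi in zip(a, b)]
    w = [ai - pi for ai, pi in zip(a, p)]
    A = sum(x * x for x in d)
    B = 2 * sum(x * y for x, y in zip(d, w))
    C = sum(x * x for x in w) - delta * delta
    if A == 0:  # degenerate segment a == b
        return (0.0, 1.0) if C <= 0 else None
    disc = B * B - 4 * A * C
    if disc < 0:
        return None
    r = math.sqrt(disc)
    lo, hi = max(0.0, (-B - r) / (2 * A)), min(1.0, (-B + r) / (2 * A))
    return (lo, hi) if lo <= hi else None

def frechet_segment_decision(P, a, b, delta):
    """Decide delta_F(P, segment ab) <= delta in time O(len(P)).
    On each vertical boundary of the free-space row we keep the lowest
    reachable y; monotonicity means it can never decrease, and convexity
    of each cell makes interval propagation sufficient."""
    I = free_interval(P[0], a, b, delta)
    if I is None or I[0] > 0:
        return False  # the start corner (0, 0) must be free
    lo = 0.0  # lowest reachable y on the current boundary
    for p in P[1:]:
        I = free_interval(p, a, b, delta)
        if I is None or I[1] < lo:
            return False  # boundary closed below the reachable region
        lo = max(lo, I[0])
    return I[1] >= 1.0  # the end corner (n, 1) must be reachable
```

For example, the polyline ⟨(0,0),(1,1),(2,0)⟩ is within Fréchet distance 1 of the segment from (0,0) to (2,0), but not within distance 0.5; and a curve that backtracks along the segment is rejected even though its Hausdorff distance is 0.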
Figure 1: Illustration of the proof of Lemma 3.2. There exists a monotone path from (0, t') to (1, t) in FS_δ(P, v_{i_{k−1}} v_{i_k}) (left). Since DP(k−1, i_{k−1}, j') ≤ t' ≤ t, there is a monotone path in FS_δ(P, v_{i_{k−1}} v_{i_k}) from (0, DP(k−1, i_{k−1}, j')) to (1, t) (right), obtained by moving from (0, DP(k−1, i_{k−1}, j')) to (0, t') and then following the existing monotone path from (0, t') to (1, t).

3 Algorithm

In this section we present an O(n^3)-time algorithm for curve simplification under the Global-Fréchet distance, i.e., we prove Theorem 1.1.

3.1 An O(kn^5) algorithm for Global-Fréchet simplification

We start by describing the previously best algorithm, by [22]. Let P be the polyline ⟨v_0, v_1, ..., v_n⟩. Let DP(k,i,j) represent the earliest reachable point on v_j v_{j+1} with a length-k simplification of the polyline P[0 .. i], i.e., DP(k,i,j) represents the smallest t such that P[t] lies on the line segment v_j v_{j+1} (i.e., j ≤ t ≤ j+1) and there is a simplification Q̃ of the polyline P[0 .. i] of size at most k such that δ_F(Q̃, P[0 .. t]) ≤ δ. If such a point does not exist then we set DP(k,i,j) = ∞. To solve Global-Fréchet simplification, we need to return the minimum k such that DP(k, n, n−1) ≠ ∞. Let P[t_{i,j}] and P[s_{i,j}] be the first and the last point, respectively, on the line segment v_j v_{j+1} such that ‖v_i − P[t_{i,j}]‖_p ≤ δ and ‖v_i − P[s_{i,j}]‖_p ≤ δ. Observe that if DP(k,i,j) ≠ ∞ then t_{i,j} ≤ DP(k,i,j) ≤ s_{i,j} for all k. Before moving on to the algorithm we make some simple observations.

Observation 3.1. If DP(k,i,j) = ∞ then DP(k',i,j) = ∞ for all k' < k.
If DP(k,i,j) = t_{i,j} then DP(k',i,j) = t_{i,j} for all k' ≥ k.

Proof. If k' < k, then the minimization in DP(k,i,j) is over a superset compared to DP(k',i,j). Thus DP(k',i,j) ≥ DP(k,i,j) = ∞, and hence DP(k',i,j) = ∞. Similarly, when k' ≥ k, the minimization in DP(k',i,j) is over a superset compared to DP(k,i,j). Thus we have t_{i,j} ≤ DP(k',i,j) ≤ DP(k,i,j), so DP(k,i,j) = t_{i,j} implies DP(k',i,j) = t_{i,j} for all k' ≥ k.

We will crucially make use of the following characterization of the DP table entries.

Lemma 3.2. DP(k,i,j) is the minimal t ∈ [t_{i,j}, s_{i,j}] such that, for some i' < i and j' ≤ j, we have DP(k−1, i', j') ≠ ∞ and δ_F(P[DP(k−1, i', j') .. t], v_{i'} v_i) ≤ δ. If no such t exists then DP(k,i,j) = ∞.

Proof. Let t be minimal in [t_{i,j}, s_{i,j}] such that DP(k−1, i', j') ≠ ∞ and δ_F(P[DP(k−1, i', j') .. t], v_{i'} v_i) ≤ δ for some i' < i and j' ≤ j. Since in particular DP(k−1, i', j') ≠ ∞, for one direction we note that there exists a simplification Q̂ of the polyline P[0 .. i'] of size k−1 such that δ_F(Q̂, P[0 .. DP(k−1, i', j')]) ≤ δ. By appending v_i to Q̂ we obtain a simplification Q̃ of the polyline P[0 .. i] such that δ_F(Q̃, P[0 .. t]) ≤ max(δ_F(Q̂, P[0 .. DP(k−1, i', j')]), δ_F(P[DP(k−1, i', j') .. t], v_{i'} v_i)) ≤ δ. It follows that DP(k,i,j) ≤ t. In particular, if DP(k,i,j) = ∞ then no such t exists.

For the other direction, let t' be such that DP(k,i,j) = t'. Assume t' ≠ ∞. Then there exists a simplification Q̃ = ⟨v_{i_0}, v_{i_1}, ...,
v_{i_{k−1}}, v_{i_k}⟩ of the polyline P[0 .. i] such that δ_F(Q̃, P[0 .. t']) ≤ δ. Such a Q̃ exists if and only if there is a simplification Q̂ of size k−1 of P̂ = P[0 .. i_{k−1}] and a value t̂ ≤ t' such that

(1) δ_F(Q̂, P[0 .. t̂]) ≤ δ, and
(2) δ_F(P[t̂ .. t'], v_{i_{k−1}} v_{i_k}) ≤ δ.

Let ĵ be such that ĵ ≤ t̂ ≤ ĵ+1. Observe that (1) implies that DP(k−1, i_{k−1}, ĵ) ≠ ∞. Also t_{i_{k−1}, ĵ} ≤ DP(k−1, i_{k−1}, ĵ) ≤ t̂ ≤ s_{i_{k−1}, ĵ}. Now we show that δ_F(P[t̂ .. t'], v_{i_{k−1}} v_{i_k}) ≤ δ implies that δ_F(P[DP(k−1, i_{k−1}, ĵ) .. t'], v_{i_{k−1}} v_{i_k}) ≤ δ. This is obvious from inspecting FS_δ(P, v_{i_{k−1}} v_{i_k}) (see Figure 1): there exists a monotone path in FS_δ(P, v_{i_{k−1}} v_{i_k}) that starts from (0, DP(k−1, i_{k−1}, ĵ)), moves to (0, t̂), and then follows the monotone path from (0, t̂) to (1, t') that exists by (2). Therefore t ≤ t' = DP(k,i,j). Combining the two inequalities, we have that DP(k,i,j) = t.

A dynamic programming algorithm follows more or less directly from Lemma 3.2. Note that for fixed i' < i and j' ≤ j such that DP(k−1, i', j') ≠ ∞, we can determine the minimal t such that (1,t) is reachable from (0, DP(k−1, i', j')) by a monotone path in FS_δ(P, v_{i'} v_i) in O(n) time. This follows from the standard algorithm for the decision version of the Fréchet distance between two polygonal curves of length at most n (in particular, here one of the curves is of length 1). To determine DP(k,i,j) we enumerate over all i' < i and j' ≤ j such that DP(k−1, i', j') ≠ ∞ and determine the minimum t that is reachable. The running time to determine DP(k,i,j) is thus O(n^3), by the loops for i', j' and the Fréchet distance check.
Since there are O(kn^2) DP-cells to fill, the algorithm runs in total time O(kn^5) and uses space O(kn^2).

3.2 An O(n^3) algorithm for Global-Fréchet simplification

Now we improve the running time by a more careful understanding of the monotone paths through FS_δ(P, v_{i'} v_i) to (1, DP(k,i,j)), for fixed i, j and i'. Let fbox_j denote the intersection of the free-space FS_δ(P, v_{i'} v_i) with the square with corner vertices (0,j) and (1,j+1). The following fact will be useful later.

Fact 3.3. fbox_j is convex for all j ∈ [n−1].

Proof. Alt and Godau [5] showed that fbox_j is an affine transformation of the unit ball, and this is convex for any L_p norm.

Furthermore, let ver_j denote the free space along the vertical line segment with endpoints (0,j) and (0,j+1), and let hor_j denote the free space along the horizontal line segment from (0,j) to (1,j), in the free space FS_δ(P, v_{i'} v_i). We consider the point (0,j) to belong to ver_j, but not to hor_j, to avoid certain corner cases. We split the monotone paths from (0, DP(k−1, i', j')), for i' < i and j' ≤ j, to (1, DP(k,i,j)) in FS_δ(P, v_{i'} v_i) into two categories: the ones that intersect ver_j and the ones that intersect hor_j. We first look at the monotone paths that intersect ver_j. Observe that if the monotone path intersects ver_j then j' = j. Let D̃P_1(k,i,j) = min_{i' < i} DP(k−1, i', j).

Figure 2: Illustration of the proof of Observation 3.4. For some î < i, t ∈ [t_{i,j}, s_{i,j}] is minimal such that (1,t) is reachable from (0, DP(k−1, î, j)) by a monotone path in fbox_j. If DP(k−1, î, j) > s_{i,j} (left) then no such t exists. If t_{i,j} ≤ DP(k−1, î, j) ≤ s_{i,j} (middle) then t = DP(k−1, î, j).
If DP(k−1, î, j) < t_{i,j} (right) then t = t_{i,j}.

Hereafter we define DP_1(k,i,j) = max(D̃P_1(k,i,j), t_{i,j}) if D̃P_1(k,i,j) ≤ s_{i,j}, and DP_1(k,i,j) = ∞ otherwise.

We show a characterization of DP_1 similar to the characterization of DP in Lemma 3.2, thus establishing that DP_1 correctly handles all paths intersecting ver_j.

Observation 3.4. DP_1(k,i,j) is the minimal t ∈ [t_{i,j}, s_{i,j}] such that DP(k−1, i', j) ≠ ∞ and δ_F(P[DP(k−1, i', j) .. t], v_{i'} v_i) ≤ δ for some i' < i. If no such t exists then DP_1(k,i,j) = ∞.

Proof. Fix î < i. First note that if there is a monotone path connecting (0, DP(k−1, î, j)) to (1,t), then t ≥ DP(k−1, î, j). Now consider fbox_j in the free-space FS_δ(P, v_î v_i). As illustrated in Figure 2, there are three cases:

• If DP(k−1, î, j) > s_{i,j}, then there is no monotone path from (0, DP(k−1, î, j)) to (1,t) for any t ∈ [t_{i,j}, s_{i,j}].

• If t_{i,j} ≤ DP(k−1, î, j) ≤ s_{i,j}: as mentioned at the beginning of the proof, t ≥ DP(k−1, î, j). Since fbox_j is convex, the line segment connecting (0, DP(k−1, î, j)) and (1, DP(k−1, î, j)) lies inside fbox_j and hence inside FS_δ(P, v_î v_i). Thus the smallest t ∈ [t_{i,j}, s_{i,j}] such that there is a monotone path from (0, DP(k−1, î, j)) to (1,t) in FS_δ(P, v_î v_i) is DP(k−1, î, j).

• If DP(k−1, î, j) < t_{i,j}: again, since fbox_j is convex, the line segment connecting (0, DP(k−1, î, j)) and (1, t_{i,j}) lies inside fbox_j and thus inside FS_δ(P, v_î v_i). Thus the smallest t ∈ [t_{i,j}, s_{i,j}] such that there is a monotone path from (0, DP(k−1, î, j)) to (1,t) in FS_δ(P, v_î v_i) is t_{i,j}.

Therefore, for any î < i, if DP(k−1, î, j) > s_{i,j} then there exists no t ∈ [t_{i,j}, s_{i,j}] such that δ_F(P[DP(k−1, î, j) .. t], v_î v_i) ≤ δ.
Similarly, if DP(k−1, î, j) ≤ s_{i,j}, then the minimal t ∈ [t_{i,j}, s_{i,j}] such that δ_F(P[DP(k−1, î, j) .. t], v_î v_i) ≤ δ is max(DP(k−1, î, j), t_{i,j}).

Now let t ∈ [t_{i,j}, s_{i,j}] be minimal such that DP(k−1, i', j) ≠ ∞ and δ_F(P[DP(k−1, i', j) .. t], v_{i'} v_i) ≤ δ for some i' < i. It follows that if D̃P_1(k,i,j) = min_{i' < i} DP(k−1, i', j) ≤ s_{i,j}, then t = max(D̃P_1(k,i,j), t_{i,j}) = DP_1(k,i,j); and if D̃P_1(k,i,j) > s_{i,j}, then DP_1(k,i,j) = ∞ and no such t exists.

We now look at the monotone paths that intersect hor_j. Observe that if the monotone path intersects hor_j then j' < j. Along this line, we define D̃P_2(k,i,j) = 1 if there exist some i' < i and j' < j such that DP(k−1, i', j') ≠ ∞ and there exists a monotone path from (0, DP(k−1, i', j')) to (1, t_{i,j}) in the free-space FS_δ(P, v_{i'} v_i); otherwise we set D̃P_2(k,i,j) = 0. Hereafter we define DP_2(k,i,j) = t_{i,j} if D̃P_2(k,i,j) = 1, and DP_2(k,i,j) = ∞ otherwise.

Figure 3: Illustration of the proof of Observation 3.5. For t_{i,j} ≤ t ≤ s_{i,j}, there is a monotone path from (0, DP(k−1, i', j')) to (1, t) in the free-space FS_δ(P, v_{i'} v_i) (left), for some i' < i and j' < j, that intersects hor_j at z. Then there is also a monotone path from (0, DP(k−1, i', j')) to (1, t_{i,j}) (right) in the free-space FS_δ(P, v_{i'} v_i), following the same monotone path from (0, DP(k−1, i', j')) to z and then going from z to (1, t_{i,j}).

We show a characterization of DP_2 similar to our characterization of DP in Lemma 3.2, thus establishing that DP_2 correctly handles all paths intersecting hor_j.

Observation 3.5.
DP ( k, i, j ) is the minimal t ∈ [ t i,j , s i,j ] such that DP( k − , i (cid:48) , j (cid:48) ) (cid:54) = ∞ and δ F ( P [DP( k − , i (cid:48) , j (cid:48) ) . . . t ] , v i (cid:48) v i ) ≤ δ for some i (cid:48) < i and j (cid:48) < j . If no such t exists then DP ( k, i, j ) = ∞ .Proof. Let t ∈ [ t i,j , s i,j ] be minimal such that DP( k − , i (cid:48) , j (cid:48) ) (cid:54) = ∞ and δ F ( P [DP( k − , i (cid:48) , j (cid:48) ) . . . t ] ,v i (cid:48) v i ) ≤ δ for some i (cid:48) < i and j (cid:48) < j . If such a t exists then DP ( k, i, j ) = 1. Observe that for any i (cid:48) < i and j (cid:48) < j , if there is a monotone path from (0 , DP( k − , i (cid:48) , j (cid:48) )) to (1 , t ) in FS δ ( P, v i (cid:48) v i ),then the path intersects hor j (at say z ). Since fbox j is convex, the line segment connecting z and (1 , t i,j ) lies inside fbox j and hence inside FS δ ( P, v i (cid:48) v i ). Thus there is a monotone path from(0 , DP( k − , i (cid:48) , j (cid:48) )) to (1 , t i,j ) in FS δ ( P, v i (cid:48) v i ) following the monotone path from (0 , DP( k − , i (cid:48) , j (cid:48) )) to z and then from z to (1 , t i,j ) (see Figure 3). Since t ≥ t i,j and is minimal, wehave t = t i,j = DP ( k, i, j ). Similarly if such a t does not exist then DP ( k, i, j ) = 0 andDP ( k, i, j ) = ∞ . Lemma 3.6. DP( k, i, j ) = min(DP ( k, i, j ) , DP ( k, i, j )) .Proof. Follows directly from Observations 3.2, 3.4, and 3.5.In particular this yields a dynamic programming formulation for DP( k, i, j ), since bothDP ( k, i, j ) and DP ( k, i, j ) depends on values of DP( k (cid:48) , i (cid:48) , j (cid:48) ) with k (cid:48) < k , i (cid:48) < i and j (cid:48) ≤ j .We define κ ( i, j ) as the minimal k such that DP( k, i, j ) (cid:54) = ∞ . Similarly we define κ ( i, j )and κ ( i, j ) as the minimal k such that DP ( k, i, j ) (cid:54) = ∞ and DP ( k, i, j ) (cid:54) = ∞ respectively.10ote that κ ( i, j ) = min( κ ( i, j ) , κ ( i, j )) (by Lemma 3.6). 
Also note that both κ ( i, j ) and κ ( i, j ) depends only on the values of DP( k (cid:48) , i (cid:48) , j (cid:48) ) with k (cid:48) < k , i (cid:48) < i and j (cid:48) ≤ j .With these preparations can now present our dynamic programming algorithm, except forone subroutine κ -subroutine ( i ) that we describe in Section 3.3. In particular, for any i , κ -subroutine( i ) determines κ ( i, j ) for all j ∈ [ n ] in time T ( n ) only using the values of κ ( i (cid:48) , j ) forall i (cid:48) < i and all 0 ≤ j ≤ n − 1. Now we show how to update DP ( k, i, j ). Observe that for any i , j and k we can update DP ( k, i, j ) from DP ( k, i − , j ) and DP( k − , i − , j ) as DP ( k, i, j ) =min(DP ( k, i − , j ) , DP( k − , i − , j )). Thereafter we can update DP ( k, i, j ) by using theformulation in Lemma 3.4 and update κ ( i, j ) to the minimal k such that DP ( k, i, j ) (cid:54) = ∞ .This shows that we determine DP ( k, i, j ) and κ ( i, j ) in O (1) and O ( n ) time respectively. Nowwe show how to update DP ( k, i, j ). Notice that DP ( k, i, j ) = t i,j if and only if k ≥ κ ( i, j )and DP ( k, i, j ) = ∞ otherwise. Also, we can set κ ( i, j ) as min( κ ( i, j ) , κ ( i, j )). Hence, wecan determine DP ( k, i, j ) and κ ( i, j ) in O (1) time. Henceforth we can also update DP( k, i, j )by the formulation in Lemma 3.6 in O (1) time. 
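Per cell, the updates above are constant-time once κ_3 is available. The following minimal Python sketch uses our own bookkeeping names (not the paper's notation): `INF` stands for ∞, and `DP2p` holds the rolling prefix minimum DP′_2.

```python
INF = float("inf")

def update_cell(k, i, j, DP, DP2p, t, s, kappa3):
    """One (k, i, j) update following the rules above.

    DP2p[k][j] maintains the minimum over i' < i of DP[k-1][i'][j],
    i.e. the prefix minimum behind DP_2 (hypothetical array layout).
    """
    # Extend the prefix minimum by the newly available value DP(k-1, i-1, j).
    DP2p[k][j] = min(DP2p[k][j], DP[k - 1][i - 1][j])
    # DP_2: monotone paths entering fbox_j through ver_j.
    dp2 = max(DP2p[k][j], t[i][j]) if DP2p[k][j] <= s[i][j] else INF
    # DP_3: monotone paths entering fbox_j through hor_j, via kappa_3(i, j).
    dp3 = t[i][j] if k >= kappa3[i][j] else INF
    # Lemma 3.6: DP = min(DP_2, DP_3).
    return min(dp2, dp3)
```

Sweeping i from 1 to n, with j and k in inner loops, and storing the returned values as DP(k, i, j) reproduces the O(1)-per-cell behaviour of Algorithm 1 below.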
Algorithm 1: Solving curve simplification under the Global-Fréchet distance
1: Determine t_{i,j} and s_{i,j} for all 0 ≤ i ≤ n and 0 ≤ j ≤ n−1
2: Determine the largest j* such that ‖v_0 − v_j‖_p ≤ δ for all j ≤ j*
3: Set DP_2(k, 0, j) and DP(k, 0, j) to 0 for all j ≤ j* and to ∞ otherwise (for all k ∈ [n+1])
4: Set κ(0, j) to 1 for all j ≤ j* and to ∞ otherwise
5: Set DP(0, i, j) to ∞ for all i, j ∈ [n]
6: for i = 1 to n do
7:   Determine κ_3(i, j) for all 0 ≤ j ≤ n−1 using κ_3-subroutine(i)
8:   for j = 0 to n−1 do
9:     for k = 1 to n+1 do
10:      Set DP′_2(k, i, j) to min(DP′_2(k, i−1, j), DP(k−1, i−1, j))
11:      Set DP_2(k, i, j) to max(DP′_2(k, i, j), t_{i,j}) if DP′_2(k, i, j) ≤ s_{i,j} and to ∞ otherwise
12:     Set κ_2(i, j) to the smallest k such that DP_2(k, i, j) ≠ ∞
13:     Set κ(i, j) to min(κ_2(i, j), κ_3(i, j))
14:     for k = 1 to n+1 do
15:      Set DP_3(k, i, j) to t_{i,j} if k ≥ κ_3(i, j) and to ∞ otherwise
16:      Set DP(k, i, j) to min(DP_2(k, i, j), DP_3(k, i, j))
17: Return κ(n, n−1)

We spend O(n·T(n)) time on determining κ_3(i, j) for all i, j. The time taken to update κ_2(i, j) and κ(i, j) is O(n) and O(1), respectively. Each DP cell is updated in O(1) time. Since there are O(n^2) κ cells and O(n^3) DP cells, the total running time of our algorithm is O(n^3 + n·T(n)).

κ_3-subroutine(i)

In this subsection we show how to implement step 7 of Algorithm 1 in time T(n) = O(n^2). In total we then obtain time O(n^3) for solving Global-Fréchet simplification. We introduce an auxiliary problem that we call Cell Reachability. We shall see later that an O(n) time solution to this problem ensures that κ_3-subroutine(i) can be implemented in time T(n) = O(n^2).

Definition 3.7. In an instance of the Cell Reachability problem, we are given
• a set of n cells; each cell j with 1 ≤ j ≤ n is a unit square with corner points (0, j) and (1, j+1); we say that cells j and j+1 are consecutive;
• an integral entry-cost λ_j > 0 for every cell j;
• a set of n−1 passages between consecutive cells, where the passage p_j is the horizontal line segment with endpoints (a_j, j) and (b_j, j), with b_j > a_j.

We say that a cell j is reachable from a cell j′ with j′ < j if and only if there exist x_{j′+1} ≤ x_{j′+2} ≤ ... ≤ x_j such that x_k ∈ [a_k, b_k] for every j′ < k ≤ j. Intuitively, cell j is reachable from cell j′ if and only if there is a monotone path through the passages from cell j′ to cell j. We define the exit-cost µ_j of a cell j as the minimal λ_{j′} such that j is reachable from cell j′, j′ < j. The goal of the problem is to determine the sequence ⟨µ_1, µ_2, ..., µ_n⟩. See Figure 4 for an illustration.

Figure 4: Illustrating an instance of Cell Reachability. The red horizontal line segments between the cells indicate the passages. Note that cell 4 is only reachable from cells 2 and 3; therefore µ_4 = min(λ_2, λ_3) = min(4, 8) = 4.

We use a more refined notion of reachability. For any cells j and j′ < j, we define the first reachable point frp(j, j′) on cell j from cell j′ as the minimal t such that there exist x_{j′+1} ≤ x_{j′+2} ≤ ... ≤ x_j with x_k ∈ [a_k, b_k] for every j′ < k ≤ j and x_j = t; we set frp(j, j′) = ∞ if there exists no such t. Let t_j(k) be the first reachable point on cell j from any cell j′ with entry-cost at most k, i.e., t_j(k) = min_{j′ < j : λ_{j′} ≤ k} frp(j, j′).

Observation 3.8. µ_j is the minimal k such that t_j(k) ≠ ∞.

Proof. t_j(k) ≠ ∞ if and only if cell j is reachable from some cell j′ with λ_{j′} ≤ k, and µ_j is the minimal such k.

Observation 3.9. We have t_j(k+1) ≤ t_j(k) for any j ∈ [n] and k ≥ 0.

Proof. The minimum in the definition of t_j(k+1) is taken over a superset compared to t_j(k).

Figure 5: Illustration of the proof of Lemma 3.10: determining the function t_j(·) from a_j, b_j, t_{j−1}(·), and λ_{j−1}. In the depicted example λ_{j−1} = 8. For all k ≥ λ_{j−1} = 8 we have t_j(k) = a_j; for k = 2 and k = 4 we have k < λ_{j−1} and t_{j−1}(k) ≤ b_j, implying t_j(k) = t_{j−1}(k); for k = 1 we have t_{j−1}(k) > b_j, implying t_j(k) = ∞.

Lemma 3.10. For any j ∈ [n] and k ≥ 0 we have

    t_j(k) = a_j          if k ≥ λ_{j−1},
             a_j          if k < λ_{j−1} and t_{j−1}(k) ≤ a_j,
             t_{j−1}(k)   if k < λ_{j−1} and t_{j−1}(k) ∈ (a_j, b_j],
             ∞            if k < λ_{j−1} and t_{j−1}(k) > b_j.

Proof. See Figure 5 for an illustration. Note that frp(j, j−1) = a_j. Therefore, if λ_{j−1} ≤ k, then t_j(k) = min_{j′ < j : λ_{j′} ≤ k} frp(j, j′) ≤ frp(j, j−1) = a_j. Since t_j(k) ≥ a_j, we conclude that t_j(k) = a_j.

Now we discuss the cases where k < λ_{j−1}. Let t_j(k) = frp(j, j′). Since λ_{j−1} > k, we have j′ < j−1. Therefore there exist x_{j′+1} ≤ ... ≤ x_{j−1} ≤ x_j such that x_ℓ ∈ [a_ℓ, b_ℓ] for every j′ < ℓ ≤ j, with x_j = t_j(k). Note that t_{j−1}(k) ≤ x_{j−1} ≤ x_j = t_j(k). Thus t_j(k) ≥ max(t_{j−1}(k), a_j). In particular, if t_{j−1}(k) > b_j, then t_j(k) = ∞. Now we look into the case t_{j−1}(k) ≤ b_j. Observe that if t_{j−1}(k) ≤ b_j then there exist ĵ < j−1 with λ_ĵ ≤ k and x_{ĵ+1} ≤ ... ≤ x_{j−1} = t_{j−1}(k) such that x_ℓ ∈ [a_ℓ, b_ℓ] for every ĵ < ℓ ≤ j−1. Setting x_j = max(a_j, t_{j−1}(k)) yields x_{ĵ+1} ≤ ... ≤ x_{j−1} ≤ x_j with x_ℓ ∈ [a_ℓ, b_ℓ] for every ĵ < ℓ ≤ j, and hence t_j(k) ≤ max(a_j, t_{j−1}(k)). Combining the two inequalities, we get t_j(k) = max(a_j, t_{j−1}(k)) when t_{j−1}(k) ≤ b_j.

Lemma 3.10 yields a recursive definition of t_j(·). To solve an instance of Cell Reachability in O(n) time, it suffices to determine t_j(·) from t_{j−1}(·), and µ_j from t_j(·), in O(1) amortized time.
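The case distinction of Lemma 3.10 transcribes directly into code. A sketch for a single cost value k follows (hypothetical names; INF stands for ∞):

```python
INF = float("inf")

def t_next(k, lam_prev, t_prev, a_j, b_j):
    """Compute t_j(k) from t_{j-1}(k), per the four cases of Lemma 3.10.

    lam_prev is the entry-cost of cell j-1; [a_j, b_j] is the passage into cell j.
    """
    if k >= lam_prev:   # cell j-1 itself is affordable: enter cell j at a_j
        return a_j
    if t_prev <= a_j:   # arriving below the passage start: clipped up to a_j
        return a_j
    if t_prev <= b_j:   # arriving inside the passage: position carries over
        return t_prev
    return INF          # passage already missed: cell j not reachable at cost k
```

Evaluating this for every k separately would be too slow; the list representation developed next applies the same four cases to whole runs of k at once, which is how the O(1) amortized bound is achieved.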
To this end, let S_j = {k ≥ 1 | t_j(k) < t_j(k−1)}, and let L_j be the doubly linked list storing the pairs (k, t_j(k)) for every k ∈ S_j, sorted in descending order of k (or, equivalently, in increasing order of t_j(k)). To develop some intuition, note that for any k and j, if t_j(k) = t_j(k−1), then every cell j″ ≥ j that is reachable from a cell ĵ ≤ j with entry-cost at most k is also reachable from some cell with entry-cost at most k−1. Therefore it suffices to focus on the set S_j and the corresponding values t_j(k). In particular, we can determine µ_j from S_j as follows.

Lemma 3.11. The minimal positive k in S_j is equal to µ_j.

Proof. Since t_j(0) = ∞, the minimal positive k in S_j is the minimal k such that t_j(k) ≠ ∞. By Observation 3.8 this is equal to µ_j.

We now outline a simple algorithm to determine L_j from L_{j−1}; again see Figure 5 for an illustration. The algorithm first determines k_left, the minimal k such that t_j(k) = a_j, by moving the head of the list L_{j−1} to the right as long as k ≥ λ_{j−1} or t_{j−1}(k) ≤ a_j (correctness follows directly from Lemma 3.10). Observe that t_j(k) = t_j(k_left) = a_j for all k ≥ k_left. Next it determines k_right, the minimal k such that t_j(k) ≤ b_j, by moving the tail of L_{j−1} to the minimal k such that t_{j−1}(k) ≤ b_j (again correctness follows from Lemma 3.10); note that at this point we have already inserted (k_left, a_j), so k_right is guaranteed to exist. Observe that t_j(k) = t_j(0) = ∞ for all k < k_right. Thus we have µ_j = k_right. We are left with updating L_j for pairs with k ∈ (k_left, k_right). Note that for k ∈ (k_left, k_right) we have t_j(k) = t_{j−1}(k) (by Lemma 3.10), and therefore t_j(k) = t_j(k−1) if and only if t_{j−1}(k) = t_{j−1}(k−1). Hence the sublist of L_j corresponding to the values k ∈ (k_left, k_right) is the same as the sublist of L_{j−1} corresponding to the values k ∈ (k_left, k_right). Finally, the algorithm appends a new node storing (0, ∞) to L_j (since t_j(0) = ∞).

Algorithm 2: Determining L_j from L_{j−1}
1: L ← L_{j−1}
2: k_left ← λ_{j−1}
3: while k ≥ λ_{j−1} or t ≤ a_j, where (k, t) = L.front() do
4:   k_left ← min(k_left, k)
5:   L.popfront()
6: L.pushfront((k_left, a_j))
7: while t > b_j, where (k, t) = L.back() do
8:   L.popback()
9: Set µ_j = k, where (k, t) = L.back()
10: L.pushback((0, ∞))
11: L_j ← L

The number of operations performed to determine L_j from L_{j−1}, and µ_j from L_j, is O(1 + d), where d is the number of pairs deleted from L_{j−1}. Since every deleted pair was previously inserted, we can pay for the deletions by charging an extra token to each insertion. Note that there are two insertions per update. Hence the total time taken to determine L_j and µ_j for all j ∈ [n] is O(n).

Theorem 3.12. Cell Reachability can be solved in O(n) time.

κ_3-subroutine(i) using Cell Reachability

Recall the definition of κ_3(·,·) and our current goal. For a fixed i′ < i, let κ_3(i, j, i′) be the minimal k such that for some j′ < j we have DP(k−1, i′, j′) ≠ ∞ and δ_F(P[DP(k−1, i′, j′) .. t_{i,j}], v_{i′}v_i) ≤ δ. Note that κ_3(i, j) = min_{i′ < i} κ_3(i, j, i′). Thus it suffices to determine, for any fixed i′ < i, the values κ_3(i, j, i′) for all 0 ≤ j ≤ n−1 in O(n) time.

Observation 3.13. Let the line segment with endpoints (a_j, j) and (b_j, j) denote the free-space on hor_j in FS_δ(P, v_{i′}v_i), where i′ < i. Then for any j′ < j there is a monotone path from (0, DP(κ(i′, j′), i′, j′)) to (1, t_{i,j}) in the free-space FS_δ(P, v_{i′}v_i) if and only if there exist x_{j′+1} ≤ x_{j′+2} ≤ ... ≤ x_j with x_k ∈ [a_k, b_k] for all j′ < k ≤ j.

Proof. The "only if" direction is straightforward. The monotone path from (0, DP(κ(i′, j′), i′, j′)) to (1, t_{i,j}) in FS_δ(P, v_{i′}v_i) intersects hor_k for all j′ < k ≤ j. Let x_k be the intersection of the path with hor_k for j′ < k ≤ j. Since the path lies inside the free-space FS_δ(P, v_{i′}v_i), we have x_k ∈ [a_k, b_k] for every j′ < k ≤ j. Since the path is monotone, we have x_{j′+1} ≤ x_{j′+2} ≤ ... ≤ x_j.

Figure 6: Illustration of the proof of Observation 3.14. For any i′ < i, j′ < j and any k, there is a monotone path from (0, DP(k, i′, j′)) to (1, t_{i,j}) in FS_δ(P, v_{i′}v_i) (left) that intersects hor_{j′+1} at z. Then there is a monotone path from (0, DP(κ(i′, j′), i′, j′)) to (1, t_{i,j}) in FS_δ(P, v_{i′}v_i) (right), obtained by walking from (0, DP(κ(i′, j′), i′, j′)) to z and then following the existing monotone path from z to (1, t_{i,j}).

Now we show the "if" direction. Assume there exist x_{j′+1} ≤ x_{j′+2} ≤ ... ≤ x_j with x_k ∈ [a_k, b_k] for every j′ < k ≤ j. Since fbox_k is convex for every j′ < k < j, the line segment with endpoints (x_k, k) and (x_{k+1}, k+1) lies inside fbox_k. By the same convexity argument, the line segment with endpoints (0, DP(κ(i′, j′), i′, j′)) and (x_{j′+1}, j′+1) lies inside fbox_{j′}, and the line segment with endpoints (x_j, j) and (1, t_{i,j}) lies inside fbox_j. Therefore the monotone path ⟨(0, DP(κ(i′, j′), i′, j′)), (x_{j′+1}, j′+1), (x_{j′+2}, j′+2), ..., (x_j, j), (1, t_{i,j})⟩ lies inside the free-space FS_δ(P, v_{i′}v_i) and connects (0, DP(κ(i′, j′), i′, j′)) to (1, t_{i,j}).

Observation 3.14. For any i′ < i, if there is a monotone path from (0, DP(k, i′, j′)) to (1, t_{i,j}) in the free-space FS_δ(P, v_{i′}v_i) intersecting hor_j, then there is also a monotone path from (0, DP(κ(i′, j′), i′, j′)) to (1, t_{i,j}) in FS_δ(P, v_{i′}v_i) intersecting hor_j.

Proof. This follows by inspecting the free-space FS_δ(P, v_{i′}v_i), as follows. Since the monotone path intersects hor_j, we have j′ < j. Observe that both DP(k, i′, j′) and DP(κ(i′, j′), i′, j′) lie in the interval [t_{i′,j′}, s_{i′,j′}]. Let z be the point at which the monotone path intersects hor_{j′+1}. Then there is a monotone path in FS_δ(P, v_{i′}v_i) from z to (1, t_{i,j}).
Since fbox_{j′} is convex (by Fact 3.3), the line segment joining (0, DP(κ(i′, j′), i′, j′)) and z is contained in fbox_{j′}. Therefore there is a monotone path from (0, DP(κ(i′, j′), i′, j′)) to (1, t_{i,j}): walk from (0, DP(κ(i′, j′), i′, j′)) to z and then follow the monotone path from z to (1, t_{i,j}).

Observations 3.13 and 3.14 imply that κ_3(i, j, i′) is the minimal value of 1 + κ(i′, j′) over all j′ < j such that there exist x_{j′+1} ≤ x_{j′+2} ≤ ... ≤ x_j with x_k ∈ [a_k, b_k] for all j′ < k ≤ j. Note that we are now "almost" in an instance of the Cell Reachability problem, where the passage p_j corresponds to the free space on hor_j and each λ_j = 1 + κ(i′, j). The only problem is that the free space on some hor_j could be empty (while in Cell Reachability we never had empty passages). However, if the free space on some hor_j is empty, then there exists no monotone path in the free-space FS_δ(P, v_{i′}v_i) from any point below hor_j to any point above hor_j. Thus we can split the instance into two disjoint instances of Cell Reachability. Hence for any fixed i′ we can determine κ_3(i, j, i′) for all j in O(n) time, and therefore we can implement κ_3-subroutine(i) for any i ∈ [n] in time T(n) = O(n^2).

4 Conditional Lower Bound for Curve Simplification

In this section we show that an O(n^{3−ε} poly(d)) time algorithm for Global-Fréchet, Local-Fréchet or Local-Hausdorff simplification over (R^d, ‖·‖_p), for any p ∈ [1, ∞), p ≠ 2, would yield an O(n^{3−ε} poly(d)) algorithm for ∀∀∃-OV. We first give an overview of the reduction. Consider any instance (A, B, C) of ∀∀∃-OV, where A, B, C ⊆ {0,1}^d have size n. We write A = {a_1, a_2, ..., a_n}, B = {b_1, b_2, ..., b_n} and C = {c_1, c_2, ..., c_n}. We will efficiently construct a total of 3n+1 points in R^D with D ∈ O(d), namely the point sets Ã = {ã_1, ..., ã_n}, B̃ = {b̃_1, ..., b̃_n} and C̃ = {c̃_1, ..., c̃_n}, and one more point s. We also determine a value δ ≥ 0 such that the following properties hold:

(P1) For any ã ∈ Ã, b̃ ∈ B̃, c̃ ∈ C̃, there is a point x on the line segment ãb̃ with ‖x − c̃‖_p ≤ δ if and only if ‖(ã + b̃)/2 − c̃‖_p ≤ δ.
(P2) For any ã ∈ Ã, b̃ ∈ B̃, c̃ ∈ C̃, we have ‖(ã + b̃)/2 − c̃‖_p ≤ δ if and only if Σ_{ℓ∈[d]} a[ℓ]·b[ℓ]·c[ℓ] ≠ 0.
(P3) ‖x − y‖_p ≤ δ holds for all x, y ∈ Ã, for all x, y ∈ B̃, and for all x, y ∈ C̃.
(P4) For any y_1, y_2 ∈ {s} ∪ B̃ ∪ C̃ and any point x on the line segment y_1y_2, we have ‖x − ã‖_p > δ for all ã ∈ Ã.
(P5) For any y_1, y_2 ∈ {s} ∪ Ã ∪ C̃ and any point x on the line segment y_1y_2, we have ‖x − b̃‖_p > δ for all b̃ ∈ B̃.
(P6) For any y ∈ Ã ∪ B̃ and any point x on the line segment sy, we have ‖x − c̃‖_p > δ for all c̃ ∈ C̃.

We postpone the exact construction of these points. Our hard instance for curve simplification will be

Q = ⟨s, ã_1, ã_2, ..., ã_n, c̃_1, c̃_2, ..., c̃_n, b̃_1, b̃_2, ..., b̃_n, s⟩.

Lemma 4.1. Let Q̂ = ⟨s, ã_i, b̃_j, s⟩ for some ã_i ∈ Ã and b̃_j ∈ B̃. If ‖(ã_i + b̃_j)/2 − c̃‖_p ≤ δ for all c̃ ∈ C̃, then the Local-Fréchet distance between Q and Q̂ is at most δ.

Proof. Both Q and Q̂ have the same starting point s. By property (P3) we have ‖ã − ã_i‖_p ≤ δ for all ã ∈ Ã, and ‖b̃ − b̃_j‖_p ≤ δ for all b̃ ∈ B̃. Thus it follows that δ_F(⟨s, ã_1, ..., ã_i⟩, sã_i) ≤ δ and δ_F(⟨b̃_j, ..., b̃_n, s⟩, b̃_js) ≤ δ. It remains to show that δ_F(Q_{ij}, ã_ib̃_j) ≤ δ, where Q_{ij} = ⟨ã_i, ..., ã_n, c̃_1, ..., c̃_n, b̃_1, ..., b̃_j⟩. To this end, first note that the polylines Q_{ij} and ã_ib̃_j have the same endpoints. We now outline monotone walks on both Q_{ij} and ã_ib̃_j:
(1) Walk on Q_{ij} from ã_i to ã_n and remain at ã_i on ã_ib̃_j.
(2) Walk uniformly on both polylines, up to (ã_i + b̃_j)/2 on ã_ib̃_j and up to c̃_1 on Q_{ij}.
(3) Walk on Q_{ij} from c̃_1 to c̃_n and remain at (ã_i + b̃_j)/2 on ã_ib̃_j.
(4) Walk uniformly on both curves, up to b̃_j on ã_ib̃_j and up to b̃_1 on Q_{ij}.
(5) Walk on Q_{ij} until b̃_j and remain at b̃_j on ã_ib̃_j.
We now argue that we always stay within distance δ throughout the walks. For (1) and (5) this follows from property (P3). For (2) and (4) it follows from the fact that we always remain within distance δ while walking with uniform speed on two line segments, as long as their startpoints and their endpoints are within distance δ. By the assumption ‖(ã_i + b̃_j)/2 − c̃‖_p ≤ δ for all c̃ ∈ C̃, we also stay within distance δ during (3).

Observe that property (P3) implies that there is a simplification of size five, namely Q̂ = ⟨s, ã, c̃, b̃, s⟩ for any ã ∈ Ã, b̃ ∈ B̃ and c̃ ∈ C̃, such that the distance between Q̂ and Q is at most δ under the Local-Fréchet, Global-Fréchet and Local-Hausdorff distances. We now show that a smaller simplification is possible only if there exist a ∈ A, b ∈ B such that for all c ∈ C we have Σ_{ℓ∈[d]} a[ℓ]·b[ℓ]·c[ℓ] ≠ 0.

Lemma 4.2. Let Q̂ be a simplification of the polyline Q of size 4. Then the following statements are equivalent:
(1) The Global-Fréchet distance between Q and Q̂ is at most δ.
(2) The Local-Fréchet distance between Q and Q̂ is at most δ.
(3) The Local-Hausdorff distance between Q and Q̂ is at most δ.
(4) There exist ã ∈ Ã, b̃ ∈ B̃ such that Q̂ = ⟨s, ã, b̃, s⟩ and ‖(ã + b̃)/2 − c̃‖_p ≤ δ for every c̃ ∈ C̃.
(5) There exist a ∈ A, b ∈ B such that for all c ∈ C we have Σ_{ℓ∈[d]} a[ℓ]·b[ℓ]·c[ℓ] ≠ 0.

Proof. We first show that (1), (2) and (3) are equivalent to (4). To this end, we first show that each of (1), (2) and (3) implies (4). Since for any y_1, y_2 ∈ {s} ∪ B̃ ∪ C̃ there is no point on the line segment y_1y_2 that has distance at most δ to any ã ∈ Ã (by property (P4)), Q̂ must contain at least one point from Ã. A symmetric argument shows that Q̂ must contain at least one point from B̃ (property (P5)). Since the size of Q̂ is 4, we have Q̂ = ⟨s, ã, b̃, s⟩ for some ã ∈ Ã and b̃ ∈ B̃. By property (P6) there is no point on the line segments sã and b̃s that has distance at most δ to any c̃ ∈ C̃. Therefore the Global-Fréchet, the Local-Fréchet or the Local-Hausdorff distance between Q and Q̂ is at most δ only if for every c̃ ∈ C̃ there is a point on the line segment ãb̃ that has distance at most δ to c̃. By property (P1), this implies ‖(ã + b̃)/2 − c̃‖_p ≤ δ for all c̃ ∈ C̃.

Now we show that (4) implies (1), (2) and (3). First observe that (2) implies (1) and (3), since the Local-Fréchet distance between a curve and its simplification is at least the Global-Fréchet distance and at least the Local-Hausdorff distance between the same. Thus it suffices to show that (4) implies (2), which directly follows from Lemma 4.1. Finally, (4) and (5) are equivalent due to property (P2).

Assuming that we can construct Q and determine δ in O(nd) time, the above lemma directly yields the following theorem.

Theorem 4.3. For any ε > 0, there is no O(n^{3−ε} poly(d)) algorithm for Global-Fréchet, Local-Fréchet or Local-Hausdorff simplification over (R^d, ‖·‖_p) for any p ∈ [1, ∞), p ≠ 2, unless the ∀∀∃-OV hypothesis fails.

Proof. The curve Q can be constructed, and δ determined, in time O(nd) from any instance (A, B, C) of ∀∀∃-OV. Hence, by Lemma 4.2, the simplification problem is equivalent to ∀∀∃-OV, so any O(n^{3−ε} poly(d)) algorithm for the curve simplification problem yields an O(n^{3−ε} poly(d)) algorithm for ∀∀∃-OV as well.

It remains to construct the point s and the sets Ã, B̃ and C̃, and to determine δ, in time O(nd). We first introduce some notation. For vectors x and y and α ∈ [−1, 1], we define P_{xy}(α) := ((1−α)x + (1+α)y)/2; in particular, P_{xy}(0) is the midpoint of x and y. Moreover, for u_1, ..., u_m ∈ R^d we write v = [u_1 u_2 ... u_m] for the vector v ∈ R^{md} with v[(j−1)d + ℓ] = u_j[ℓ] for any j ∈ [m] and ℓ ∈ [d].

Observation 4.4. Let u_1, u_2, ..., u_m ∈ R^d and v = [u_1 u_2 ... u_m]. Then we have ‖v‖_p^p = Σ_{i∈[m]} ‖u_i‖_p^p.

In this section our aim is to construct points A_i, B_i, C_i for i ∈ {0, 1} such that the distance ‖C_i − P_{A_jB_k}(0)‖_p only depends on whether the bits i, j, k ∈ {0, 1}, seen as coordinates of vectors, are orthogonal. In other words, the points A_i, B_i, C_i form a coordinate gadget. Formally, we will prove the following lemma.

Lemma 4.5. For any p ≠ 2,

    ‖C_i − P_{A_jB_k}(0)‖_p^p = β_1 if i = 1, j = 1, k = 1, and β_0 otherwise,

where β_1 < β_0.

In Section 4.3 we will use this lemma to construct the final point sets Ã, B̃ and C̃. Let θ_1, θ_2, θ_3, θ_4 and θ_5 be constants, to be fixed below.
We construct the points A_0, B_0, C_0 and A_1, B_1, C_1 in R^9 as follows, where each entry is one of the constants θ_1, ..., θ_5 up to sign:

A_0 = [−θ, 0, −θ, 0, θ, θ, θ, −θ, 0]
A_1 = [θ, θ, θ, −θ, 0, −θ, −θ, 0, 0]
B_0 = [−θ, 0, θ, θ, θ, −θ, −θ, 0, 0]
B_1 = [θ, −θ, −θ, 0, 0, −θ, θ, θ, 0]
C_0 = [0, 0, 0, 0, 0, 0, 0, 0, θ]
C_1 = [−θ, 0, −θ, 0, −θ, 0, −θ, 0, 0]

From these points we can compute the points P_{A_iB_j}(0) for all i, j ∈ {0, 1}:

P_{A_0B_0}(0) = [−θ, 0, 0, θ, θ, 0, 0, −θ, 0]
P_{A_1B_0}(0) = [0, θ, θ, 0, 0, −θ, −θ, 0, 0]
P_{A_0B_1}(0) = [θ, 0, 0, −θ, −θ, 0, 0, θ, 0]
P_{A_1B_1}(0) = [0, −θ, −θ, 0, 0, θ, θ, 0, 0]

Observe that ‖C_0 − P_{A_iB_j}(0)‖_p^p = Σ_{r∈[5]} θ_r^p for all i, j ∈ {0, 1}. Thus all the points P_{A_iB_j}(0) are equidistant from C_0, irrespective of the exact values of the θ_r. Note that when θ_r = θ for all r ∈ [5], then ‖C_1 − P_{A_iB_j}(0)‖_p^p = 4θ^p + 2^pθ^p for all i, j ∈ {0, 1}; thus all the points P_{A_iB_j}(0) are also equidistant from C_1 when all the θ_r are the same. We now determine values of θ_r for r ∈ [5] such that all but one of the points in {P_{A_iB_j}(0) | i, j ∈ {0, 1}} are equidistant from, and far from, C_1. More precisely,

    ‖C_1 − P_{A_iB_j}(0)‖_p^p = β_1 if i = 1, j = 1, and β_0 otherwise,

with β_1 < β_0. We first quantify the distances from {C_0, C_1} to each of the points in {P_{A_jB_k}(0) | j, k ∈ {0, 1}}.

Lemma 4.6. We have

    ‖C_i − P_{A_jB_k}(0)‖_p^p =
      Σ_{r∈[5]} θ_r^p                    if i = 0,
      2θ_2^p + 2^pθ_3^p + 2θ_4^p         if i = 1, j = 0, k = 0,
      2θ_1^p + 2^pθ_2^p + 2θ_3^p         if i = 1, j = 1, k = 0,
      2θ_1^p + 2θ_3^p + 2^pθ_4^p         if i = 1, j = 0, k = 1,
      2^pθ_1^p + 2θ_2^p + 2θ_4^p         if i = 1, j = 1, k = 1.

We now set the exact values of the θ_r for r ∈ [5], depending on p. When 1 ≤ p < 2 we set

    θ_1 = (2^{p−1} − 1)^{1/p}, θ_2 = 0, θ_3 = 1, θ_4 = 0, θ_5 = 2^{(p−1)/p}.

Now we make the following observation.

Observation 4.7. When 1 ≤ p < 2, then ‖C_i − P_{A_jB_k}(0)‖_p^p = 2^p(2^{p−1} − 1) if i = 1, j = 1, k = 1, and 2^p otherwise.

Proof. Substituting the values of the θ_r into Lemma 4.6, we have

‖C_0 − P_{A_jB_k}(0)‖_p^p = (2^{p−1} − 1) + 0^p + 1^p + 0^p + 2^{p−1} = 2^p
‖C_1 − P_{A_0B_0}(0)‖_p^p = 2·0^p + 2^p·1^p + 2·0^p = 2^p
‖C_1 − P_{A_1B_0}(0)‖_p^p = 2·(2^{p−1} − 1) + 2^p·0^p + 2·1^p = 2^p
‖C_1 − P_{A_0B_1}(0)‖_p^p = 2·(2^{p−1} − 1) + 2·1^p + 2^p·0^p = 2^p
‖C_1 − P_{A_1B_1}(0)‖_p^p = 2^p·(2^{p−1} − 1) + 2·0^p + 2·0^p = 2^p(2^{p−1} − 1).

Now let p > 2. Then we set

    θ_1 = 0, θ_2 = (2^p − 2)^{1/p}, θ_3 = (2^p − 4)^{1/p}, θ_4 = (2^p − 2)^{1/p}, θ_5 = (2^{2p} − 3·2^p)^{1/p}.

We make a similar observation.

Observation 4.8. When p > 2, then ‖C_i − P_{A_jB_k}(0)‖_p^p = 2^{p+2} − 8 if i = 1, j = 1, k = 1, and 2^{2p} − 8 otherwise.

Proof. Substituting the values of the θ_r into Lemma 4.6, we have

‖C_0 − P_{A_jB_k}(0)‖_p^p = 0^p + (2^p − 2) + (2^p − 4) + (2^p − 2) + (2^{2p} − 3·2^p) = 2^{2p} − 8
‖C_1 − P_{A_0B_0}(0)‖_p^p = 2·(2^p − 2) + 2^p·(2^p − 4) + 2·(2^p − 2) = 2^{2p} − 8
‖C_1 − P_{A_1B_0}(0)‖_p^p = 2·0^p + 2^p·(2^p − 2) + 2·(2^p − 4) = 2^{2p} − 8
‖C_1 − P_{A_0B_1}(0)‖_p^p = 2·0^p + 2·(2^p − 4) + 2^p·(2^p − 2) = 2^{2p} − 8
‖C_1 − P_{A_1B_1}(0)‖_p^p = 2^p·0^p + 2·(2^p − 2) + 2·(2^p − 2) = 2^{p+2} − 8.

4.3 Vector gadgets

For every a ∈ A, b ∈ B and c ∈ C we introduce vectors a′, b′, c′ and a″, b″, c″, and then concatenate the respective vectors to form ã, b̃ and c̃, respectively. Intuitively, a′, b′, c′ primarily help us to ensure properties (P1) and (P2), while a″, b″, c″ help us to ensure the remaining properties.
a (cid:48) , b (cid:48) , c (cid:48) , and s (cid:48) We construct the vector s (cid:48) and the vectors a (cid:48) , b (cid:48) and c (cid:48) for every a ∈ A , b ∈ B and c ∈ C respectively, in R d as follows, a (cid:48) = (cid:2) A a [1] , A a [2] , . . . A a [ d ] (cid:3) (2) b (cid:48) = (cid:2) B b [1] , B b [2] , . . . B b [ d ] (cid:3) (3) c (cid:48) = (cid:2) C c [1] , C c [2] , . . . C c [ d ] (cid:3) (4) s (cid:48) = (cid:2) , , . . . , (cid:3) (5)We also define the sets A (cid:48) = { a (cid:48) | a ∈ A } , B (cid:48) = { b (cid:48) | b ∈ B } and C (cid:48) = { c (cid:48) | c ∈ C } . We nowmake a technical observation about the vectors in A (cid:48) , B (cid:48) , and C (cid:48) , that will be useful later. Weset η = max i ∈ [5] θ i . Observation 4.9. For any x, y ∈ A (cid:48) ∪ B (cid:48) ∪ C (cid:48) , we have (cid:107) x − y (cid:107) p ≤ η where η : = 36 dη .Proof. Note that the absolute value of every cordinate of the vectors A , B , C and A , B , C is bounded by 2 η (Since every cordinate is of the form ± θ r or ± θ r or 0). Also every cordinateof a (cid:48) , b (cid:48) , and c (cid:48) , is a cordinate of one of A , B , C , A , B and C . Therefore for any x, y ∈ A (cid:48) ∪ B (cid:48) ∪ C (cid:48) we have max (cid:96) ∈ [9 d ] | x [ (cid:96) ] − y [ (cid:96) ] | ≤ η . Hence we have (cid:107) x − y (cid:107) p ≤ (cid:80) (cid:96) ∈ [9 d ] | x [ (cid:96) ] − y [ (cid:96) ] | ≤ d · η = 36 dη = η .Note that a ∈ A , b ∈ B and c ∈ C are non orthogonal if and only if c,a,b > 0. The followingLemma shows a connection between non-orthogonality and small distance (cid:107) c (cid:48) − P a (cid:48) b (cid:48) (0) (cid:107) p . Lemma 4.10. For any a ∈ A , b ∈ B and c ∈ C we have (cid:107) c (cid:48) − P a (cid:48) b (cid:48) (0) (cid:107) pp = dβ − ( β − β ) c,a,b .Proof. 
By Lemma 4.5, for any α ∈ [ − , ] (cid:107) C c [ (cid:96) ] − P A a [ (cid:96) ] B b [ (cid:96) ] (0) (cid:107) pp = (cid:26) β if c [ (cid:96) ] = a [ (cid:96) ] = b [ (cid:96) ] = 1 β otherwiseBy Observation 4 . (cid:107) c (cid:48) − P a (cid:48) b (cid:48) (0) (cid:107) pp = (cid:88) (cid:96) ∈ [ d ] (cid:107) C c [ (cid:96) ] − P A a [ (cid:96) ] B b [ (cid:96) ] (0) (cid:107) pp = β ( d − c,a,b ) + β c,a,b = dβ − ( β − β ) c,a,b . a (cid:48)(cid:48) , b (cid:48)(cid:48) , c (cid:48)(cid:48) , and s (cid:48)(cid:48) We construct the vector s (cid:48)(cid:48) and the vectors a (cid:48)(cid:48) , b (cid:48)(cid:48) , and c (cid:48)(cid:48) for every a ∈ A , b ∈ B , and c ∈ C ,respectively in R as follows, 20 (cid:48)(cid:48) = (cid:2) γ , , (cid:3) b (cid:48)(cid:48) = (cid:2) γ , γ , (cid:3) c (cid:48)(cid:48) = (cid:2) , γ , (cid:3) s (cid:48)(cid:48) = (cid:2) , γ , γ ]where γ , γ are positive constants. We are now ready to define the final points of our construc-tion, s and ˜ a , ˜ b and ˜ c for any a ∈ A , b ∈ B and c ∈ C respectively.˜ a = (cid:2) a (cid:48) , a (cid:48)(cid:48) (cid:3) ˜ b = (cid:2) b (cid:48) , b (cid:48)(cid:48) (cid:3) ˜ c = (cid:2) c (cid:48) , c (cid:48)(cid:48) (cid:3) s = (cid:2) s (cid:48) , s (cid:48)(cid:48) (cid:3) We set γ = η , δ = ( γ p + dβ − ( β − β )) p , γ = max (cid:32) δ, η (cid:32) γ p + dβ ) p ( γ p + dβ ) p − δ (cid:33)(cid:33) Note that we have constructed the point sets ˜ A , ˜ B , ˜ C , and the point s and determined δ in total time O ( nd ). Therefore now it suffices to show that our point set and δ satisfy theproperties P , P , P , P , P , and P . To this end we first show how the distance (cid:107) ˜ c − P ˜ a ˜ b ( α ) (cid:107) p is related with c,a,b (the non orthogonality of the vectors a , b , and c ) by the following lemma. Lemma 4.11. For any a ∈ A , b ∈ B and c ∈ C we have, • (cid:107) ˜ c − P ˜ a ˜ b (0) (cid:107) pp = γ p + β d − ( β − β ) c,a,b . 
• If $\langle c,a,b \rangle = 0$ then $\|\tilde c - P_{\tilde a \tilde b}(\alpha)\|_p > \delta$ for all $\alpha \in [-1, 1]$.

Proof. Note that

$\tilde c - P_{\tilde a \tilde b}(\alpha) = [c' - P_{a'b'}(\alpha),\ -\gamma_1,\ -\gamma_2\alpha/2,\ 0] = [c' - P_{a'b'}(0),\ -\gamma_1,\ -\gamma_2\alpha/2,\ 0] - [P_{a'b'}(\alpha) - P_{a'b'}(0),\ 0,\ 0,\ 0]$.

Thus, substituting $\alpha = 0$,

$\|\tilde c - P_{\tilde a \tilde b}(0)\|_p^p = \|[c' - P_{a'b'}(0),\ -\gamma_1]\|_p^p = \gamma_1^p + \|c' - P_{a'b'}(0)\|_p^p = \gamma_1^p + d\beta_2 - (\beta_2 - \beta_1)\langle c,a,b\rangle$ (by Lemma 4.10).

Furthermore, by the reverse triangle inequality we have

$\|\tilde c - P_{\tilde a \tilde b}(\alpha)\|_p \ge \|[c' - P_{a'b'}(0),\ -\gamma_1,\ -\gamma_2\alpha/2,\ 0]\|_p - \|[P_{a'b'}(\alpha) - P_{a'b'}(0),\ 0,\ 0,\ 0]\|_p = \|[c' - P_{a'b'}(0),\ -\gamma_1,\ -\gamma_2\alpha/2]\|_p - \|P_{a'b'}(\alpha) - P_{a'b'}(0)\|_p$.

We bound the two terms on the right-hand side. Note that $\|[c' - P_{a'b'}(0),\ -\gamma_1,\ -\gamma_2\alpha/2]\|_p \ge \max\big((\gamma_1^p + d\beta_2 - (\beta_2 - \beta_1)\langle c,a,b\rangle)^{1/p},\ |\alpha|\gamma_2/2\big)$. We also have $\|P_{a'b'}(\alpha) - P_{a'b'}(0)\|_p = (|\alpha|/2)\|b' - a'\|_p \le |\alpha|\bar\eta$ (by Observation 4.9).
Therefore, when $\langle c,a,b\rangle = 0$, for any $\alpha \in [-1,1]$ we have

$\|\tilde c - P_{\tilde a \tilde b}(\alpha)\|_p \ge \max\big((\gamma_1^p + d\beta_2)^{1/p},\ |\alpha|\gamma_2/2\big) - |\alpha|\bar\eta$.

If $|\alpha| < \frac{1}{\bar\eta}\big((\gamma_1^p + d\beta_2)^{1/p} - \delta\big)$, then

$\|\tilde c - P_{\tilde a \tilde b}(\alpha)\|_p > (\gamma_1^p + d\beta_2)^{1/p} - \big((\gamma_1^p + d\beta_2)^{1/p} - \delta\big) = \delta$.

Similarly, if $|\alpha| \ge \frac{1}{\bar\eta}\big((\gamma_1^p + d\beta_2)^{1/p} - \delta\big)$, we have

$\|\tilde c - P_{\tilde a \tilde b}(\alpha)\|_p \ge |\alpha|\gamma_2/2 - |\alpha|\bar\eta = |\alpha|(\gamma_2/2 - \bar\eta) \ge \frac{1}{\bar\eta}\big((\gamma_1^p + d\beta_2)^{1/p} - \delta\big) \cdot \frac{\bar\eta\,(\gamma_1^p + d\beta_2)^{1/p}}{(\gamma_1^p + d\beta_2)^{1/p} - \delta}$ (substituting $\gamma_2$ and $\alpha$) $= (\gamma_1^p + d\beta_2)^{1/p} > \delta$.

Combining the two cases, we arrive at the second claim of the lemma.

We now verify properties P1, P2, P3, P4, P5, and P6.

Lemma 4.12 (P1). For any $a \in A$, $b \in B$ and $c \in C$ we have $\|\tilde c - P_{\tilde a \tilde b}(0)\|_p \le \delta$ if and only if $\langle c,a,b\rangle \ge 1$, or equivalently when $\sum_{\ell \in [d]} a[\ell] \cdot b[\ell] \cdot c[\ell] \ne 0$.

Proof. By Lemma 4.11 we have $\|\tilde c - P_{\tilde a \tilde b}(0)\|_p = (\gamma_1^p + d\beta_2 - (\beta_2 - \beta_1)\langle c,a,b\rangle)^{1/p}$. Therefore, if $\langle c,a,b\rangle \ge 1$ then $\|\tilde c - P_{\tilde a \tilde b}(0)\|_p \le (\gamma_1^p + d\beta_2 - (\beta_2 - \beta_1))^{1/p} = \delta$. Conversely, if $\langle c,a,b\rangle = 0$ then $\|\tilde c - P_{\tilde a \tilde b}(0)\|_p = (\gamma_1^p + d\beta_2)^{1/p} > (\gamma_1^p + d\beta_2 - (\beta_2 - \beta_1))^{1/p} = \delta$.

Lemma 4.13 (P2). For any $a \in A$, $b \in B$ and $c \in C$, there exists $\alpha \in [-1,1]$ with $\|\tilde c - P_{\tilde a \tilde b}(\alpha)\|_p \le \delta$ if and only if $\|\tilde c - P_{\tilde a \tilde b}(0)\|_p \le \delta$.

Proof. The "if" direction is trivial, as we may take $\alpha = 0$. For the "only if" direction, suppose $\|\tilde c - P_{\tilde a \tilde b}(0)\|_p > \delta$. From Lemma 4.12 it follows that $\langle c,a,b\rangle = 0$. By Lemma 4.11 we obtain $\|\tilde c - P_{\tilde a \tilde b}(\alpha)\|_p > \delta$ for all $\alpha \in [-1,1]$. Therefore there exists no $\alpha \in [-1,1]$ such that $\|\tilde c - P_{\tilde a \tilde b}(\alpha)\|_p \le \delta$.

Lemma 4.14 (P3). We have $\|x - y\|_p \le \delta$ for all $x, y \in \tilde A$, for all $x, y \in \tilde B$, and for all $x, y \in \tilde C$.

Proof. We prove the case of $x, y \in \tilde A$; the other cases are analogous.
Consider any $\tilde a_1, \tilde a_2 \in \tilde A$. Note that $\|\tilde a_1 - \tilde a_2\|_p = \|a_1' - a_2'\|_p$, since $\tilde a_1$ and $\tilde a_2$ agree on the last three coordinates. By Observation 4.9, we have $\|a_1' - a_2'\|_p \le \bar\eta < \gamma_1 \le \delta$.

We now prove properties P4, P5 and P6.

Lemma 4.15 (P4, P5, and P6). For any $a \in A$, $b \in B$ and $c \in C$ and $\alpha \in [-1,1]$ the following properties hold.

1. For any $y_1, y_2 \in \{s\} \cup \tilde B \cup \tilde C$, we have $\|\tilde a - P_{y_1 y_2}(\alpha)\|_p > \delta$ for all $\tilde a \in \tilde A$.
2. For any $y_1, y_2 \in \{s\} \cup \tilde A \cup \tilde C$, we have $\|\tilde b - P_{y_1 y_2}(\alpha)\|_p > \delta$ for all $\tilde b \in \tilde B$.
3. For any $y \in \tilde A \cup \tilde B$, we have $\|\tilde c - P_{s y}(\alpha)\|_p > \delta$ for all $\tilde c \in \tilde C$.

Proof. Since we set $\gamma_2$ to at least $8\delta$, we have $\gamma_2/2 > \delta$. We first prove (1). For any $y_1, y_2 \in \{s\} \cup \tilde B \cup \tilde C$ we have $y_1[9d+2] \ge \gamma_2/2$ and $y_2[9d+2] \ge \gamma_2/2$. Therefore, for any $\alpha \in [-1,1]$ we have $P_{y_1 y_2}(\alpha)[9d+2] \ge \gamma_2/2$. For any $\tilde a \in \tilde A$ we have $\tilde a[9d+2] = 0$. Hence we obtain $\|\tilde a - P_{y_1 y_2}(\alpha)\|_p \ge |\tilde a[9d+2] - P_{y_1 y_2}(\alpha)[9d+2]| \ge \gamma_2/2 > \delta$.

We now make a symmetric argument for (2). For any $y_1, y_2 \in \{s\} \cup \tilde A \cup \tilde C$ we have $y_1[9d+2] \le \gamma_2/2$ and $y_2[9d+2] \le \gamma_2/2$. Therefore, for any $\alpha \in [-1,1]$ we have $P_{y_1 y_2}(\alpha)[9d+2] \le \gamma_2/2$. For any $\tilde b \in \tilde B$ we have $\tilde b[9d+2] = \gamma_2$. As before we obtain $\|\tilde b - P_{y_1 y_2}(\alpha)\|_p \ge |\tilde b[9d+2] - P_{y_1 y_2}(\alpha)[9d+2]| \ge \gamma_2/2 > \delta$.

We now show (3). For this we state a simple observation.

Observation 4.16. For any $\alpha \in [-1,1]$ we have:

• $\|c'' - P_{b'' s''}(\alpha)\|_p \ge \gamma_2/4 > \delta$.
• $\|c'' - P_{a'' s''}(\alpha)\|_p \ge \gamma_2/4 > \delta$.

Proof.
Observe that

$c'' - P_{b'' s''}(\alpha) = [-(1-\alpha)\gamma_1/2,\ (\alpha-1)\gamma_2/4,\ -(1+\alpha)\gamma_2/4]$
$c'' - P_{a'' s''}(\alpha) = [-(1-\alpha)\gamma_1/2,\ -(\alpha-1)\gamma_2/4,\ -(1+\alpha)\gamma_2/4]$

It follows that for $\alpha \in [-1,1]$ we have

$\|c'' - P_{b'' s''}(\alpha)\|_p \ge \max\big(|(\alpha-1)\gamma_2/4|,\ |(1+\alpha)\gamma_2/4|\big) = (\gamma_2/4) \cdot \max(|\alpha-1|,\ |1+\alpha|) \ge \gamma_2/4$,
$\|c'' - P_{a'' s''}(\alpha)\|_p \ge \max\big(|(\alpha-1)\gamma_2/4|,\ |(1+\alpha)\gamma_2/4|\big) = (\gamma_2/4) \cdot \max(|\alpha-1|,\ |1+\alpha|) \ge \gamma_2/4$.

Again, since we set $\gamma_2$ to at least $8\delta$, we have $\gamma_2/4 > \delta$.

For any $y \in \tilde A \cup \tilde B$, we define $y'' = a''$ if $y = \tilde a \in \tilde A$ and $y'' = b''$ if $y = \tilde b \in \tilde B$. Then by Observation 4.16 we have $\|\tilde c - P_{s y}(\alpha)\|_p \ge \|c'' - P_{y'' s''}(\alpha)\|_p > \delta$. This finishes the proof of Lemma 4.15, and thus of Theorem 1.2.

∀∀∃-OV Hypothesis

The ∀∀∃-OV hypothesis, which we introduced in this paper, is a special case of the following more general hypothesis (by setting $k = 3$ and $Q_1 = Q_2 = \forall$).

Quantified-k-OV Hypothesis:
Problem: Fix quantifiers $Q_1, \ldots, Q_{k-1} \in \{\forall, \exists\}$. Given sets $A_1, \ldots, A_k \subseteq \{0,1\}^d$ of size $n$, determine whether $Q_1 a_1 \in A_1 \colon \ldots\ Q_{k-1} a_{k-1} \in A_{k-1} \colon \exists a_k \in A_k$ such that $a_1, \ldots, a_k$ are orthogonal.
Hypothesis: For any $k \ge 2$, any $Q_1, \ldots, Q_{k-1}$, and any $\varepsilon > 0$, this problem cannot be solved in time $O(n^{k-\varepsilon} \mathrm{poly}(d))$.

These problems were studied by Gao et al. [15], who showed that (even for every fixed $k$ and $Q_1, \ldots, Q_{k-1}$) the Quantified-k-OV hypothesis implies the 2-OV hypothesis. Unfortunately, no reduction is known in the opposite direction. In fact, Carmosino et al. [8] established barriers for a reduction in the other direction; see also the discussion of the Hitting Set hypothesis in [3].
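For concreteness, the ∀∀∃-OV problem (the $k = 3$, $Q_1 = Q_2 = \forall$ case above) can be decided naively in time $O(n^3 \cdot d)$ by enumerating all pairs $(a, b)$ and searching for a witness $c$; the hypothesis asserts that this baseline cannot be improved to $O(n^{3-\varepsilon}\,\mathrm{poly}(d))$. A minimal sketch of the brute-force baseline (the function name is ours, not from the paper):

```python
from itertools import product

def forall_forall_exists_ov(A, B, C):
    """Decide: for all a in A, for all b in B, does there exist c in C
    such that a, b, c are orthogonal, i.e. sum_l a[l]*b[l]*c[l] == 0?
    Runs in time O(n^3 * d) for |A| = |B| = |C| = n, vectors in {0,1}^d."""
    for a, b in product(A, B):
        # search for an orthogonal witness c for this pair (a, b)
        if not any(all(x * y * z == 0 for x, y, z in zip(a, b, c)) for c in C):
            return False  # this (a, b) pair has no witness
    return True

# Example: the all-zeros vector in C is a witness for every pair (a, b).
A = [(1, 0, 1), (0, 1, 1)]
B = [(1, 1, 0), (0, 0, 1)]
C = [(0, 0, 0), (1, 1, 1)]
print(forall_forall_exists_ov(A, B, C))  # prints True
```

The three nested loops (over $A$, $B$, and $C$) make the cubic dependence on $n$ explicit, which is exactly the running time that the reduction above transfers to polyline simplification.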
Hence, we cannot base the hardness of Quantified-k-OV on the more standard k-OV hypothesis. (The Hitting Set problem considered in [3] is equivalent to ∀∃-OV.) It is well known that the following Strong Exponential Time Hypothesis implies the k-OV hypothesis [24].

Strong Exponential Time Hypothesis (SETH) [20]:
Problem: Given a $q$-CNF formula $\phi$ over variables $x_1, \ldots, x_n$, determine whether there exist $x_1, \ldots, x_n$ such that $\phi$ evaluates to true.
Hypothesis: For any $\varepsilon > 0$ there exists $q \ge 3$ such that this problem cannot be solved in time $O(2^{(1-\varepsilon)n})$.

Similarly, we can pose a hypothesis for Quantified Satisfiability that implies the Quantified-k-OV hypothesis (by essentially the same proof as in [24]).

Quantified-SETH:
Problem: Given a $q$-CNF formula $\phi$ over variables $x_1, \ldots, x_n$, determine whether for all $x_1, \ldots, x_{\alpha(1) n}$ there exist $x_{\alpha(1) n + 1}, \ldots, x_{\alpha(2) n}$ such that ... such that for all $x_{\alpha(2s) n + 1}, \ldots, x_{\alpha(2s+1) n}$ there exist $x_{\alpha(2s+1) n + 1}, \ldots, x_n$ such that $\phi$ evaluates to true.
Hypothesis: For any $s \ge 0$, any $0 \le \alpha(1) < \ldots < \alpha(2s+1) < 1$, and any $\varepsilon > 0$ there exists $q \ge 3$ such that this problem cannot be solved in time $O(2^{(1-\varepsilon)n})$.

Although Quantified Satisfiability is one of the fundamental problems studied in complexity theory (it is known to be PSPACE-complete), no algorithm violating Quantified-SETH is known. Hence, Quantified-SETH and the Quantified-k-OV hypothesis are two hypotheses that are even stronger than the ∀∀∃-OV hypothesis that we used in this paper to prove a conditional lower bound. We view the fact that even these stronger hypotheses have not been falsified in decades of study as evidence that the ∀∀∃-OV hypothesis is a plausible conjecture.

References

[1] M. A. Abam, M. de Berg, P. Hachenberger, and A. Zarei. Streaming algorithms for line simplification. Discrete & Computational Geometry, 43(3):497–515, 2010.
[2] A. Abboud, R. R. Williams, and H. Yu. More applications of the polynomial method to algorithm design. In SODA, pages 218–230. SIAM, 2015.
[3] A. Abboud, V. V. Williams, and J. R. Wang.
Approximation and fixed parameter subquadratic algorithms for radius and diameter in sparse graphs. In SODA, pages 377–391. SIAM, 2016.
[4] P. K. Agarwal, S. Har-Peled, N. H. Mustafa, and Y. Wang. Near-linear time approximation algorithms for curve simplification. Algorithmica, 42(3-4):203–219, 2005.
[5] H. Alt and M. Godau. Computing the Fréchet distance between two polygonal curves. Internat. J. Comput. Geom. Appl., 5(1–2):78–99, 1995.
[6] G. Barequet, D. Z. Chen, O. Daescu, M. T. Goodrich, and J. Snoeyink. Efficiently approximating polygonal paths in three and higher dimensions. Algorithmica, 33(2):150–167, 2002.
[7] K. Buchin, M. Buchin, M. Konzack, W. Mulzer, and A. Schulz. Fine-grained analysis of problems on curves. EuroCG, Lugano, Switzerland, 2016.
[8] M. L. Carmosino, J. Gao, R. Impagliazzo, I. Mihajlin, R. Paturi, and S. Schneider. Nondeterministic extensions of the strong exponential time hypothesis and consequences for non-reducibility. In ITCS, pages 261–270. ACM, 2016.
[9] W. Chan and F. Chin. Approximation of polygonal curves with minimum number of line segments or minimum error. International Journal of Computational Geometry & Applications, 6(1):59–77, 1996.
[10] D. Z. Chen, O. Daescu, J. Hershberger, P. M. Kogge, N. Mi, and J. Snoeyink. Polygonal path simplification with angle constraints. Comput. Geom., 32(3):173–187, 2005.
[11] M. de Berg, M. van Kreveld, and S. Schirra. Topologically correct subdivision simplification using the bandwidth criterion. Cartography and Geographic Information Systems, 25(4):243–257, 1998.
[12] D. H. Douglas and T. K. Peucker. Algorithms for the reduction of the number of points required to represent a digitized line or its caricature. Cartographica, 10(2):112–122, 1973.
[13] R. Estkowski and J. S. B. Mitchell. Simplifying a polygonal subdivision while keeping it simple. In Symposium on Computational Geometry, pages 40–49. ACM, 2001.
[14] S. Funke, T. Mendel, A. Miller, S. Storandt, and M. Wiebe.
Map simplification with topology constraints: Exactly and in practice. In ALENEX, pages 185–196. SIAM, 2017.
[15] J. Gao, R. Impagliazzo, A. Kolokolova, and R. R. Williams. Completeness for first-order properties on sparse structures with algorithmic applications. In SODA, pages 2162–2181. SIAM, 2017.
[16] M. Godau. A natural metric for curves - computing the distance for polygonal chains and approximation algorithms. In STACS 91, pages 127–136. Springer Berlin Heidelberg, 1991.
[17] L. J. Guibas, J. Hershberger, J. S. B. Mitchell, and J. Snoeyink. Approximating polygons and subdivisions with minimum link paths. In ISA, volume 557 of Lecture Notes in Computer Science, pages 151–162. Springer, 1991.
[18] J. Hershberger and J. Snoeyink. An O(n log n) implementation of the Douglas-Peucker algorithm for line simplification. In Symposium on Computational Geometry, pages 383–384. ACM, 1994.
[19] H. Imai and M. Iri. Polygonal approximations of a curve - formulations and algorithms. In G. T. Toussaint, editor, Computational Morphology, volume 6 of Machine Intelligence and Pattern Recognition, pages 71–86. North-Holland, 1988.
[20] R. Impagliazzo, R. Paturi, and F. Zane. Which problems have strongly exponential complexity? J. Comput. Syst. Sci., 63(4):512–530, 2001.
[21] A. Melkman and J. O'Rourke. On polygonal chain approximation. In G. T. Toussaint, editor, Computational Morphology, volume 6 of Machine Intelligence and Pattern Recognition, pages 87–95. North-Holland, 1988.
[22] M. J. van Kreveld, M. Löffler, and L. Wiratma. On optimal polyline simplification using the Hausdorff and Fréchet distance. In Symposium on Computational Geometry, volume 99 of LIPIcs, pages 56:1–56:14. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik, 2018.
[23] V. Vassilevska Williams. On some fine-grained questions in algorithms and complexity. In Proceedings of the ICM, 2018.
[24] R. Williams. A new algorithm for optimal constraint satisfaction and its implications. In