Approximating the packedness of polygonal curves
Joachim Gudmundsson
The University of Sydney, Australia
Yuan Sha
The University of Sydney, Australia
Sampson Wong
The University of Sydney, Australia
Abstract
In 2012 Driemel et al. [18] introduced the concept of c-packed curves as a realistic input model. In the case when c is a constant they gave a near-linear time (1 + ε)-approximation algorithm for computing the Fréchet distance between two c-packed polygonal curves. Since then a number of papers have used the model.

In this paper we consider the problem of computing the smallest c for which a given polygonal curve in R^d is c-packed. We present two approximation algorithms. The first algorithm is a 2-approximation algorithm and runs in O(dn log n) time. In the case d = 2 we develop a faster algorithm that returns a (6 + ε)-approximation and runs in O((n/ε)^{4/3} polylog(n/ε)) time.

We also implemented the first algorithm and computed the approximate packedness-value for 16 sets of real-world trajectories. The experiments indicate that the notion of c-packedness is a useful realistic input model for many curves and trajectories.

2012 ACM Subject Classification: Theory of computation → Design and analysis of algorithms
Keywords and phrases
Computational geometry, trajectories, realistic input models
Worst-case analysis often fails to accurately estimate the performance of an algorithm for real-world data. One reason for this is that the traditional analysis of algorithms and data structures is only done in terms of the number of elementary objects in the input; it does not take into account their distribution. Problems with traditional analysis have led researchers to analyse algorithms under certain assumptions on the input [16], which are often satisfied in practice. By doing this, complicated hypothetical inputs are hopefully precluded, and the worst-case analysis yields bounds which better reflect the behaviour of the algorithms in practical situations.

In computational geometry, realistic input models were introduced by van der Stappen and Overmars [39] in 1994. They studied motion planning among fat obstacles. Since then a range of models have been proposed, including uncluttered scenes [15], low density [40], and simple-cover complexity [33], to name a few. De Berg et al. [16] gave algorithms for computing the model parameters for planar polygonal scenes. In their paper they motivated why such algorithms are important:
To verify whether a certain model is appropriate for a certain application domain.
Some algorithms require the value of the model parameter as input in order to work correctly, e.g. the range searching data structure for fat objects developed by Overmars and van der Stappen [35].
Computing the model parameters of a given input can be useful for selecting the algorithm best tailored to that specific input.

In this paper we will study polygonal curves in R^d. The Fréchet distance [22] is probably the most popular distance measure for curves. In 1995, Alt and Godau [3] presented an O(n² log n) time algorithm for computing the Fréchet distance between two polygonal curves of complexity n. This was later improved by Buchin et al.
[29] who showed that the continuous Fréchet distance can be computed in O(n² √(log n) (log log n)^{3/2}) expected time. Any attempt to find a much faster algorithm was proven to be futile when Bringmann [6] showed that, assuming the Strong Exponential Time Hypothesis, the Fréchet distance cannot be computed in strongly subquadratic time, i.e., in time O(n^{2−ε}) for any ε > 0. Motivated by this, Driemel et al. [18] introduced c-packed curves, which have since gained considerable attention [11, 17, 19, 27, 28]. A curve π is c-packed if for any ball B, the length of the portion of π contained in B is at most c times the radius of B. In their paper they considered the problem of computing the Fréchet distance between two c-packed curves and presented a (1 + ε)-approximation algorithm with running time O(cn/ε + cn log n), which was later improved to O(cn/√ε · log²(1/ε) + cn log n) by Bringmann and Künnemann [7].

Other models for realistic curves have also been studied. Closely related to c-packedness is γ-density, which was introduced by van der Stappen et al. [40] for obstacles and modified to polygonal curves in [18]. A set of objects is γ-low-density if, for any ball of any radius, the number of objects intersecting the ball that are larger than the ball is less than γ. Aronov et al. [5] studied so-called backbone curves, which are used to model protein backbones in molecular biology. Backbone curves are required to have, roughly, unit edge length and a given minimal distance between any pair of vertices. Alt et al. [4] introduced κ-straight curves, which are curves where the arc length between any two points on the curve is at most a constant κ times their Euclidean distance. They also introduced κ-bounded curves, which are a generalization of κ-straight curves.
It has been shown [2] that one can decide in O(n log n) time whether a given curve is backbone, κ-straight or κ-bounded.

From the above discussion and the fact that the c-packed model has gained in popularity, we study two natural and important questions in this paper. Given a curve π, how fast can one (approximately) decide the smallest c for which π is c-packed? Are real-world trajectory data c-packed for some reasonable value of c?

Vigneron [41] gave an FPTAS for optimizing the sum of algebraic functions. The algorithm can be applied to compute a (1 + ε)-approximation of the c-packedness value of a polygonal curve in R^d in O((n/ε)^{d+2} log^{d+2}(n/ε)) time. However, working with balls is complicated (see Section 1.1) and in this paper we will therefore consider a simplified version of c-packedness. Instead of balls we will use (d-)cubes; that is, we say that a curve π is c-packed if for any cube S, the length of the portion of π contained in S is at most c · r, where r is half the side length of S. Note that under this definition, a c-packed curve using the "ball" definition is a (√d · c)-packed curve in the "cube" definition, while a c-packed curve using the "cube" definition is also a c-packed curve in the "ball" definition. From now on we will use the "cube" definition of c-packed curves.

To the best of our knowledge the only known algorithm for computing the packedness of a polygonal curve, apart from applying the tool by Vigneron [41], is by Gudmundsson et al. [25], who gave a cubic time algorithm for polygonal curves in R². They consider the problem of computing "hotspots" for a given polygonal curve, but their algorithm can also compute the packedness of a polygonal curve. We provide two sub-cubic time approximation algorithms for the packedness of a polygonal curve.

Our first result is a simple O(dn log n) time 2-approximation algorithm for d-dimensional polygonal curves.
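To make the cube-based definition concrete, the quantities Υ(H) and Ψ(H) = Υ(H)/r for a single axis-aligned square in R² can be computed by clipping each segment of the curve to the square. The following Python sketch (not the paper's implementation; the function names are ours, and it naively spends linear time per square) uses Liang–Barsky clipping:

```python
def clipped_length(p, q, center, r):
    """Length of segment pq inside the axis-aligned square with the given
    center and radius r (half side length), via Liang-Barsky clipping."""
    t0, t1 = 0.0, 1.0
    for dim in range(2):
        d = q[dim] - p[dim]
        lo = center[dim] - r - p[dim]
        hi = center[dim] + r - p[dim]
        if d == 0.0:
            if lo > 0.0 or hi < 0.0:   # parallel to the slab and outside it
                return 0.0
        else:
            ta, tb = sorted((lo / d, hi / d))
            t0, t1 = max(t0, ta), min(t1, tb)
            if t0 > t1:                # segment misses the square
                return 0.0
    seg_len = ((q[0] - p[0]) ** 2 + (q[1] - p[1]) ** 2) ** 0.5
    return (t1 - t0) * seg_len

def packedness_value(curve, center, r):
    """Psi(H) = Upsilon(H) / r for the square H of radius r at center."""
    total = sum(clipped_length(curve[i], curve[i + 1], center, r)
                for i in range(len(curve) - 1))
    return total / r
```

For example, a horizontal segment from (−2, 0) to (2, 0) has length 2 inside the unit-radius square centred at the origin, so that square has packedness value 2.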
We also implemented this algorithm and tested it on 16 data sets to estimate the packedness value for real-world trajectory data. As expected the value varies wildly, both between different data sets and within the same data set. However, about half the data sets had an average packedness value less than 10, which indicates that c-packedness is a useful and realistic model for many real-world data sets.

Our second result is a faster O*(n^{4/3}) time (6 + ε)-approximation algorithm for polygonal curves in the plane. We achieve this faster algorithm by applying Callahan and Kosaraju's Well-Separated Pair Decomposition (WSPD) to select O(n) squares, and then approximating the packedness values of these squares with a multi-level data structure. Note that our approach of building a data structure and then performing a linear number of square packedness queries solves a generalised instance of Hopcroft's problem. Hopcroft's problem asks: given a set of n points and n lines in the plane, does any point lie on a line? An Ω(n^{4/3}) lower bound for Hopcroft's problem was given by Erickson [20]. Hence, it is unlikely that our approach, or a similar approach, can lead to a considerably faster algorithm.

1.1 Preliminaries

Let π = ⟨p_1, . . . , p_n⟩ be a polygonal curve in R^d and let s_i = (p_i, p_{i+1}) for 1 ≤ i < n. Let H be a closed convex region in R^d. The function Υ(H) = Σ_{i=1}^{n−1} |s_i ∩ H| describes the total length of the trajectory π inside H. In the original definition of c-packedness H is a ball. As mentioned in the introduction, we will consider H to be an axis-aligned cube instead of a ball. The reason for our choice was argued for R² in [25], and for completeness we include their arguments here.

If H is a square, then each piece of Υ(H) is a simple linear function, i.e. is of the form γ(x) = ax + b for some a, b ∈ R. The description of each piece of Υ is constant size and can be evaluated in constant time.
However, if H is a disc, the intersection points of the boundary of H with the trajectory π are no longer simple linear equations in terms of the center and radius of H, so Υ becomes, piecewise, a sum of square roots of polynomial functions. These square-root functions give rise to algebraic issues that cannot be easily resolved when maximising the function Υ(H)/r. For this reason, we will consider H to be a square instead of a disc.

The function Υ(H) = Σ_{i=1}^{n−1} |s_i ∩ H| describes the total length of the polygonal curve inside H. Similarly, Ψ(H) = Υ(H)/r denotes the packedness value of H. Our aim is to find a cube H* with centre at p* and radius r* that has the maximum packedness value for a given polygonal curve π. The radius of a cube is half the side length of the cube.

The following two theorems summarise the main results of this paper.

▶ Theorem 1.
Given a polygonal curve π of size n in R^d, one can compute a 2-approximate packedness value for π in O(dn log n) time.

▶ Theorem 2.
Given a polygonal curve π of size n in R² and a constant ε, with 0 < ε ≤ 1, one can compute a (6 + ε)-approximate packedness value for π in O((n/ε)^{4/3} polylog(n/ε)) time.

Theorem 1 is presented in Section 2 and Theorem 2 is presented in Section 3. Experimental results on the packedness values for real-world data sets are given in Section 2.1. The O*-notation omits polylog and 1/ε factors.

2 2-approximation algorithm

Given a polygonal curve π in R^d, let H* be a d-cube with centre at p* and radius r* that has a maximum packedness value. Our approximation algorithm builds on two observations. The first observation is that, given a center p ∈ R^d, one can in O(dn log n) time find, of all possible d-cubes centered at p, the d-cube that has the largest packedness value. The second observation is that there exists a d-cube centered at a vertex of π that has a packedness value that is at least half the packedness value of H*.

Before we present the algorithm we need some notation. Let H^p_r be the d-cube H, scaled with p as center and such that its radius is r. Fix a point p in R^d, and consider Ψ as a function of r. More formally, let ψ_p(r) = Ψ(H^p_r). Gudmundsson et al. [25] showed properties of ψ_p(r) that we generalize to R^d and restate as:

▶ Lemma 3.
The function ψ_p(r) is a piecewise hyperbolic function. The pieces of ψ_p(r) are of the form a(1/r) + b, for a, b ∈ R, and the break points of ψ_p(r) correspond to d-cubes H where: (i) a vertex of π lies on a (d−1)-face of H, or (ii) a (d−2)-face of H (in R³, an edge) intersects an edge of π.

As a corollary we get:

▶
Corollary 4.
Let r_1, r_2 be the radii of two consecutive break points of ψ_p(r), where r_2 > r_1. It holds that max_{r∈[r_1,r_2]} ψ_p(r) = max{ψ_p(r_1), ψ_p(r_2)}, that is, the maximum value is obtained either at r_1 or at r_2.

Proof.
According to Lemma 3 the function ψ_p(r) is a hyperbolic function on the range r ∈ [r_1, r_2], of the form a(1/r) + b with derivative −a/r². This implies that ψ_p(r) is a monotonically decreasing or monotonically increasing function on [r_1, r_2]. As a result, the maximum value of ψ_p(r) is attained either at r_1 or at r_2. ◀

Next we state the algorithm for the first observation. The general idea is to use a plane-sweep, scaling the d-cube H with centre at p by increasing its radius from 0 to ∞. The (d−1)-faces of H are bounded by 2d hyperplanes in R^d. When H expands from p, it can first meet a segment of π in one of two ways: (i) a vertex of the segment lies on one of H's (d−1)-faces, or (ii) the segment intersects a (d−2)-face of H. For the first case, the segment can change to intersect a different (d−1)-face of H at most d−1 times, which can be determined by comparing the segment and its components in all d dimensions. Similarly, a segment of the second case can change to intersect a different (d−1)-face of H O(d) times. Thus each segment has O(d) event points and there are O(dn) events in total. Sort the events by their radii r_1, . . . , r_m (m = O(dn)) in increasing order. Perform the sweep by increasing the radius r starting at r = 0 and continue until all events have been encountered.

Recall that Υ(H) is the total length of the trajectory π inside H. For each r_i, 1 ≤ i ≤ m, we can compute ψ_p(r_i) = Υ(H^p_{r_i})/r_i in O(dn) time. For two consecutive radii r_i and r_{i+1}, Υ(H^p_{r_i}) and Υ(H^p_{r_{i+1}}) can differ in one of three ways. First, H^p_{r_{i+1}} may include a vertex not in H^p_{r_i}, in which case the set of contributing edges may increase by up to two. Second, H^p_{r_{i+1}} may intersect an edge not in H^p_{r_i}. Finally, an edge in H^p_{r_i} may intersect a different (d−1)-face of H^p_{r_{i+1}}. In each case we can compute a function ∆(r_i, r_{i+1}) that describes these changes in constant time. We then have Υ(H^p_{r_{i+1}}) = Υ(H^p_{r_i}) + ∆(r_i, r_{i+1}), and we can compute Υ(H^p_{r_{i+1}}) from Υ(H^p_{r_i}) in constant time (in R², similar to [13]).
Apart from sorting the event points, we compute ψ_p(r_i) for every r_i, 1 ≤ i ≤ m, in O(dn) time. We return the radius arg max_{r_1 ≤ r_i ≤ r_m} ψ_p(r_i) as the result. Hence, the total running time is O(dn log n).

Note that the break points of ψ_p(r) are the event points. The correctness follows immediately from Corollary 4, which tells us that we only need to consider the set of event points. To summarise we get:

▶ Lemma 5.
Given a point p in R^d one can in O(dn log n) time determine the radius r > 0 such that Ψ(H^p_r) = max_{r>0} ψ_p(r).

Now we are ready to prove the second observation.

▶
Lemma 6.
Consider the function ψ_p(r) for a single segment s, i.e. ψ_p(r) = |s ∩ H^p_r| / r. If the first point on s encountered by H^p_r is an interior point of s, then the function is non-decreasing from r = 0 until H^p_r encounters a vertex of s.

Proof.
The function is zero until an interior point of s is encountered. After encountering the interior point and before encountering a vertex of s, the segment s ∩ H^p_r is a chord between two boundary points of H^p_r. Suppose we normalise the size of the d-cube H^p_r to be unit-sized. Then the length of the chord is normalised to |s ∩ H^p_r| / r = ψ_p(r). Before normalisation, the segment s ∩ H^p_r had fixed gradient and fixed orthogonal distance to the center p. After normalisation, the chord has fixed gradient and decreasing distance to the center p. Therefore its length ψ_p(r) is non-decreasing as it approaches the diameter of H^p_r. ◀

▶ Lemma 7.
There exists a d-cube H with center at a vertex of π such that Ψ(H) ≥ ½ · Ψ(H*), where H* is the d-cube having the highest packedness value for π.

Proof.
Consider H*. We will construct a d-cube H that is centered at a vertex of π and contains H* ∩ π. We will then prove that H has packedness value at least ½ · Ψ(H*), which proves the lemma. To construct H, we consider two cases.

Case 1:
The d-cube H* does not contain a vertex of π, see Fig. 1(a). Scale H* until its boundary hits a vertex. Let H_1 denote the d-cube obtained from the scaling and let v be the vertex on the (d−1)-face of H_1. According to Lemma 6, we know that Ψ(H_1) ≥ Ψ(H*). Let H be the d-cube centered at v with radius twice the radius of H_1, as illustrated in Fig. 1(a). Clearly H contains H_1 ∩ π, so Ψ(H) ≥ ½ · Ψ(H_1) ≥ ½ · Ψ(H*), as required.

Case 2:
The d-cube H* contains one or more vertices, see Fig. 1(b). Let v be a vertex inside H*. Let H be the d-cube with center at v and radius twice that of H*. Again, H completely contains H* ∩ π, so Ψ(H) ≥ ½ · Ψ(H*), as required.

In both cases, we have constructed a d-cube H centered at a vertex of π for which Ψ(H) ≥ ½ · Ψ(H*), which proves the lemma. ◀

Figure 1
Illustrating the two cases in the proof of Lemma 7: Case 1 in (a) and Case 2 in (b).
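The vertex-centred search behind Lemma 7 can be illustrated with a simplified brute-force Python sketch in R². It evaluates ψ_p only at the radii where the growing square hits a vertex of the curve; edge events, which the full O(dn log n) sweep also handles, are omitted here, so this is an illustration of the idea rather than the paper's algorithm:

```python
import math

def clip_len(p, q, c, r):
    # length of segment pq inside the square of radius r centred at c
    # (Liang-Barsky clipping)
    t0, t1 = 0.0, 1.0
    for k in range(2):
        d, lo, hi = q[k] - p[k], c[k] - r - p[k], c[k] + r - p[k]
        if d == 0.0:
            if lo > 0.0 or hi < 0.0:
                return 0.0
            continue
        ta, tb = sorted((lo / d, hi / d))
        t0, t1 = max(t0, ta), min(t1, tb)
        if t0 > t1:
            return 0.0
    return (t1 - t0) * math.dist(p, q)

def approx_packedness(curve):
    """Evaluate psi_p(r) for squares centred at vertices of the curve.
    Candidate radii are the Chebyshev distances to the other vertices,
    i.e. the radii at which the growing square hits a vertex."""
    best = 0.0
    for p in curve:
        radii = {max(abs(p[0] - q[0]), abs(p[1] - q[1])) for q in curve}
        for r in radii:
            if r <= 0.0:
                continue
            total = sum(clip_len(curve[i], curve[i + 1], p, r)
                        for i in range(len(curve) - 1))
            best = max(best, total / r)
    return best
```

For the single segment from (0, 0) to (1, 0) this returns 1.0, while the best square overall (centred at the segment's midpoint with radius ½) has value 2, illustrating the factor-2 gap that Lemma 7 bounds.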
Dataset    | #curves | max size | min c | max c   | avg c  | avg c/n
Vessel-Y   | 187     | 320      | 2.37  | 14.28   | 3.03   | 0.022
Hurdat     | 1785    | 133      | 2     | 16.58   | 3.24   | 0.154
Pen        | 2858    | 182      | 4.07  | 20.82   | 8.79   | 0.073
Bats       | 545     | 736      | 1.08  | 29.52   | 3.60   | 0.0625
Bus        | 148     | 1012     | 3.21  | 34.99   | 14.70  | 0.052
Vessel-M   | 103     | 143      | 1.04  | 46.19   | 4.66   | 0.272
Basketball | 20780   | 138      | 2.00  | 48.65   | 3.95   | 0.092
Football   | 18028   | 853      | 2.12  | 48.87   | 7.66   | 0.045
Truck      | 276     | 983      | 5.32  | 110.44  | 25.48  | 0.079
Buffalo    | 163     | 479      | 1.17  | 254.14  | 68.42  | 0.505
Pigeon     | 131     | 1504     | 3.64  | 275.18  | 90.93  | 0.12
Geolife    | 1000    | 64390    | 1.02  | 858.19  | 23.31  | 0.057
Gull       | 241     | 3237     | 1.02  | 1082.20 | 139.50 | 0.478
Cats       | 152     | 2257     | 6.04  | 1122.77 | 207.86 | 0.655
Seabirds   | 63      | 2970     | 5.59  | 1803.72 | 825.57 | 0.388
Taxi       | 1000    | 115732   | 2.20  | 4255.23 | 55.38  | 0.313
Table 1
The table lists 16 real-world data sets. The second and third columns show the number of curves and the maximum complexity of a curve in the set. The following three columns list the minimum, maximum and average approximate packedness values. The rightmost column states the average ratio between c and n for the data sets.

2.1 Experiments

We implemented the above algorithm to test the approximate packedness of real-world trajectory data. We ran the algorithm on 16 data sets. The data sets were kindly provided to us by the authors of [26]. Table 2 summarises the data sets and is taken from [26]. The minimum/maximum/average (approximate) packedness values and the ratio between c and n for each data set are listed in Table 1. Both the Geolife data set and the Taxi data set consist of over 20k trajectories, many of which are very large. For the experiments we randomly sampled 1,000 trajectories from each of these sets.

Although these are only sixteen data sets, it is clear that the notion of c-packedness is a reasonable model for many real-world data sets. For example, the maximal packedness value for all trajectories in the first eight data sets is less than 50 and the average (approximate) packedness value is below 15. Looking at the ratio between c and n, we can see that for many data sets the value of c is considerably smaller than n.

Consider the task of computing the continuous Fréchet distance between two trajectories. For two trajectories of complexity n, computing the distance will require O*(n²) time (even for an O(1)-approximation), while a (1 + ε)-approximation can be obtained for c-packed trajectories in O*(cn) time. Thus the algorithm by Driemel et al. [18] for c-packed curves is likely to be more efficient than the general algorithm for these data sets.

3 (6 + ε)-approximation algorithm

In this section we will take a different approach to Section 2, yielding an algorithm that considers a linear number of squares rather than a quadratic number of squares.
First we will identify a set S containing a linear number of squares that will include a square having a high packedness value (Section 3.1). Then we will build a multi-level data structure (Section 3.2) on π such that, given a square S ∈ S, it can quickly approximate |S ∩ π|.

Table 2 Real data sets, showing the number of input trajectories n, dimensions d, average number of simplified vertices per trajectory, and a description.

To prove that it suffices to consider a linear number of squares we will use the well-known Well-Separated Pair Decomposition (WSPD) by Callahan and Kosaraju [8]. Let A and B be two finite sets of points in R^d and let s > 0 be a real number. A and B are well-separated with respect to s if there exist two disjoint balls C_A and C_B such that (1) C_A and C_B have the same radius, (2) C_A contains the bounding box of A and C_B contains the bounding box of B, and (3) the distance between C_A and C_B is at least s times the radius of C_A and C_B. The real number s is called the separation ratio.

▶ Lemma 8.
Let s > 0 be a real number, let A and B be two sets in R^d that are well-separated with respect to s, let a and a′ be two points in A, and let b and b′ be two points in B. Then (1) |aa′| ≤ (2/s) · |ab|, and (2) |a′b′| ≤ (1 + 4/s) · |ab|.

▶ Definition 9.
Let S be a set of n points in R^d, and let s > 0 be a real number. A well-separated pair decomposition (WSPD) for S, with respect to s, is a sequence {A_1, B_1}, . . . , {A_m, B_m} of pairs of non-empty subsets of S, for some integer m, such that:
for each i with 1 ≤ i ≤ m, A_i and B_i are well-separated with respect to s, and
for any two distinct points p and q of S, there is exactly one index i with 1 ≤ i ≤ m such that p ∈ A_i and q ∈ B_i, or p ∈ B_i and q ∈ A_i.
The integer m is called the size of the WSPD.

▶ Lemma 10. (Callahan and Kosaraju [8]) Given a set V of n points in R^d, and given a real number s > 0, a well-separated pair decomposition for V, with separation ratio s, consisting of O(s^d n) pairs, can be computed in O(n log n + s^d n) time.
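The well-separation condition above can be checked directly from the definition. The following Python sketch (our own helper names, not part of the paper; it circumscribes each set's bounding box with a circle and gives both balls the larger of the two radii) illustrates it in R²:

```python
import math

def bounding_circle(points):
    # circle around the axis-aligned bounding box of the point set
    xs, ys = [p[0] for p in points], [p[1] for p in points]
    cx, cy = (min(xs) + max(xs)) / 2, (min(ys) + max(ys)) / 2
    r = math.dist((cx, cy), (max(xs), max(ys)))   # half the box diagonal
    return (cx, cy), r

def well_separated(A, B, s):
    """Test Definition 9's condition: two disjoint equal-radius balls
    enclosing the bounding boxes of A and B, whose gap is >= s * radius."""
    ca, ra = bounding_circle(A)
    cb, rb = bounding_circle(B)
    r = max(ra, rb)                     # use a common radius for both balls
    gap = math.dist(ca, cb) - 2 * r     # distance between the two balls
    return gap >= s * r
```

For example, two unit boxes 100 apart are well-separated for s = 2, while the same boxes only 1 apart fail for s = 10.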
Now we are ready to construct a set S of squares. Compute a well-separated pair decomposition W = {(A_1, B_1), . . . , (A_m, B_m)} with separation constant s = 720/ε for the vertex set of π. For every well-separated pair (A_i, B_i) ∈ W, 1 ≤ i ≤ m, construct two squares that will be added to S as follows. Pick an arbitrary point a ∈ A_i and an arbitrary point b ∈ B_i. Construct one square with center at a and radius r, and one square with center at b and radius r, where r = max{|a.x − b.x|, |a.y − b.y|} + ε/120 · |ab|. The two squares are added to S.

It follows immediately from Lemma 10 that the number of squares in S is O(n/ε²) and that one can construct S in O(n log n + n/ε²) time.

To prove the approximation factor of the algorithm we will first need the following technical lemma.

▶ Lemma 11.
Let H^p_{r_1} and H^p_{r_2}, with r_2 > r_1, be two squares with centre at p such that H^p_{r_2} \ H^p_{r_1} contains no vertices of π in its interior. For any value r_x, with r_1 ≤ r_x ≤ r_2, it holds that ψ_p(r_x) ≤ ψ_p(r_1) + 2 · ψ_p(r_2).

Figure 2 (a) An illustration of the proof of Lemma 11 and the two types of segments that are considered. (b) Showing case 1, and (c) case 2 of Type II segments.
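As a numeric sanity check of this inequality (reading it, per the proof below, as ψ_p(r_x) ≤ ψ_p(r_1) + 2·ψ_p(r_2)), the following Python sketch samples radii between r_1 and r_2 for a single segment whose vertices lie far outside both squares, as the lemma requires; the helper names and test data are ours:

```python
import math

def clip_len(p, q, c, r):
    # length of segment pq inside the square of radius r centred at c
    t0, t1 = 0.0, 1.0
    for k in range(2):
        d, lo, hi = q[k] - p[k], c[k] - r - p[k], c[k] + r - p[k]
        if d == 0.0:
            if lo > 0.0 or hi < 0.0:
                return 0.0
            continue
        ta, tb = sorted((lo / d, hi / d))
        t0, t1 = max(t0, ta), min(t1, tb)
        if t0 > t1:
            return 0.0
    return (t1 - t0) * math.dist(p, q)

def psi(curve, p, r):
    return sum(clip_len(curve[i], curve[i + 1], p, r)
               for i in range(len(curve) - 1)) / r

# One segment on the line y = x + 1; its vertices have Chebyshev distance
# at least 10 from p, so no curve vertex lies in the annulus between the
# two squares, as Lemma 11 requires.
curve = [(-10.0, -9.0), (10.0, 11.0)]
p, r1, r2 = (0.0, 0.0), 0.6, 2.0
bound = psi(curve, p, r1) + 2 * psi(curve, p, r2)
for i in range(1, 20):                  # sample radii r1 <= rx <= r2
    rx = r1 + (r2 - r1) * i / 20
    assert psi(curve, p, rx) <= bound + 1e-9
```

Here ψ_p(r) = √2 · (2 − 1/r) for r ≥ 1/2, so the packedness value varies with r and the bound is comfortably satisfied at every sampled radius.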
Proof.
Let

ψ_p(r_1) = Υ(H^p_{r_1})/r_1,
ψ_p(r_2) = Υ(H^p_{r_1})/r_2 + M_2/r_2, and
ψ_p(r_x) = Υ(H^p_{r_1})/r_x + M_x/r_x,

where M_2 = Υ(H^p_{r_2}) − Υ(H^p_{r_1}) and M_x = Υ(H^p_{r_x}) − Υ(H^p_{r_1}). It is clear that Υ(H^p_{r_1})/r_x < Υ(H^p_{r_1})/r_1, so the first term of ψ_p(r_x) is bounded by ψ_p(r_1). Next we consider the second term.

There are no vertices of π inside H^p_{r_2} \ H^p_{r_1}. The segments contributing to M_x therefore either cross H^p_{r_2} \ H^p_{r_1} and have no intersection with H^p_{r_1}, like segment I in Fig. 2(a), or intersect H^p_{r_1} and cross H^p_{r_2} \ H^p_{r_1}, like segment II in Fig. 2(a). Let l_x denote a segment's contribution to M_x, and let l_2 be its contribution at r_x = r_2. Let us consider how l_x/r_x changes as r_x grows from r_1 to r_2.

Segment of Type I: We know from Lemma 6 that l_x/r_x is non-decreasing as r_x grows from r_1 to r_2, so l_x/r_x ≤ l_2/r_2.

Segment of Type II: Consider a segment s of Type II and the subsegment (or two subsegments) s ∩ (H^p_{r_2} \ H^p_{r_1}). A subsegment s′ of s has one endpoint q on the boundary of H^p_{r_1} and one endpoint on the boundary of H^p_{r_2}. Let E be the side of H^p_{r_1} containing q, let t be the middle point of E, let d = |qt| and let θ be the acute angle between s′ and E. There are two cases.

The subsegment s′ does not cross the line pt in the region H^p_{r_2} \ H^p_{r_1}, as in Figure 2(b). Let r′_x be the radius at which s′ crosses a corner of the growing square H^p_{r_x}. When r_x < r′_x,

l_x/r_x = (∆r/sin θ)/r_x = (1/sin θ) · ∆r/(r_1 + ∆r), where ∆r = r_x − r_1,

increases strictly. When r_x ≥ r′_x,

l_x/r_x = ((r_x − d)/cos θ)/r_x = (1/cos θ) · (1 − d/r_x)

continues to increase strictly, so l_x/r_x ≤ l_2/r_2.

The subsegment s′ crosses the line pt in the region H^p_{r_2} \ H^p_{r_1}, as shown in Figure 2(c); r′_x is defined as above. When r_x < r′_x, l_x/r_x increases strictly. When r_x ≥ r′_x,

l_x/r_x = ((r_x + d)/cos θ)/r_x = (1/cos θ) · (1 + d/r_x)

begins to decrease. However, since d ≤ r_1 ≤ r_x ≤ r_2,

(1/cos θ) · (r_x + d)/r_x < 2/cos θ ≤ 2 · (1/cos θ) · (1 + d/r_2) = 2 · l_2/r_2.

In all cases, l_x/r_x ≤ 2 · l_2/r_2. Thus M_x/r_x ≤ 2 · M_2/r_2. We get

ψ_p(r_x) < ψ_p(r_1) + 2 · M_2/r_2 ≤ ψ_p(r_1) + 2 ψ_p(r_2).
◀

Due to Lemma 7, it suffices to consider squares with center at a vertex of π to obtain a 2-approximation. Combining this with Lemma 11, it suffices to consider squares with center at a vertex of π and a vertex of π on their boundary to obtain a 6-approximation. Using the WSPD argument we have reduced our set of squares to a linear number of squares, and we will now argue that S must contain a square that has a high packedness factor. Let H* be a square with a maximum packedness value of π.

▶ Lemma 12.
There exists a square S ∈ S such that Ψ(S) ≥ 1/(6 + ε/8) · Ψ(H*).

Proof.
From Lemmas 7 and 11 we know that there exists a square H with centre at a vertex p of π and whose boundary contains a vertex q of π such that Ψ(H) ≥ 1/6 · Ψ(H*). According to the construction of S there exists a square S ∈ S such that S has its centre at a point a ∈ A_i and has radius r = max{|a.x − b.x|, |a.y − b.y|} + ε/120 · |ab|, where b is a point in B_i. By Lemma 8, we have |ap| ≤ ε/360 · |ab| and |bq| ≤ ε/360 · |ab|. If r_H is the radius of H, then

r_H = max{|p.x − q.x|, |p.y − q.y|}
    ≤ max{|a.x − b.x|, |a.y − b.y|} + |ap| + |bq|
    ≤ max{|a.x − b.x|, |a.y − b.y|} + ε/120 · |ab| − |ap|
    = r − |ap|.

But p is at most |ap| away from a in both the x and y directions, so H must be entirely contained inside S. So Υ(S) ≥ Υ(H). Next, we show that S is not too much larger than H:

r = max{|a.x − b.x|, |a.y − b.y|} + ε/120 · |ab|
  ≤ r_H + |ap| + |bq| + ε/120 · |ab|
  ≤ r_H + ε/72 · |ab|
  ≤ r_H + ε/72 · (1 + ε/180) · |pq|
  ≤ r_H + ε√2/72 · (1 + ε/180) · r_H
  ≤ (1 + ε/48) · r_H.

Putting this all together yields

Ψ(S) = Υ(S)/r ≥ 1/(1 + ε/48) · Υ(H)/r_H = 1/(1 + ε/48) · Ψ(H) ≥ 1/(6 + ε/8) · Ψ(H*),

which completes the lemma. ◀

The aim of this section is to develop an efficient data structure on π such that, queried with an axis-aligned square S ∈ S, the data structure returns an approximation of |S ∩ π|. The general idea of the multi-level data structure is that the first level is a modified 1D segment tree, similar to the hereditary segment tree [9]. We partition the set of π's segments into four sets depending on their slope: (−∞, −1), [−1, 0), [0,
1) and [1, ∞). In the rest of this section we will describe the data structure for the set of segments with slope in [0, 1).

The description of the segment tree follows the description in [14]. Let L be the set of line segments in π. For the purpose of the 1D segment tree, we can view L as a set of intervals on the line. Let p_1, . . . , p_m be the list of distinct interval endpoints, sorted from left to right. Consider the partitioning of the real line induced by those points. The regions of this partitioning are called elementary intervals. Thus, the elementary intervals are, from left to right:

(−∞, p_1), [p_1, p_1], (p_1, p_2), [p_2, p_2], . . . , (p_{m−1}, p_m), [p_m, p_m], (p_m, +∞).

Given a set I of intervals, or segments, a segment tree T for I is structured as follows:
T is a binary tree.
Its leaves correspond to the elementary intervals induced by the endpoints in I. The elementary interval corresponding to a leaf v is denoted Int(v).
The internal nodes of T correspond to intervals that are the union of elementary intervals: the interval Int(N) corresponding to an internal node N is the union of the intervals corresponding to the leaves of the tree rooted at N. That implies that Int(N) is the union of the intervals of its two children.
Each node or leaf v in T stores the interval Int(v) and a set of intervals, in some data structure. This canonical subset of node v contains the intervals [x_1, x_2] from I such that [x_1, x_2] contains Int(v) and does not contain Int(parent(v)).
That is, each node in T stores the set of segments F(v) that span its interval, but do not span the interval of its parent. The 1D segment tree can be built in O(n log n) time, using O(n log n) space, and point stabbing queries can be answered in O(log n + k) time, where k is the number of segments intersecting the query point.

We make one minor change to T that increases the space usage (still O(n log n)) but allows us to speed up interval stabbing queries. Each internal node v stores, apart from the set F(v), all the segments stored in the subtree rooted at v, including F(v). We denote this set by L(v). The main benefit of this minor modification is that when an interval stabbing query is performed, only O(log n) canonical subsets are required to identify all the segments intersecting the interval. Next we show how to build associated data structures for L(v) and F(v) for each internal node v in T.

Consider querying the segment tree T with a square S ∈ S. There are three different cases that can occur, and for each of these cases we will build an associated data structure. That is, each internal node will have three types of associated data structures. Consider a query S = [x_1, x_2] × [y_1, y_2] and let µ_l and µ_r be the leaf nodes in T where the searches for the boundary values x_1 and x_2 end. See Figure 3 for an illustration of the search and the three cases. An internal node v is of one of the following types:
Type A: if Int(v) ⊆ [x_1, x_2],
Type B: if Int(v) ∩ [x_1, x_2] ≠ ∅, Int(v) ⊈ [x_1, x_2] and [x_1, x_2] ⊈ Int(v), or
Type C: if [x_1, x_2] ⊂ Int(v).
Figure 3
The primary tree T, and the three types of nodes in T that can be encountered during a query.
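A bare-bones sketch of this first level, a 1D segment tree with canonical subsets F(v), is given below in Python. This is our own simplified variant, not the paper's implementation: it uses the slabs between consecutive endpoints, ignores the degenerate point-intervals and the two unbounded elementary intervals, and omits the L(v) augmentation:

```python
class Node:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi      # Int(v), as an index range into xs
        self.left = self.right = None
        self.F = []                    # intervals spanning Int(v) but not Int(parent(v))

def build(lo, hi):
    # full binary tree over the elementary slabs xs[lo..hi]
    node = Node(lo, hi)
    if hi - lo > 1:
        mid = (lo + hi) // 2
        node.left, node.right = build(lo, mid), build(mid, hi)
    return node

def insert(node, x1, x2, xs, seg):
    lo, hi = xs[node.lo], xs[node.hi]
    if x1 <= lo and hi <= x2:          # [x1, x2] covers Int(v): store in F(v)
        node.F.append(seg)
        return
    if node.left is None or x2 <= lo or hi <= x1:
        return
    insert(node.left, x1, x2, xs, seg)
    insert(node.right, x1, x2, xs, seg)

def stab(node, x, xs):
    """All stored segments whose x-interval contains x (x strictly between
    endpoints): one root-to-leaf path, O(log n) canonical subsets."""
    out = list(node.F)
    if node.left is not None:
        mid = xs[(node.lo + node.hi) // 2]
        out += stab(node.left if x < mid else node.right, x, xs)
    return out

# Example: the x-intervals of three segments.
xs = [0, 1, 2, 3, 4]                   # sorted distinct endpoints
root = build(0, len(xs) - 1)
for seg in [(0, 2), (1, 3), (2, 4)]:
    insert(root, seg[0], seg[1], xs, seg)
```

Each interval ends up in O(log n) canonical subsets, and a stabbing query walks one root-to-leaf path, collecting the F(v) sets along it, e.g. `stab(root, 1.5, xs)` returns the intervals (0, 2) and (1, 3).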
For a Type A node we need to compute the length of all segments stored in the subtree withroot v in the y -interval [ y, y ]. Let s , . . . , s m be the set of m segments stored in L ( v ), and let Y = h y , . . . , y m i denote the y -coordinates of the endpoints of the segments in L ( v ) orderedfrom bottom-to-top. To simplify the description we assume that the values are distinct.Let δ ( y ) denote the total length of the segments in L ( v ) below y . For two consecutive y -values y i and y i +1 , the set of edges contributing to δ ( y i ) and δ ( y i +1 ) has increased ordecreased by one. So, we can compute a function ∆( y i , y i +1 ) that describes these changesin constant time. We then have δ ( y i +1 ) = δ ( y i ) + ∆( y i , y i +1 ), and thus we can compute δ ( y i +1 ) from δ ( y i ) in constant time after sorting the events. Hence, we can compute all the δ ( y i )-values and all the ∆( y i , y i +1 ) in time O ( m log m ).Given a y -value y one can compute δ ( y ) as δ ( y i ) + y − y i y i +1 − y i · ∆( y i , y i +1 ), where y i is thelargest y -value in Y smaller than y . Hence, our associated data structure for Type A nodes is a binary tree with respect to the values in Y , where each leaf stores the value y i , δ ( y i )and ∆( y i , y i +1 ). The tree can be computed in O ( m log m ) time using linear space, and cananswer queries in O (log m ) time. (cid:73) Lemma 13.
The associated data structures for Type A nodes in T can be constructed in O ( n log n ) time using O ( n log n ) space. Given a query square S for an associated datastructure of Type A stored in an internal node v , the value | L ( v ) ∩ S ∩ Int ( v ) | is returned intime O (log n ) . Associated data structure for Type B nodes:
The associated data structure for a Type B node is built to handle the case when the query square S = [x, x′] × [y, y′] intersects either the left boundary (x_l) or the right boundary (x_r) of Int(v), but not both. The two cases are symmetric, so we only describe the case when S intersects the right boundary.

The data structure returns a value M that is an upper bound on the length of the segments of F(v) within S and a lower bound on their length within S⁺, where S⁺ is a slightly expanded version of S; see Figure 4(c). Formally, S⁺ = [x − (ε/8)·|x_r − x|, x′ + (ε/8)·|x_r − x|] × [y − (ε/8)·|x_r − x|, y′ + (ε/8)·|x_r − x|].

If S⁺ spans Int(v) then we use a binary tree on F(v) to answer the query in logarithmic time, similar to the associated data structure for Type A nodes. If S⁺ does not span Int(v) then the query is performed on the Type B associated data structures. We first show how to construct these data structures, then we show how to handle the query.

Recall that all the segments in F(v) span the interval Int(v). Let s_1, …, s_m be the set of m segments in F(v) and let µ(s_i) be the angle of inclination of s_i, that is, the arctangent of its slope. For segments with slope in the interval [0, 1) the angle of inclination lies in the interval [0, π/4). Partition F(v) into κ_1 sets F_1(v), …, F_{κ_1}(v) such that for any segment s_i ∈ F_j(v) it holds that (j − 1)·π/(4κ_1) ≤ µ(s_i) < j·π/(4κ_1).

Consider one such set F_j(v) = {s_{j,1}, …, s_{j,m_j}}. Build a balanced binary search tree T_r^j on the y-coordinates of the right endpoints of the segments in F_j(v). The data structure can be constructed in O(m_j log m_j) time using linear space, and given a y-interval ℓ as a query, the number of right endpoints in T_r^j within ℓ can be reported in O(log m_j) time. From the above description and the fact that Σ_{v ∈ T} |F(v)| = O(n log n), it immediately follows that the total construction time for all the Type B nodes is O(n log n) and the total amount of space required is O(n log n). This completes the construction of the data structures on F(v).

It remains to show how to handle a query, i.e. how to compute M. We focus first on computing an M that upper bounds |F(v) ∩ S|, and we later prove that M lower bounds |F(v) ∩ S⁺|. There are two steps in computing M. The first step is to count the number of segments that intersect S. The second step is to multiply this count by the maximum possible length of intersection between such a segment and S. This clearly yields an upper bound on |F(v) ∩ S|. To obtain suitable maximum lengths in the second step, we need to subdivide the sets F_j(v) further.

The right endpoints of the segments of F_j(v) that can intersect S must lie in a y-interval I_j = [y, y′ + ȳ], where y and y′ are the y-coordinates of the bottom and top boundaries of S, and ȳ = (x_r − x)·tan(j·π/(4κ_1)). Subdivide I_j into three subintervals: I_j^1 = [y, y + ȳ), I_j^2 = [y + ȳ, y′) and I_j^3 = [y′, y′ + ȳ); see Fig. 4(b). Further subdivide I_j^1 and I_j^3 into κ_2 subintervals ℓ_1^1, …, ℓ_{κ_2}^1 and ℓ_1^3, …, ℓ_{κ_2}^3 each of equal length. Hence, we have partitioned I_j into a set L_j of 2κ_2 + 1 subintervals.
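The angle-class bookkeeping just described can be sketched as follows; sorted Python lists stand in for the balanced binary search trees T_r^j, and the segment coordinate format and the concrete value of κ_1 are our assumptions, not the paper's code:

```python
import bisect
import math

def build_angle_classes(segments, kappa1):
    """segments: list of (x1, y1, x2, y2) with x1 < x2 and slope in [0, 1).

    Partition by angle of inclination into kappa1 classes of width
    pi/(4*kappa1) each, keeping the right-endpoint y-coordinates sorted."""
    classes = [[] for _ in range(kappa1)]
    width = math.pi / 4 / kappa1
    for x1, y1, x2, y2 in segments:
        angle = math.atan2(y2 - y1, x2 - x1)      # inclination in [0, pi/4)
        j = min(int(angle / width), kappa1 - 1)   # class index of this segment
        classes[j].append(y2)                     # right-endpoint y-coordinate
    for ys in classes:
        ys.sort()
    return classes

def count_right_endpoints(classes, j, y_lo, y_hi):
    """Number of class-j right endpoints with y in [y_lo, y_hi),
    via two binary searches (the range counting query on T_r^j)."""
    ys = classes[j]
    return bisect.bisect_left(ys, y_hi) - bisect.bisect_left(ys, y_lo)
```

A query then sums, over every subinterval ℓ of every class, such a count multiplied by the class-specific maximum intersection length.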
Given these subdivisions L_j, our first step is simply to perform a range counting query in T_r^j for each ℓ ∈ L_j. Our second step is to multiply each count by the maximum length of intersection between S and any segment in F_j(v) with its right endpoint in ℓ. The product of these two values is clearly an upper bound on the total length of intersection between S and the segments in F_j(v) with right endpoint in ℓ. Finally, we sum over all subintervals ℓ ∈ L_j and then over all sets F_j(v) to obtain a value M that upper bounds |F(v) ∩ S|. The time required to handle a query is O(κ_1 · κ_2 · log m). It remains only to prove that M ≤ |F(v) ∩ S⁺|.

Figure 4 (a) A query S and the set F(v). (b) Illustrating the three intervals I_j^1, I_j^2 and I_j^3. (c) The expanded square S⁺.

By setting κ_1 = 16√2/ε and κ_2 = 16/ε we can prove the following.

Lemma 14. M ≤ |F(v) ∩ S⁺|.

Proof.
Consider an arbitrary interval ℓ ∈ L_j. Let A be a possible segment in F_j(v) with right endpoint on ℓ that maximises |A ∩ S|, and let B be any segment in F_j(v) with right endpoint on ℓ. It suffices to prove that |A ∩ S| ≤ |B ∩ S⁺|. There are three cases depending on the position of ℓ. Let B_1 = B ∩ S and let B_2 = B ∩ (S⁺ \ S).

ℓ ∈ I_j^1: If the right endpoint of B lies above that of A then we can move B vertically downward until the two right endpoints coincide. This does not increase |B ∩ S⁺|. If B has a smaller angle of inclination than A then |A ∩ S| − |B_1| ≤ (√2/κ_1)·|x_r − x| = (ε/16)·|x_r − x|. However, the length of B_2 is at least (ε/8)·|x_r − x|, hence |B ∩ S⁺| = |B_1| + |B_2| ≥ |A ∩ S| − (ε/16)·|x_r − x| + (ε/8)·|x_r − x| > |A ∩ S|.

If B has a greater angle of inclination than A then let p be the left endpoint of A ∩ S. Let x_p be the x-coordinate of p and let p′ be the point on B with x-coordinate x_p. The distance between p and p′ is bounded by (1/κ_1 + 1/κ_2)·|x_r − x| < (ε/8)·|x_r − x|, hence p′ ∈ S⁺, and it immediately follows that |B ∩ S⁺| > |A ∩ S|.

ℓ ∈ I_j^2: In this case the left endpoint of A ∩ S lies on the left boundary of S, and similarly the left endpoint of B_1 lies on the left boundary of S. We know that |A ∩ S| − |B_1| ≤ (√2/κ_1)·|x_r − x| = (ε/16)·|x_r − x|. However, the length of B_2 is at least (ε/8)·|x_r − x|, hence |B ∩ S⁺| ≥ |B_1| + |B_2| ≥ |A ∩ S| − (ε/16)·|x_r − x| + (ε/8)·|x_r − x| > |A ∩ S|.

ℓ ∈ I_j^3: This case is very similar to the first case and is left as an exercise for the reader.

This shows that |A ∩ S| ≤ |B ∩ S⁺| when κ_1 ≥ 16√2/ε and κ_2 ≥ 16/ε, which proves the lemma.

We summarise the associated data structures for Type B nodes with the following lemma.
Lemma 15.
The associated data structures for Type B nodes can be constructed in O(n log n) time using O(n log n) space. Given a query square S = [x, x′] × [y, y′] for an associated data structure of Type B stored in an internal node v, a real value M(v) is returned in O(ε⁻² · log n) time such that |F(v) ∩ S| ≤ M(v) ≤ |F(v) ∩ S⁺|, where S⁺ = [x − (ε/8)·w, x′ + (ε/8)·w] × [y − (ε/8)·w, y′ + (ε/8)·w] and w is the width of S ∩ Int(v).

Associated data structure for Type C nodes:
The associated data structure for Type C nodes has some similarities with the one for Type B nodes; however, it is a much harder case, since we cannot use the ordering on the y-coordinates of the right endpoints as we did for the Type B nodes. Instead we will precompute an approximation of |F(v) ∩ S| for every internal node v in T and every square S ∈ S that lies entirely within Int(v). Recall from Section 3.1 that S is a set of O(n/ε) squares that is guaranteed to contain a square realising a (6 + ε/8)-approximation.

Let S(v) denote the subset of squares in S that lie entirely within Int(v). The value stored for a square S ∈ S(v) is an upper bound on |F(v) ∩ S| and a lower bound on |F(v) ∩ S⁺|, where S⁺ is defined as for the Type B nodes; see Figure 4(c). If S⁺ intersects the left or right boundary of Int(v) then the query is performed as for a Type B node instead of a Type C node.

The associated data structure will be built on the set F(v); hence, all segments span the interval Int(v). Let s_1, …, s_m be the segments in F(v) and let µ(s_i) be the angle of inclination of s_i. Partition F(v) into κ_1 sets F_1(v), …, F_{κ_1}(v) such that for any segment s_i ∈ F_j(v) it holds that (j − 1)·π/(4κ_1) ≤ µ(s_i) < j·π/(4κ_1), with κ_1 = 16√2/ε.

Using a combination of the approach we used for Type B nodes and a result by Agarwal [1] we can prove the following lemma.

Lemma 16.
Given a set F_j(v) = {s_{j,1}, …, s_{j,n_1}} of line segments (as defined above) and a set S(v) = {S_1, …, S_{n_2}} of squares lying entirely within Int(v), one can compute, in O((n_1 + n_2/ε)^{4/3} polylog(n_1 + n_2/ε)) time using O((n_1 + n_2/ε)^{4/3} / log^{(2w+1)/3}(n_1 + n_2/ε)) space, where w is a constant smaller than 3.33, a set of n_2 real values {M_1^j(v), …, M_{n_2}^j(v)} such that for every i, 1 ≤ i ≤ n_2, the following holds: |F_j(v) ∩ S_i| ≤ M_i^j(v) ≤ |F_j(v) ∩ S_i⁺|.

Proof.
For each square S ∈ S(v), partition the left side and the bottom side of S into 2κ_2 subsegments of equal length, where κ_2 = 16/ε. Note that a line segment in F_j(v) can intersect at most one of these subsegments, since the segments have positive slope.

Agarwal [1] showed that given a set of n_r red line segments and a set of n_b blue line segments, one can count, for each red segment, the number of blue segments intersecting it, in overall time O(n^{4/3} log^{(w+2)/3} n) using O(n^{4/3} / log^{(2w+1)/3} n) space, where n = n_r + n_b and w is a constant smaller than 3.33. Note that there are faster algorithms, e.g. [10], but to the best of the authors' knowledge they do not return the number of red–blue intersections for each red segment.

Let the subsegments along the lower and left sides of every square S ∈ S(v) be our red set of segments; hence n_r = 2κ_2 · n_2. Let the n_b = n_1 segments in F_j(v) be our blue segments. Applying the algorithm by Agarwal [1] immediately gives that in O((n_1 + κ_2·n_2)^{4/3} polylog(n_1 + κ_2·n_2)) time we can compute the number of segments in F_j(v) that intersect each subsegment of the squares in S(v).

For a subsegment ℓ of S_i ∈ S(v), let a_{F_j}(ℓ) be the number of segments in F_j(v) that intersect ℓ, and let b_{F_j}(ℓ) be the maximum length of intersection between a possible segment in F_j(v) crossing ℓ and S_i. Set M_i^j(ℓ) = a_{F_j}(ℓ) · b_{F_j}(ℓ), which is an upper bound on the total length of intersection between S_i and the segments in F_j(v) intersecting ℓ. To get an upper bound on the total length of intersection between the segments in F_j(v) and S_i, denoted M_i^j(v), we simply sum the M_i^j(ℓ)-values over all subsegments ℓ of S_i. This is performed for each square S_i ∈ S(v), 1 ≤ i ≤ n_2.

Using a similar analysis as in Lemma 14 we get |s ∩ S_i| ≤ b_{F_j}(ℓ) ≤ |s ∩ S_i⁺| for any such segment s, and hence |F_j(v) ∩ S_i| ≤ M_i^j(v) ≤ |F_j(v) ∩ S_i⁺|, which proves the lemma.

For each set F_j(v) apply Lemma 16, and for each square S_i ∈ S(v) precompute the value M_i = Σ_{j=1}^{κ_1} M_i^j(v). Store all the squares lying within the interval Int(v) in a balanced binary search tree together with their precomputed values M_i. Note that each square of S can appear at most once on each level of the primary segment tree structure T. Furthermore, an edge s ∈ π can straddle at most two intervals on a level. As a result, the total amount of time spent on one level of the segment tree to build the Type C associated data structures is O((n/ε)^{4/3} log^{(w+2)/3}(n/ε)).
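Agarwal's subquadratic red–blue counting is the heart of this precomputation. As a hedged stand-in, the same bookkeeping can be demonstrated with naive quadratic counting and a deliberately crude maximum-length bound b(ℓ) = √2·side (the paper uses a tighter, per-angle-class bound); the result still upper bounds the true intersection length. All names and the square/segment formats below are our own:

```python
import math

def seg_intersect(p1, p2, p3, p4):
    """Orientation-based test: do segments p1p2 and p3p4 intersect?"""
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    d1, d2 = cross(p3, p4, p1), cross(p3, p4, p2)
    d3, d4 = cross(p1, p2, p3), cross(p1, p2, p4)
    return d1*d2 <= 0 and d3*d4 <= 0

def type_c_upper_bounds(segments, squares, kappa2):
    """For every square, sum count(l) * b(l) over the 2*kappa2 subsegments of
    its left and bottom sides (segments with slope in [0, 1) enter through
    one of these sides).  Naive counting replaces Agarwal's algorithm."""
    M = []
    for x, y, side in squares:
        step = side / kappa2
        pieces = []
        for k in range(kappa2):                      # left side, bottom-up
            pieces.append(((x, y + k*step), (x, y + (k+1)*step)))
        for k in range(kappa2):                      # bottom side, left-right
            pieces.append(((x + k*step, y), (x + (k+1)*step, y)))
        b = math.sqrt(2) * side                      # crude max |s ∩ S| bound
        total = 0.0
        for a, c in pieces:
            count = sum(seg_intersect(p, q, a, c) for p, q in segments)
            total += count * b
        M.append(total)
    return M
```

Because a segment touching the corner shared by two subsegments may be counted twice, this sketch can overshoot, but it never undercounts, which is all the upper-bound argument needs.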
Since the number of levels in T is O(log n), the total time required to build all the Type C associated data structures is O((n/ε)^{4/3} log^{(w+5)/3}(n/ε)). Putting all the pieces together we get:

Lemma 17.
The Type C associated data structures for T can be computed in time O((n/ε)^{4/3} polylog(n/ε)) using O((n/ε)^{4/3}) space. Given a query square S ∈ S(v), a value M(v) can be returned in O(log(n/ε)) time such that: |F(v) ∩ S| ≤ M(v) ≤ |F(v) ∩ S⁺|.

In the previous section we showed how to construct a two-level data structure that uses a modified segment tree as the primary tree, and a set of associated data structures for the internal nodes of the primary tree. The primary tree requires O(n log n) space, and the complexity of the associated data structures is dominated by the Type C nodes, which require O((n/ε)^{4/3} log^{(w+5)/3}(n/ε)) time and O((n/ε)^{4/3} / log^{(2w+1)/3}(n/ε)) space to construct. Given a query square S ∈ S, a value M is returned in O(ε⁻² · log n) time such that Υ(S) ≤ M ≤ Υ(S⁺).

According to Lemma 12 there exists a square S ∈ S whose packedness value is within a factor of (6 + ε/8) of the maximum packedness value of π. Using the data structure described in this section we get a (6 + ε/8)(1 + ε/8)-approximation of this value; since ε is assumed to be at most 1, we finally get Theorem 2.

In this paper we gave two approximation algorithms for the packedness value of a polygonal curve. The obvious open question is whether one can obtain a fast and practical (1 + ε)-approximation. We also computed approximate packedness values for 16 real-world data sets, and the experiments indicate that the notion of c-packedness is a useful realistic input model for curves and trajectories.

References

Pankaj K. Agarwal. Partitioning arrangements of lines II: applications.
Discrete & Computational Geometry, 5:533–573, 1990. doi:10.1007/BF02187809.

Pankaj K. Agarwal, Rolf Klein, Christian Knauer, Stefan Langerman, Pat Morin, Micha Sharir, and Michael Soss. Computing the detour and spanning ratio of paths, trees, and cycles in 2D and 3D. Discrete & Computational Geometry, 39(1):17–37, 2008.

H. Alt and M. Godau. Computing the Fréchet distance between two polygonal curves. International Journal of Computational Geometry, 5:75–91, 1995.

Helmut Alt, Christian Knauer, and Carola Wenk. Comparison of distance measures for planar curves. Algorithmica, 38(1):45–58, 2004.

Boris Aronov, Sariel Har-Peled, Christian Knauer, Yusu Wang, and Carola Wenk. Fréchet distance for curves, revisited. In Proceedings of the European Symposium on Algorithms (ESA), pages 52–63, 2006.

Karl Bringmann. Why walking the dog takes time: Fréchet distance has no strongly subquadratic algorithms unless SETH fails. In Proceedings of the 55th IEEE Symposium on Foundations of Computer Science (FOCS), pages 661–670, 2014.

Karl Bringmann and Marvin Künnemann. Improved approximation for Fréchet distance on c-packed curves matching conditional lower bounds. International Journal on Computational Geometry and Applications, 27(1-2):85–120, 2017.

P. B. Callahan and S. R. Kosaraju. A decomposition of multidimensional point sets with applications to k-nearest-neighbors and n-body potential fields. Journal of the ACM, 42(1):67–90, 1995.

Bernard Chazelle. Reporting and counting segment intersections. Journal of Computer and System Sciences, 32(2):156–182, 1986.

Bernard Chazelle. Cutting hyperplanes for divide-and-conquer. Discrete & Computational Geometry, 9:145–158, 1993.

Daniel Chen, Anne Driemel, Leonidas J. Guibas, Andy Nguyen, and Carola Wenk. Approximate map matching with respect to the Fréchet distance. In Proceedings of the 13th Workshop on Algorithm Engineering and Experiments (ALENEX), pages 75–83, 2011.

P. C. Cross, D. M. Heisey, J. A. Bowers, C. T. Hay, J. Wolhuter, P. Buss, M. Hofmeyr, A. L. Michel, R. G. Bengis, T. L. F. Bird, et al. Disease, predation and demography: assessing the impacts of bovine tuberculosis on African buffalo by monitoring at individual and population levels. Journal of Applied Ecology, 46(2):467–475, 2009.

D. Mount, R. Silverman, and A. Wu. On the area of overlap of translated polygons. Computer Vision and Image Understanding, 64(1):53–61, 1996.

M. de Berg, O. Cheong, M. J. van Kreveld, and M. H. Overmars. Computational Geometry: Algorithms and Applications, 3rd Edition. Springer, 2008.

Mark de Berg. Linear size binary space partitions for uncluttered scenes. Algorithmica, 28(3):353–366, 2000.

Mark de Berg, Frank van der Stappen, Jules Vleugels, and Matya Katz. Realistic input models for geometric algorithms. Algorithmica, 34(1):81–97, 2002.

Anne Driemel and Sariel Har-Peled. Jaywalking your dog: Computing the Fréchet distance with shortcuts. SIAM Journal on Computing, 42(5):1830–1866, 2013.

Anne Driemel, Sariel Har-Peled, and Carola Wenk. Approximating the Fréchet distance for realistic curves in near linear time. Discrete & Computational Geometry, 48(1):94–127, 2012.

Anne Driemel and Amer Krivosija. Probabilistic embeddings of the Fréchet distance. In Proceedings of the 16th International Workshop on Approximation and Online Algorithms, pages 218–237, 2018.

Jeff Erickson. On the relative complexities of some geometric problems. In Proceedings of the 7th Canadian Conference on Computational Geometry, pages 85–90. Carleton University, Ottawa, Canada, 1995.

Elias Frentzos, Kostas Gratsias, Nikos Pelekis, and Yannis Theodoridis. Nearest neighbor search on moving object trajectories. In SSTD, pages 328–345. Springer, 2005.

M. Maurice Fréchet. Sur quelques points du calcul fonctionnel. Rendiconti del Circolo Matematico di Palermo, 22:1–72, 1906. doi:10.1007/BF03018603.

Anna Gagliardo, Enrica Pollonara, and Martin Wikelski. Pigeon navigation: exposure to environmental odours prior release is sufficient for homeward orientation, but not for homing. Journal of Experimental Biology, jeb140889, 2016.

Luca Giuggioli, Thomas J. McKetterick, and Marc Holderied. Delayed response and biosonar perception explain movement coordination in trawling bats. PLoS Computational Biology, 11(3):e1004089, 2015.

J. Gudmundsson, M. J. van Kreveld, and F. Staals. Algorithms for hotspot computation on trajectory data. In Proceedings of the 21st ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems, pages 134–143. ACM, 2013.

Joachim Gudmundsson, Michael Horton, John Pfeifer, and Martin Seybold. A practical index structure supporting Fréchet proximity queries among trajectories. arXiv:2005.13773.

Joachim Gudmundsson and Michiel H. M. Smid. Fast algorithms for approximate Fréchet matching queries in geometric trees. Computational Geometry, 48(6):479–494, 2015.

Sariel Har-Peled and Benjamin Raichel. The Fréchet distance revisited and extended. ACM Transactions on Algorithms, 10(1), 2014.

K. Buchin, M. Buchin, W. Meulemans, and W. Mulzer. Four Soviets walk the dog – with an application to Alt's conjecture. Discrete & Computational Geometry, 58(1):180–216, 2017.

Roland Kays, James Flowers, and Suzanne Kennedy-Stoskopf. Cat tracker project, 2016.

Huanhuan Li, Jingxian Liu, Ryan Wen Liu, Naixue Xiong, Kefeng Wu, and Tai-hoon Kim. A dimensionality reduction-based multi-step clustering method for robust vessel trajectory analysis. Sensors, 17(8):1792, 2017.

Microsoft. Microsoft Research Asia, GeoLife GPS trajectories, 2012.

Joseph S. B. Mitchell, David M. Mount, and Subhash Suri. Query-sensitive ray shooting. International Journal on Computational Geometry and Applications, 7(4):317–347, 1997.

NOAA. National Hurricane Center, National Oceanic and Atmospheric Administration, HURDAT2 Atlantic hurricane database, 2017.

Mark H. Overmars and A. Frank van der Stappen. Range searching and point location among fat objects. Journal of Algorithms, 21(3):629–656, 1996.

Caroline L. Poli, Autumn-Lynn Harrison, Adriana Vallarino, Patrick D. Gerard, and Patrick G. R. Jodice. Dynamic oceanography determines fine scale foraging behavior of masked boobies in the Gulf of Mexico. PLoS ONE, 12(6):e0178318, 2017.

Rajiv Shah and Rob Romijnders. Applying deep learning to basketball trajectories. arXiv preprint arXiv:1608.03793, 2016.

STATS. STATS LLC – data science, 2015.

Frank van der Stappen and Mark H. Overmars. Motion planning amidst fat obstacles (extended abstract). In Proceedings of the 10th Annual Symposium on Computational Geometry, pages 31–40, 1994.

Frank van der Stappen, Mark H. Overmars, Mark de Berg, and Jules Vleugels. Motion planning in environments with low obstacle density. Discrete & Computational Geometry, 20(4):561–587, 1998.

Antoine Vigneron. Geometric optimization and sums of algebraic functions. ACM Transactions on Algorithms, 10(1):4:1–4:20, 2014. doi:10.1145/2532647.

Martin Wikelski, Elena Arriero, Anna Gagliardo, Richard A. Holland, Markku J. Huttunen, Risto Juvaste, Inge Mueller, Grigori Tertitski, Kasper Thorup, Martin Wild, et al. True navigation in migrating gulls requires intact olfactory nerves. Scientific Reports, 5:17061, 2015.

Ben H. Williams, Marc Toussaint, and Amos J. Storkey. Extracting motion primitives from natural handwriting data. In ICANN, pages 634–643. Springer, 2006.

Jing Yuan, Yu Zheng, Xing Xie, and Guangzhong Sun. Driving with knowledge from the physical world. In Proc. of the 17th ACM SIGKDD Conf., pages 316–324. ACM, 2011.

Jing Yuan, Yu Zheng, Chengyang Zhang, Wenlei Xie, Xing Xie, Guangzhong Sun, and Yan Huang. T-drive: driving directions based on taxi trajectories. In