Translating Hausdorff is Hard: Fine-Grained Lower Bounds for Hausdorff Distance Under Translation
Karl Bringmann
Saarland University and Max Planck Institute for Informatics, Saarland Informatics Campus, Saarbrücken, [email protected]
André Nusser
Saarbrücken Graduate School of Computer Science and Max Planck Institute for Informatics, Saarland Informatics Campus, Saarbrücken, [email protected]
Abstract
Computing the similarity of two point sets is a ubiquitous task in medical imaging, geometric shape comparison, trajectory analysis, and many more settings. Arguably the most basic distance measure for this task is the Hausdorff distance, which assigns to each point from one set the closest point in the other set and then evaluates the maximum distance of any assigned pair. A drawback is that this distance measure is not translational invariant, that is, comparing two objects just according to their shape while disregarding their position in space is impossible.

Fortunately, there is a canonical translational invariant version, the Hausdorff distance under translation, which minimizes the Hausdorff distance over all translations of one of the point sets. For point sets of size n and m, the Hausdorff distance under translation can be computed in time Õ(nm) for the L1 and L∞ norms [Chew, Kedem SWAT'92] and Õ(nm(n+m)) for the L2 norm [Huttenlocher, Kedem, Sharir DCG'93].

As these bounds have not been improved for over 25 years, in this paper we approach the Hausdorff distance under translation from the perspective of fine-grained complexity theory. We show (i) a matching lower bound of (nm)^{1−o(1)} for L1 and L∞ assuming the Orthogonal Vectors Hypothesis and (ii) a matching lower bound of n^{2−o(1)} for L2 in the imbalanced case of m = O(1) assuming the 3SUM Hypothesis.

2012 ACM Subject Classification Theory of computation → Problems, reductions and completeness
Keywords and phrases
Hausdorff Distance Under Translation, Fine-Grained Complexity Theory, Lower Bounds
Funding
Karl Bringmann: This work is part of the project TIPEA that has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 850979).
Introduction

As data sets become larger and larger, the need for faster algorithms to handle such amounts of data becomes increasingly pressing. One very common type of data that is created during measurements is point sets in the plane, for example when recording GPS trajectories or describing shapes of objects, in medical image analysis, and in various data science applications.

A fundamental algorithmic tool for analyzing point sets is to compute the similarity of two given sets of points. There are several different measures of similarity in this setting, for example Hausdorff distance [20], geometric bottleneck matching [17], Fréchet distance [3], and Dynamic Time Warping [24]. Among these measures, the Hausdorff distance is arguably the most basic and intuitive: It assigns to each point from one set the closest point in the other set and then evaluates the maximum distance of all assigned pairs of points. For a discussion of the other previously mentioned distance measures, see Section 1.1.

While these similarity measures are of great practical relevance, for some applications it is a drawback that they are not translational invariant, i.e., when translating a point set the distance can – and in most cases will – change. This is unfavorable in applications that ask for comparing the shape of two objects, meaning that the absolute position of an object is irrelevant. Examples of this task arise in 2D object shape similarity, medical image analysis [18], classification of handwritten characters [10], movement patterns of animals [12], and sports analysis [16].

Fortunately, any similarity measure has a canonical translational invariant version, obtained by minimizing the similarity measure over all translations of the two given point sets. For the Hausdorff distance this variant is known as the
Hausdorff distance under translation, see Section 2 for a formal definition. Given two point sets in the plane of size n and m, the Hausdorff distance under translation can be computed in time O(nm log nm) for the L1 and L∞ norms [15], and in time O(nm(n+m) log nm) for the L2 norm [21]. We are not aware of any lower bounds for this problem, not even conditional on a plausible hypothesis. The only results in this direction are Ω(n³) lower bounds on the arrangement size [15] and on the number of connected components of the feasible translations [27] (for the decision problem on points in the plane with n = m). However, these bounds also hold for L1 and L∞, where they are "broken" by the O(nm log nm)-time algorithm [15], so apparently these bounds are irrelevant for the running time complexity.

In this paper, we approach the Hausdorff distance under translation from the viewpoint of fine-grained complexity theory [28]. For two problem settings, we show that the known algorithms are optimal up to lower order factors assuming standard hypotheses:

We show an (nm)^{1−o(1)} lower bound for L1 and L∞, matching the O(nm log nm)-time algorithm from [15] up to lower order factors. This result holds conditional on the Orthogonal Vectors Hypothesis, which states that finding two orthogonal vectors among two given sets of n binary vectors in d dimensions cannot be done in time O(n^{2−ε} poly(d)) for any ε >
0. It is well-known that the Orthogonal Vectors Hypothesis is implied by the Strong Exponential Time Hypothesis [29], and thus our lower bound also holds assuming the latter [22]. These two hypotheses are the most standard assumptions used in fine-grained complexity theory in the last decade [28].

We show an n^{2−o(1)} lower bound for L2 in the imbalanced case m = O(1), matching the O(nm(n+m) log nm)-time algorithm from [21] up to lower order factors. Previously, an n^{2−o(1)} lower bound was only known for the more general problem of computing the Hausdorff distance under translation of sets of segments in the case that both sets have size n [6]. Our result holds conditional on the 3SUM Hypothesis, which states that deciding whether among n given integers there are three that sum to 0 requires time n^{2−o(1)}. This hypothesis was introduced by Gajentaan and Overmars [19], is a standard assumption in computational geometry [23], and has also found a wealth of applications beyond geometry (see, e.g., [25, 4, 2, 1]).

There is a directed and an undirected variant of the Hausdorff distance, see Section 2. In this introduction, we do not differentiate between these two, since all our statements hold for both variants. By Õ-notation we ignore logarithmic factors in n and m.

Our lower bounds close gaps that have not seen any progress for over 25 years. Furthermore, note that our second lower bound shows a separation between the L2 norm and the L1 and L∞ norms, as in the imbalanced case m = O(1) the latter admit an Õ(n)-time algorithm [15] while the former requires time n^{2−o(1)} assuming the 3SUM Hypothesis. We leave it as an open problem whether for L2 the balanced case n = m requires time n^{3−o(1)}.
Related Work

Our work continues a line of research on fine-grained lower bounds in computational geometry, which had early success with the 3SUM Hypothesis [19] and recently got a new impulse with the Orthogonal Vectors Hypothesis (or Strong Exponential Time Hypothesis) and the resulting lower bounds for the Fréchet distance [7], see also [13, 11]. Continuing this line of research is getting increasingly difficult, although there are still many classic problems from computational geometry without matching lower bounds. In this paper we obtain such bounds for two settings of the classic Hausdorff distance under translation.

Besides the Hausdorff distance, there are several other distance measures on point sets, including geometric bottleneck matching [17], Fréchet distance [3], and Dynamic Time Warping [24]. Geometric bottleneck matching minimizes the maximal distance in a perfect matching between the two given point sets. Fréchet distance and Dynamic Time Warping additionally take the order of the input points into account. They both consider the same class of traversals of the input points; the Fréchet distance minimizes the maximal distance that occurs during the traversal, while Dynamic Time Warping minimizes the sum of distances.

Let us discuss the canonical translational invariant versions of these distance measures. For geometric bottleneck matching under translation, Efrat et al. designed an efficient algorithm [17]. The discrete Fréchet distance under translation has an Õ(n^{4.66...})-time algorithm and a conditional lower bound of n^{4−o(1)} [9], see also [10] for algorithm engineering work on this topic.
While Dynamic Time Warping is a very popular measure (in particular for video and speech processing), its canonical translational invariant version cannot be computed exactly in L2, since it contains the geometric median problem as a special case [5]. Further work on the Hausdorff distance under translation includes an O((n+m) log nm)-time algorithm for point sets in one dimension [26], as well as generalizations to dimension d > 2.

Preliminaries

In this paper, we mostly consider finite point sets which lie in R². For any p ∈ R², we use p_x and p_y to refer to its first and second component, respectively. For a point set A ⊂ R² and a translation τ ∈ R², we define A + τ := {a + τ | a ∈ A}. To denote index sets, we often use [n] := {1, ..., n}. Given a point x ∈ R^d, its p-norm is defined as

‖x‖_p := (Σ_{i ∈ [d]} |x_i|^p)^{1/p}.

We now introduce several distance measures, which are all versions of the famous Hausdorff distance. First, let us define the most basic version. Let
A, B ⊂ R² be two point sets. The directed Hausdorff distance is defined as

δ⃗H(A, B) := max_{a ∈ A} min_{b ∈ B} ‖a − b‖_p.
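For concreteness, the directed and undirected Hausdorff distances can be evaluated by brute force over all point pairs in time O(nm). The following Python sketch is purely illustrative (it is not the near-linear algorithm of [15]); it implements the definitions directly:

```python
def directed_hausdorff(A, B, norm):
    """delta_H->(A, B): max over a in A of the distance to its closest b in B."""
    return max(min(norm(ax - bx, ay - by) for (bx, by) in B) for (ax, ay) in A)

def hausdorff(A, B, norm):
    """Undirected Hausdorff distance: maximum of the two directed distances."""
    return max(directed_hausdorff(A, B, norm), directed_hausdorff(B, A, norm))

# the three norms discussed in the paper
l1   = lambda dx, dy: abs(dx) + abs(dy)
l2   = lambda dx, dy: (dx * dx + dy * dy) ** 0.5
linf = lambda dx, dy: max(abs(dx), abs(dy))
```

The distance under translation additionally minimizes over all τ ∈ R², which is the computationally interesting part handled by the algorithms discussed above.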
Note that, intuitively, the directed Hausdorff distance measures the distance from A to B but not from B to A, and it is not symmetric. A symmetric variant of the Hausdorff distance, the undirected Hausdorff distance, is defined as

δH(A, B) := max{δ⃗H(A, B), δ⃗H(B, A)}.

Note that, by definition, δ⃗H(A, B) ≤ δH(A, B). Both of the above distance measures can be modified to a version which is invariant under translation. The directed Hausdorff distance under translation is defined as

δT⃗H(A, B) := min_{τ ∈ R²} δ⃗H(A, B + τ),

and the undirected Hausdorff distance under translation is defined as

δTH(A, B) := min_{τ ∈ R²} δH(A, B + τ).

Again, it holds that δT⃗H(A, B) ≤ δTH(A, B). Naturally, for all of the above distance measures, the decision problem is defined such that we are given two point sets
A, B and a threshold distance δ, and ask if the distance of A, B is at most δ.

For the Hausdorff distance (without translation), the undirected distance is at most as hard as the directed distance, because the undirected distance can be calculated using two calls to an algorithm computing the directed distance. However, note that for the Hausdorff distance under translation, we cannot just compute the directed distance twice and then obtain the undirected distance, as we have to take the maximum for the same translation.

(mn)^{1−o(1)} lower bound for L1 and L∞

We now present a conditional lower bound of (mn)^{1−o(1)} for the Hausdorff distance under translation for L1 and L∞. For simplicity, we present the lower bound for the L1 case. This construction is equivalent to the L∞ case, via a rotation by π/4. Our lower bound is based on the hypothesized hardness of the Orthogonal Vectors problem.

▶ Definition 1 (Orthogonal Vectors Problem (OV)). Given two sets
X, Y ⊂ {0, 1}^d with |X| = m, |Y| = n, decide whether there exist x ∈ X and y ∈ Y with ⟨x, y⟩ = 0.

A popular hypothesis from fine-grained complexity theory is as follows.
▶ Definition 2 (Orthogonal Vectors Hypothesis (OVH)). The Orthogonal Vectors problem cannot be solved in time O((nm)^{1−ε} poly(d)) for any ε > 0.

This hypothesis is typically stated and used for the balanced case n = m. However, it is known that the hypothesis for the balanced case is equivalent to the hypothesis for any unbalanced case m = n^α for any fixed constant α >
0, see, e.g., [8, Lemma 5.1 in the arXiv version].

We now describe a reduction from Orthogonal Vectors to the Hausdorff distance under translation. To this end, we are given two sets of d-dimensional binary vectors X = {x_1, ..., x_m} and Y = {y_1, ..., y_n} with |X| = m and |Y| = n, and we construct an instance of the undirected Hausdorff distance under translation defined by point sets A and B and a

(Actually, the directed Hausdorff distance is also at most as hard as the undirected Hausdorff distance – thus, they are equally hard – as δ⃗H(A, B) = δH(A ∪ B, B).)

Figure 1
Sketch of the reduction from OV to the undirected Hausdorff distance under translation. The microtranslations in the order of ε are not shown in this sketch.

decision distance δ = 1. First, we describe the high-level structure of our reduction. The point set A consists only of Vector Gadgets, which encode the vectors of X using 2md points. The point set B consists of three types of gadgets:

Vector Gadgets:
They encode the vectors from Y, very similar to the Vector Gadgets of A.

Translation Gadget: It restricts the possible translations of the point set B.

Undirected Gadget: It makes our reduction work for the undirected Hausdorff distance under translation by ensuring that the maximum over the directed Hausdorff distances is always attained by δ⃗H(B + τ, A).

Intuitively, the two dimensions of the translation choose the vectors x ∈ X and y ∈ Y by aligning a Vector Gadget from A with a Vector Gadget from B in a certain way. An alignment of distance at most 1 is only possible if x and y are orthogonal. See Figure 1 for an overview of the reduction. We now describe the gadgets in detail. Let ε > 0 be sufficiently small; a value polynomially small in 1/(mnd) suffices. Recall that the distance for which we want to solve the decision problem is δ = 1. Furthermore, we denote the i-th component of a vector v by v[i].

Vector Gadget
We define a general Vector Gadget, which we then use at several places by translating it. Given a vector v ∈ {0, 1}^d, the Vector Gadget consists of the points p_1, ..., p_d ∈ R²:

p_i = (ε, iε) if v[i] = 0, and p_i = (0, iε) if v[i] = 1.

We denote the Vector Gadget created from vector v by V(v). Additionally, we define a mirrored version of the gadget V(v), defined as V̄(v) := V(v̄), where v̄ is the inversion of v, i.e., each bit is flipped.

Figure 2
A depiction of the two types of Vector Gadgets and how they are placed to check for orthogonality.
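As an illustration (not part of the original construction), the Vector Gadget and its mirrored variant can be generated as follows; this sketch assumes the point coordinates as reconstructed above, namely p_i = (ε, iε) for a 0-entry and p_i = (0, iε) for a 1-entry:

```python
def vector_gadget(v, eps):
    """Points p_1, ..., p_d of V(v): x-coordinate eps for a 0-entry, 0 for a 1-entry."""
    return [((eps if bit == 0 else 0.0), (i + 1) * eps) for i, bit in enumerate(v)]

def mirrored_gadget(v, eps):
    """The mirrored gadget V-bar(v) := V(v-bar), built from the bitwise-flipped vector."""
    return vector_gadget([1 - bit for bit in v], eps)
```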
▶ Lemma 3.
Given two vectors v_1, v_2 ∈ {0, 1}^d and corresponding Vector Gadgets V_1 = V(v_1) and V_2 = V̄(v_2) + (1, 0), we have δH(V_1, V_2) ≤ 1 if and only if v_1 · v_2 = 0.

Proof.
Let the points of V_1 (resp. V_2) be denoted as p_1, ..., p_d (resp. q_1, ..., q_d). First, note that ‖p_i − q_j‖ = 1 + (v_1[i] + v_2[j] − 1)ε + |i − j|ε > 1 for i ≠ j. Thus, for the Hausdorff distance to be at most 1, we have to match p_i to q_i for all i ∈ [d]. This is possible if and only if v_1[i] = 0 or v_2[i] = 0, as p_i and q_i are only far for v_1[i] = 1 and v_2[i] = 1. ◀

See Figure 2 for an example. Note that if we swap both gadgets and invert both vectors (i.e., flip all their bits), the Hausdorff distance does not change, and thus an analogous version of Lemma 3 holds in this case, as we are just performing a double inversion.
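Lemma 3 can also be sanity-checked numerically. The following sketch encodes the gadget coordinates as reconstructed above (the second gadget mirrored and shifted by (1, 0)) and exhaustively verifies the stated equivalence for small dimension; it is an illustration, not part of the paper's argument:

```python
from itertools import product

def hausdorff_l1(A, B):
    # brute-force undirected Hausdorff distance in the L1 norm
    d = lambda p, q: abs(p[0] - q[0]) + abs(p[1] - q[1])
    h = lambda P, Q: max(min(d(p, q) for q in Q) for p in P)
    return max(h(A, B), h(B, A))

def gadget(v, eps):
    # p_i = (eps, i*eps) for a 0-entry, (0, i*eps) for a 1-entry
    return [((eps if b == 0 else 0.0), (i + 1) * eps) for i, b in enumerate(v)]

def lemma3_check(v1, v2, eps=1e-3):
    """True iff Hausdorff(V(v1), V-bar(v2) + (1, 0)) <= 1 (up to float slack)."""
    V1 = gadget(v1, eps)
    V2 = [(x + 1.0, y) for (x, y) in gadget([1 - b for b in v2], eps)]
    return hausdorff_l1(V1, V2) <= 1.0 + 1e-12

# exhaustive check for d = 4: distance <= 1 iff <v1, v2> = 0
for v1, v2 in product(product((0, 1), repeat=4), repeat=2):
    assert lemma3_check(v1, v2) == (sum(a * b for a, b in zip(v1, v2)) == 0)
```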
▶ Lemma 4.
Given two vectors v_1, v_2 ∈ {0, 1}^d and corresponding Vector Gadgets V_1 = V̄(v_1) and V_2 = V(v_2) + (1, 0), we have δH(V_1, V_2) ≤ 1 if and only if v̄_1 · v̄_2 = 0, where v̄_1, v̄_2 are the inversions of v_1, v_2.

For two Vector Gadgets V_1 = V(v_1) + (x, y) and V_2 = V(v_2) + (x + D, y), we say that V_1 and V_2 are vertically aligned, or more precisely vertically aligned in distance D.

Translation Gadget
To ensure that B cannot be translated arbitrarily, we introduce a gadget to restrict the translations to the regime we require. The Translation Gadget T consists of two translated Vector Gadgets of constant vectors:

T := (V(1^d) − (2 − nε, 0)) ∪ (V(0^d) + (2 + 2ε, 0)).

▶ Lemma 5.
Let P ⊂ [−1 − ε, 1 + ε] × R be a point set. If δT⃗H(T, P) ≤ 1, then τ*_x ∈ [−(n + 1)ε − ε, −ε], where τ* is any translation satisfying δ⃗H(T + τ*, P) ≤ 1.

Proof.
We show the contrapositive. Therefore, assume the converse, i.e., that τ*_x ∉ [−(n + 1)ε − ε, −ε]. If τ*_x < −(n + 1)ε − ε, then −1 − ε − (−2 + nε + ε + τ*_x) > 1, i.e., the left part of T cannot contain any point of P in distance 1. If τ*_x > −ε, then 2 + 2ε + τ*_x − (1 + ε) > 1, i.e., the right part of T cannot contain any point of P in distance 1. Thus, δT⃗H(T, P) > 1. ◀

Undirected Gadget

To ensure that each point in A can be matched to a point in B with distance at most 1, we add auxiliary points to B. The Undirected Gadget is defined by the point set

U := {(−1/2, 0), (1/2, 0)}.

▶ Lemma 6.
Given a set of points P ⊂ [−1 − ε, 1 + ε] × [−1/8, 1/8], it holds that δ⃗H(P, U + τ) ≤ 1 for any τ ∈ [−(n + 1)ε − ε, (n + 1)ε + ε] × [−1/8, 1/8].

Proof.
By symmetry, we can restrict to proving that the distance of the point set P_1 := P ∩ ([0, 1 + ε] × [−1/8, 1/8]) to the point (1/2, 0) + τ is at most 1. For any p ∈ P_1, we have |p_x − (1/2 + τ_x)| ≤ 1/2 + O(nε) and |p_y − τ_y| ≤ 1/4. Thus, ‖p − ((1/2, 0) + τ)‖ ≤ 3/4 + O(nε), which is less than 1 for small enough ε. ◀

We now describe the reduction and prove its correctness. We construct the point sets of our Hausdorff distance under translation instance as follows. The first set, i.e., set A, consists only of Vector Gadgets:

A := ⋃_{i ∈ [m]} (V(x_i) + (−1 − ½ε, i·dε)) ∪ ⋃_{i ∈ [m]} (V(1^d) + (1 + ½ε, i·dε)).

The second set, i.e., set B, consists of Vector Gadgets, the Translation Gadget, and the Undirected Gadget:

B := ⋃_{j ∈ [n]} (V̄(y_j) + (jε, 0)) ∪ T ∪ U.

See Figure 1 for a sketch of the above construction. To reference the Vector Gadgets as they are used in the reduction, we use the notation V_r(x_i) := V(x_i) + (−1 − ½ε, i·dε) and V_r(y_j) := V̄(y_j) + (jε, 0).

We can now prove correctness of our reduction. In the reduction, we return some canonical positive instance if the vector 0^d is contained in any of the two OV sets. This allows us to drop all 1^d vectors from the input, as they cannot be orthogonal to any other vector. Thus, we can assume that all vectors in our input contain at least one 0-entry and at least one 1-entry.

▶ Theorem 7.
Computing the directed or undirected Hausdorff distance under translation in L1 or L∞ for two sets of size n and m cannot be done in time O((mn)^{1−γ}) for any γ > 0, unless the Orthogonal Vectors Hypothesis fails.

Proof.
Recall that we only have to consider the L1 case. We first prove that there is a pair of orthogonal vectors x ∈ X and y ∈ Y if and only if δTH(A, B) ≤ 1.

⇒: Assume that there exist x_i ∈ X, y_j ∈ Y with ⟨x_i, y_j⟩ = 0. Then consider the translation τ = (−(j + ½)ε, i·dε), which vertically aligns the Vector Gadgets V_r(x_i) and V_r(y_j) + τ in distance 1. As x_i and y_j are orthogonal, it follows from Lemma 3 that δ⃗H(V_r(y_j) + τ, A) ≤
1. It remains to show that all remaining points of B + τ have a point of A in distance at most 1. The Vector Gadgets in B + τ which correspond to y_{j'} with j' < j are strictly to the left of V_r(y_j) + τ and are thus also in Hausdorff distance at most 1 from V_r(x_i). If j = n, then we are done with the Vector Gadgets. Otherwise, consider the Vector Gadget V_r(y_{j+1}) + τ. We claim that each point of it is in distance at most 1 from V(1^d) + (1 + ½ε, i·dε). As the two gadgets are vertically aligned, we just have to check their horizontal distance, which is

1 + ½ε − ((j + 1)ε − (j + ½)ε) = 1.

Thus, by Lemma 3, we have δ⃗H(V_r(y_{j+1}) + τ, A) ≤
1. Now, by the same argument as above, all gadgets V_r(y_{j'}) + τ with j' > j + 1 are in directed Hausdorff distance at most 1 from A.

As the points of the Undirected Gadget U + τ are closer to A by a distance of almost 1/2 than the Vector Gadgets in B + τ, also δ⃗H(U + τ, A) ≤ 1. It remains to show that T + τ is in distance at most 1 from A. As the left part of T and V_r(x_i) are vertically aligned, we only have to check the horizontal distance. The horizontal distance is

−1 − ½ε − (−2 + nε − (j + ½)ε) = 1 − (n − j)ε ≤ 1

for any j ∈ [n]. Similarly, the distance of the right part of the Translation Gadget from the vertically aligned V(1^d) in A is

2 + 2ε − (j + ½)ε − (1 + ½ε) = 1 − (j − 1)ε ≤ 1

for any j ∈ [n]. Thus, by Lemma 3 and Lemma 4, it holds that δ⃗H(T + τ, A) ≤
1. As τ ∈ [−(n + 1)ε − ε, −ε] × [−1/8, 1/8], we know by Lemma 6 that δ⃗H(A, B + τ) ≤ 1. Thus, δTH(A, B) ≤ 1.

⇐: Now, assume that δTH(A, B) ≤ 1 and let τ be any translation for which δ⃗H(B + τ, A) ≤ 1. By Lemma 5, we know that τ_x ∈ [−(n + 1)ε − ε, −ε]. Let V_r(y_j) + τ, V_r(y_{j+1}) + τ be the Vector Gadgets such that V_r(y_j) + τ has directed Hausdorff distance at most 1 to the left Vector Gadgets of A and V_r(y_{j+1}) + τ has directed Hausdorff distance at most 1 to the right Vector Gadgets of A. This is well-defined, as the left Vector Gadgets of A and the right Vector Gadgets of A are in distance at least 2 from each other, and thus no Vector Gadget of B + τ can be in distance at most 1 from both sides. Furthermore, as τ_x ≤ −ε, there has to be a Vector Gadget V_r(y_j) + τ that has directed Hausdorff distance at most 1 to the left Vector Gadgets of A: already for j = 1 the gadget V_r(y_1) + τ is within horizontal distance 1 + ½ε of the left Vector Gadgets and hence too far from the right Vector Gadgets. If j = n, then V_r(y_{j+1}) + τ is undefined.

As δ⃗H(B + τ, A) ≤
1, we know that V_r(y_j) + τ has directed Hausdorff distance at most 1 to a gadget V_r(x) for some x ∈ X. We claim that this distance cannot be smaller than 1, as V_r(y_{j+1}) + τ must have directed Hausdorff distance at most 1 from the right side of A or, in case j = n, due to the restrictions imposed by the Translation Gadget. Let us consider the case j ≠ n first. Any translation τ which places V_r(y_{j+1}) + τ in directed Hausdorff distance at most 1 from the right side of A needs to fulfill

1 + ½ε − ((j + 1)ε + τ_x) ≤ 1, i.e., τ_x ≥ −(j + ½)ε,

using the fact that each vector in Y contains at least one 0-entry. This, on the other hand, implies that V_r(y_j) + τ is in Hausdorff distance at least

jε − (j + ½)ε − (−1 − ½ε) = 1

from V_r(x). Now consider the case j = n. By Lemma 5 and using the fact that each vector in Y contains at least one 0-entry, it follows that V_r(y_n) + τ is in Hausdorff distance at least

nε − (n + ½)ε − (−1 − ½ε) = 1

from V_r(x).

By the arguments above, the two gadgets V_r(y_j) + τ and V_r(x) have to be horizontally aligned as required by Lemma 3. They also have to be vertically aligned, as a vertical deviation would incur a Hausdorff distance larger than 1 for the pair of points in the two gadgets that are in horizontal distance 1. Then, applying Lemma 3, it follows that x and y_j are orthogonal.

It remains to argue why the above reduction implies the lower bound stated in the theorem. Assume we have an algorithm that computes the Hausdorff distance under translation for L1 or L∞ in time O((mn)^{1−γ}) for some γ >
0. Then, given an Orthogonal Vectors instance
X, Y with |X| = m and |Y| = n, we can use the described reduction to obtain an equivalent Hausdorff under translation instance with point sets A, B of size |A| = O(md) and |B| = O(nd) and solve it in time O((mn)^{1−γ} poly(d)), contradicting the Orthogonal Vectors Hypothesis. ◀

Extension to L_p norms

We believe that we can extend the above construction such that it works for all L_p norms with p ≠ ∞ by changing the spacing between the 0- and 1-points of the Vector Gadgets and setting ε accordingly. The proofs should then be analogous to the L1 case.

n^{2−o(1)} lower bound for m = O(1)

We now present a hardness result for the unbalanced case of the directed and undirected Hausdorff distance under translation. We base our hardness on another popular hypothesis of fine-grained complexity theory: the
3SUM Hypothesis. Before stating the hypothesis, let us first introduce the problem.
▶ Definition 8 (3SUM). Given three sets of positive integers
X, Y, Z, all of size n, do there exist x ∈ X, y ∈ Y, z ∈ Z such that x + y = z?

The corresponding hardness assumption is the 3SUM Hypothesis.

Figure 3
The A_l set of the low-level gadget of the reduction, which is used to build the high-level gadgets. We only show the rightmost part of the gadget; the remainder is similar.

▶ Definition 9 (3SUM Hypothesis). There is no O(n^{2−ε}) algorithm for 3SUM, for any ε > 0.

There are several equivalent variants of the 3SUM problem. Most important for us is the convolution problem, abbreviated as
Conv3SUM [25, 14].
▶ Definition 10 (Conv3SUM). Given a sequence of positive integers X = (x_1, ..., x_n) of size n, do there exist i, j such that x_i + x_j = x_{i+j}?
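The trivial quadratic algorithm for Conv3SUM simply tries all pairs of indices. A Python sketch for illustration (0-indexed, so x[a] plays the role of x_{a+1} in the definition):

```python
def conv3sum(x):
    """Trivial O(n^2) check: is there a pair i, j (1-indexed) with x_i + x_j = x_{i+j}?
    With 0-based indices a = i - 1 and b = j - 1, the target x_{i+j} sits at a + b + 1."""
    n = len(x)
    return any(x[a] + x[b] == x[a + b + 1]
               for a in range(n) for b in range(n) if a + b + 1 < n)
```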
Conv3Sum are equivalent, a lower boundconditional on
Conv3Sum implies a lower bound conditional on .Therefore, given a
Conv3Sum instance defined by the set of integers X with | X | = n ,we create an equivalent instance of the directed Hausdorff distance under translation for L by constructing two sets of points A and B with | A | = O ( n ) and | B | = O (1) and providinga decision distance δ . Intuitively, we define a low-level gadget from which we build threehigh-level gadgets by rotation and scaling. Recall that in the Conv3Sum problem we haveto find values i, j which fulfill the equation x i + x j = x i + j . We encode the choice of thesetwo values into the two dimensions of the translation. These three high-level gadgets thenverify if the Conv3Sum equation is fulfilled. In the remainder of this section, we present thedetails of our reduction and prove that it implies the claimed lower bound.
Given an integer
Conv3SUM instance with X ⊂ [M], where n = |X|, we now describe the construction of the Hausdorff distance under translation instance with point sets A, B and threshold distance δ. We use a small enough ε > 0 (polynomially small in 1/(Mn)) as the value for microtranslations. Furthermore, we set δ = 1 + 4n²ε². The additional 4n²ε² term compensates for the small variations in distance that occur under microtranslations due to the curvature of the L2-ball.

We use a single low-level gadget, which is then scaled and rotated to obtain the high-level gadgets. This gadget consists of two point sets A_l and B_l. The point set A_l contains what we call number points p_i^1, p_i^2 and filling points q_i for 0 ≤ i < n. The set B_l just contains two points: r_1 and r_2. The number points p_i^1, p_i^2 encode the number x_i, while the filling points make sure that no other translations than the desired ones are possible. See Figure 3 for an overview. All of the points in this gadget are of the form (x, 0). The number points are

p_i^1 = (2iε + x_i·ε^{1.5}, 0),  p_i^2 = p_i^1 + (ε, 0)
Let ( p x , p y ) , ( q x , q y ) ∈ R be two points with | p x − q x | ∈ [ , and p y = q y .For any τ ∈ [0 , (2 n − (cid:15) ] , we have | p x − ( q x + τ x ) | ≤ k p − ( q + τ ) k ≤ | p x − ( q x + τ x ) | + 4 n (cid:15) . Proof.
As each component is a lower bound to the L norm, the first inequality follows.Thus, let us prove the second inequality. We first transform k p − ( q + τ ) k = q ( p x − ( q x + τ x )) + τ y = | p x − ( q x − τ x ) | q τ y / ( p x − ( q x + τ x )) . Because √ x ≤ x for any x ≥
0, we have k p − ( q + τ ) k ≤ | p x − ( q x − τ x ) | + τ y / (2 | p x − ( q x − τ x ) | ) . As τ y ≤ n − (cid:15) and | p x − ( q x − τ x ) | ≥ , we obtain the desired upper bound. (cid:74) An analogous statement holds when swapping the x and y coordinates. Note that the 4 n (cid:15) term also occurs in the value of δ that we chose, as this is how we compensate for theseerrors in our construction. While we have to consider this error in the following arguments,it already seems that it will be insignificant due to its magnitude.We now state two lemmas which show how the Hausdorff distance under translationdecision problem is related to the structure of the low-level gadget. (cid:73) Lemma 12.
Given a low-level gadget A l , B l as constructed above and the translation beingrestricted to τ ∈ [0 , (2 n − (cid:15) ] , it holds that if δ ~H ( A l , B l + τ ) ≤ δ , then ∃ i ∈ N : τ x = 2 i(cid:15) + x i (cid:15) . ± n (cid:15) . Proof.
Let τ ∈ [0 , (2 n − (cid:15) ] and assume δ ~H ( A l , B l + τ ) ≤ δ . Then all points in A l are indistance at most δ from one of the two points in B l . Furthermore, both points in B l + τ alsohave at least one close point in A l , as k r + τ − p k ≤ − τ x +4 n (cid:15) ≤ δ and k r + τ − q n − k ≤ τ x − (2 n −
12 ) (cid:15) +4 n (cid:15) < δ, using Lemma 11.The gaps between neighboring points in A l either have width close to (cid:15) , if the gap isbetween a number point and a filling point ( p i and q i − , or p i and q i ), or they have a widthof (cid:15) , if the gap is between two number points ( p i and p i ). Furthermore, the two points in B l have distance 2 + (cid:15) , so there is an (cid:15) − n (cid:15) gap between their δ -balls. Thus, there isan i such that p i has distance at most δ to r , and p i has distance at most δ to r . Thisalignment of the gadgets can only be realized by a translation τ for which τ x = 2 i(cid:15) + x i (cid:15) . ± n (cid:15) , which completes the proof. (cid:74)(cid:73) Lemma 13.
Given a low-level gadget A_l, B_l as constructed above and the translation being restricted to τ ∈ [0, (2n − 1)ε]², it holds that if there exists i ∈ N with τ_x = 2iε + x_i·ε^{1.5}, then δH(A_l, B_l + τ) ≤ δ.

Proof.
Let i ∈ N and let τ_x = 2iε + x_i·ε^{1.5}. Consider any translation τ ∈ {τ_x} × [0, (2n − 1)ε]. Due to the restricted translation and Lemma 11, we can disregard the error terms that arise from the vertical translation τ_y, as they are compensated for by δ. Then all the points in A_l before and including p_i^1 are in distance at most δ from r_1 ∈ B_l + τ, and all the points afterwards are in distance at most δ from r_2 ∈ B_l + τ. Clearly, both points in B_l + τ then also have points from A_l in distance δ, and thus δH(A_l, B_l + τ) ≤ δ. ◀

This construction is inspired by the hard instance that was given in [27]. We want to obtain a grid of translations of spacing ε with some microtranslations in the O(ε^{1.5}) range. We already defined the low-level gadget above, and we now define the high-level gadgets.

Column Gadget
The column gadget induces columns in translational space, i.e., it enforces that valid translations have to lie on one of these columns. The column gadget is actually the low-level gadget we already described above. You can see a sketch of this gadget in Figure 4a. To semantically distinguish it from the low-level gadget, we refer to the point sets of the column gadget as A_c and B_c.

Row Gadget
The row gadget induces rows in translational space, i.e., it enforces that valid translations have to lie on one of these rows. We obtain the row gadget by rotating all points in the low-level gadget around the origin by π/2. You can see a sketch of this gadget in Figure 4b. We call the point sets of the row gadget A_r and B_r.

Diagonal Gadget
The diagonal gadget induces diagonals in translational space, i.e., it enforces that valid translations have to lie on one of these diagonals. As opposed to the column and row gadget, the diagonal gadget also has to be scaled. We scale the sets A_l and B_l separately: we scale A_l such that the gap between the number point pairs p_i, p_{i+1} becomes √2 ε, and we scale B_l such that the gap between its two points becomes 2 + √2 ε. After scaling, we rotate the points counterclockwise around the origin by π/4. You can see a sketch of this gadget in Figure 4c. We call the point sets of the diagonal gadget A_d and B_d.
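As a small sanity check (not part of the construction), the effect of the π/4 rotation can be verified numerically. Since the L_2 norm is rotation-invariant, translating the rotated gadget by τ is equivalent to translating the unrotated gadget by the back-rotated vector, whose first coordinate is (τ_x + τ_y)/√2 — this is precisely why the diagonal gadget constrains the sum τ_x + τ_y. A minimal sketch (the helper name `rotate` is ours):

```python
import math

def rotate(p, theta):
    """Rotate point p counterclockwise around the origin by angle theta."""
    c, s = math.cos(theta), math.sin(theta)
    return (c * p[0] - s * p[1], s * p[0] + c * p[1])

# Translating a gadget that was rotated by pi/4 acts along the gadget's
# original axis with magnitude (tau_x + tau_y) / sqrt(2):
tau = (0.3, 0.5)
back = rotate(tau, -math.pi / 4)
assert abs(back[0] - (tau[0] + tau[1]) / math.sqrt(2)) < 1e-12
```

Note that this argument only works in L_2; the row and column gadgets, in contrast, do not rely on rotation invariance.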
Figure 4 Three of the high-level gadgets: (a) column gadget, (b) row gadget, (c) diagonal gadget. The points of A are all in the low-level gadgets, while the points in B are explicitly shown including their δ-balls.

Translation Gadget
To restrict the translations for the directed Hausdorff distance under translation, we introduce another gadget. The first set of points A_t contains the four points z_l := (−1 + (2n − 1)ε, 0), z_r := (1, 0), z_b := (0, −1 + (2n − 1)ε), and z_t := (0, 1). The second point set B_t only contains the origin z_c := (0, 0).

▶ Lemma 14.
Given τ ∈ [0, (2n − 1)ε]², it holds that δ_H(A_t, B_t + τ) ≤ δ.

Proof. As A_t has a point on each side of z_c, clearly δ⃗_H(B_t + τ, A_t) ≤ δ. Furthermore, ‖z_l − (z_c + τ)‖ ≤ 1 + 4nε² ≤ δ and ‖z_r − (z_c + τ)‖ ≤ δ, using Lemma 11. Analogous statements hold for z_b and z_t. Thus, also δ⃗_H(A_t, B_t + τ) ≤ δ. ◀

To obtain the final sets of the reduction, we now place all four described high-level gadgets (i.e., column gadget, row gadget, diagonal gadget, and translation gadget) far enough apart. More explicitly, the point sets A, B of the Hausdorff distance under translation instance are defined as

A := A_c ∪ (A_r + (10, 0)) ∪ (A_d + (20, 0)) ∪ (A_t + (30, 0)),
B := B_c ∪ (B_r + (10, 0)) ∪ (B_d + (20, 0)) ∪ (B_t + (30, 0)).

The far placement ensures that the two point sets of the respective gadgets have to be matched to each other for a decision distance δ. First, we want to ensure that everything relevant happens in a very small range of translations.
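The effect of the far placement can be illustrated with a brute-force Hausdorff distance implementation (quadratic time, for illustration only; the helper names and toy coordinates are ours): when the gadget diameters and within-gadget distances are tiny compared to the offsets, the Hausdorff distance of the unions is attained inside a single gadget pair, never across gadgets.

```python
import math

def directed_hausdorff(A, B):
    """max over a in A of the L2 distance to the nearest b in B."""
    return max(min(math.dist(a, b) for b in B) for a in A)

def hausdorff(A, B):
    """Undirected Hausdorff distance: max of the two directed distances."""
    return max(directed_hausdorff(A, B), directed_hausdorff(B, A))

def shift(P, t):
    return [(px + t[0], py + t[1]) for (px, py) in P]

# Two toy "gadgets" placed far apart: the distance of the unions equals
# the maximum over the individual gadget pairs.
A1, B1 = [(0.0, 0.0)], [(0.5, 0.0)]
A2, B2 = [(0.0, 0.0), (1.0, 0.0)], [(0.2, 0.0)]
A = A1 + shift(A2, (30.0, 0.0))
B = B1 + shift(B2, (30.0, 0.0))
assert math.isclose(hausdorff(A, B),
                    max(hausdorff(A1, B1), hausdorff(A2, B2)))
```

This matching-per-gadget behavior is what lets the proofs below argue about each gadget pair separately.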
▶ Lemma 15. Let τ ∈ ℝ². If δ⃗_H(A, B + τ) ≤ δ, then τ ∈ [0, (2n − 1)ε]².

Proof.
Note that for a Hausdorff distance of at most δ, the sets A_c and B_c have to be matched to each other, and analogously for A_r, B_r, and A_d, B_d, and A_t, B_t. To show the contrapositive, now assume τ ∉ [0, (2n − 1)ε]². For simplicity, we refer to the points in the high-level gadgets with the notation of the low-level gadget. Additionally, due to the translation gadget, we have ‖z_l − (z_c + τ)‖ > δ for τ_x > (2n − 1)ε + 4nε², and ‖z_r − (z_c + τ)‖ > δ for τ_x < −4nε². We now show that for the remaining translations outside [0, (2n − 1)ε]², some point of A_c has distance more than δ to both points r_1, r_2 of B_c + τ, contradicting δ⃗_H(A, B + τ) ≤ δ. In the column gadget, for τ_x ∈ [−4nε², 0) we have ‖(r_1 + τ) − p_0‖ ≥ |−1 − (p_0)_x + τ_x| > δ and ‖(r_2 + τ) − p_0‖ ≥ δ + ε − O(ε^{1.5}) > δ, for small enough ε. On the other hand, for τ_x ∈ ((2n − 1)ε, (2n − 1)ε + 4nε²], we have ‖r_2 + τ − q_{n−1}‖ ≥ δ + τ_x − (2n − 1)ε > δ and ‖r_1 + τ − q_{n−1}‖ ≥ δ + ε/2 − O(ε^{1.5}) > δ, for small enough ε. An analogous argument holds for the row gadget and τ_y, as the row gadget is just a rotated version of the column gadget and the translation gadget is symmetric with respect to these gadgets. ◀

We can now prove the main result of this section.
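Before turning to the main result, the alignment behavior described in Lemmas 12 and 13 can be sanity-checked numerically on a toy variant of the low-level gadget. The constants below (number points at 2iε + x_i ε^{1.5}, filling points at odd multiples of ε, x_i ∈ [0, 1], and decision distance δ := 1 + ε^{1.5}) are illustrative choices of ours and not the exact constants of the construction:

```python
import math

def hausdorff(A, B):
    """Brute-force undirected Hausdorff distance in L2."""
    d = lambda P, Q: max(min(math.dist(p, q) for q in Q) for p in P)
    return max(d(A, B), d(B, A))

def translated(P, t):
    return [(px + t[0], py + t[1]) for (px, py) in P]

eps = 0.01
x = [0.1, 0.7, 0.4, 0.9]      # input numbers, encoded as microtranslations
n = len(x)
delta = 1 + eps ** 1.5        # decision distance, with slack for micro-offsets

# Toy low-level gadget: number points p_i and filling points q_i on a line,
# and two far points r_1, r_2 whose delta-balls leave a gap of width ~eps.
A_l = [(2 * i * eps + x[i] * eps ** 1.5, 0.0) for i in range(n)]
A_l += [((2 * i + 1) * eps, 0.0) for i in range(n)]
B_l = [(-1.0, 0.0), (1.0 + eps, 0.0)]

# Aligned translations tau_x = 2*i*eps + x_i*eps^1.5 keep the distance
# at most delta ...
for i in range(n):
    tau = (2 * i * eps + x[i] * eps ** 1.5, 0.0)
    assert hausdorff(A_l, translated(B_l, tau)) <= delta

# ... while a translation between two aligned positions exceeds delta,
# because a filling point falls into the gap between the delta-balls.
tau_bad = (2 * eps + eps / 2, 0.0)
assert hausdorff(A_l, translated(B_l, tau_bad)) > delta
```

In the actual construction the alignment window is pinned down to ±4nε², which is what makes the microtranslations x_i ε^{1.5} readable from the translation.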
▶ Theorem 16. Computing the directed or undirected Hausdorff distance under translation in L_2 for two sets of size n and O(1) cannot be solved in time O(n^{2−γ}) for any γ > 0, unless the 3SUM Hypothesis fails.
Proof.
We construct a Hausdorff distance under translation instance from a Conv3SUM instance as described previously in this section, and then show that they are equivalent. We first consider how to apply Lemma 12 and Lemma 13 to the diagonal gadget. More precisely, we consider which translations align the gaps of A_d and B_d as used in these two lemmas. Due to the scaling of the gadget, these translations have a component of (2kε + x_k ε^{1.5})/√2 along the axis of the unrotated gadget. By the rotation, we then obtain that a translation τ aligns the diagonal gadget if and only if τ_x + τ_y = 2kε + x_k ε^{1.5} (up to the usual error terms).

⇐: Assume X is a positive Conv3SUM instance. Then there exist x_i, x_j such that x_i + x_j = x_{i+j}. Consider τ = (2iε + x_i ε^{1.5}, 2jε + x_j ε^{1.5}) as translation. Due to Lemma 13, we have that δ_H(A_c, B_c + τ) ≤ δ and analogously δ_H(A_r, B_r + τ) ≤ δ. By the initial observation we can also apply Lemma 13 to the diagonal gadget, as τ_x + τ_y = 2(i + j)ε + x_{i+j} ε^{1.5}, and thus δ_H(A_d, B_d + τ) ≤ δ. Finally, by Lemma 14, we also have δ_H(A_t, B_t + τ) ≤ δ for the given τ.

⇒: Assume that the directed Hausdorff distance under translation of A, B is at most δ, i.e., there is a translation τ with δ⃗_H(A, B + τ) ≤ δ. From Lemma 15, it follows that τ ∈ [0, (2n − 1)ε]². Then, due to Lemma 12 and the initial observation about the diagonal gadget, there exist i, j, k that fulfill τ_x = 2iε + x_i ε^{1.5} ± 4nε² and τ_y = 2jε + x_j ε^{1.5} ± 4nε² and τ_x + τ_y = 2kε + x_k ε^{1.5} ± 4nε². It follows that

2iε + x_i ε^{1.5} + 2jε + x_j ε^{1.5} ± 8nε² = 2kε + x_k ε^{1.5} ± 4nε²,

and thus i + j = k and x_i + x_j = x_k, as the error terms are too small to bridge distinct grid positions or microtranslations.

It remains to argue why the above reduction implies the lower bound stated in the theorem. Assume we have an algorithm that computes the Hausdorff distance under translation in L_2 in time O(n^{2−γ}) for some γ > 0. Then, given a Conv3SUM instance X with |X| = n, we can use the described reduction to obtain an equivalent Hausdorff distance under translation instance with point sets A, B of size |A| = O(n) and |B| = O(1), and solve it in time O(n^{2−γ}), contradicting the 3SUM Hypothesis. ◀
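For reference, the Conv3SUM problem underlying this reduction asks, given numbers x_0, …, x_{n−1}, whether there are indices i, j with x_i + x_j = x_{i+j}; by [25], solving it in strongly subquadratic time would refute the 3SUM Hypothesis. A brute-force quadratic decider (our own illustration):

```python
def conv3sum(x):
    """Brute-force Conv3SUM: is there a pair i, j with x[i] + x[j] == x[i + j]?"""
    n = len(x)
    return any(x[i] + x[j] == x[i + j]
               for i in range(n) for j in range(n) if i + j < n)

assert conv3sum([1, 2, 4])        # x[1] + x[1] == x[2]
assert not conv3sum([1, 2, 3])    # no valid pair of indices exists
```

The reduction above turns such an instance into point sets of size O(n) and O(1), so any O(n^{2−γ})-time Hausdorff-under-translation algorithm would decide Conv3SUM in subquadratic time.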
Conclusion

In this work, we provide matching lower bounds for the running time of two important cases of the fundamental distance measure Hausdorff distance under translation. These lower bounds are based on popular standard hypotheses from fine-grained complexity theory. Interestingly, we use two different hypotheses to show hardness. For the Hausdorff distance under translation in L_1 and L_∞, we show a lower bound of (nm)^{1−o(1)} using the Orthogonal Vectors Hypothesis, while for the imbalanced case of m = O(1) in L_2, we show an n^{2−o(1)} lower bound using the 3SUM Hypothesis. We leave it as an open problem whether the Hausdorff distance under translation in L_2 for the balanced case admits a strongly subcubic algorithm or whether conditional hardness can be shown.
References

[1] Amir Abboud, Arturs Backurs, Karl Bringmann, and Marvin Künnemann. Fine-grained complexity of analyzing compressed data: Quantifying improvements over decompress-and-solve. In FOCS 2017, pages 192–203. IEEE Computer Society, 2017. doi:10.1109/FOCS.2017.26.
[2] Amir Abboud, Virginia Vassilevska Williams, and Oren Weimann. Consequences of faster alignment of sequences. In ICALP 2014, Proceedings, Part I, volume 8572 of Lecture Notes in Computer Science, pages 39–51. Springer, 2014. doi:10.1007/978-3-662-43948-7_4.
[3] Helmut Alt and Michael Godau. Computing the Fréchet distance between two polygonal curves. Int. J. Comput. Geometry Appl., 5:75–91, 1995. doi:10.1142/S0218195995000064.
[4] Amihood Amir, Timothy M. Chan, Moshe Lewenstein, and Noa Lewenstein. On hardness of jumbled indexing. In ICALP 2014, Proceedings, Part I, volume 8572 of Lecture Notes in Computer Science, pages 114–125. Springer, 2014. doi:10.1007/978-3-662-43948-7_10.
[5] Chanderjit Bajaj. The algebraic degree of geometric optimization problems. Discrete & Computational Geometry, 3(2):177–191, 1988.
[6] Gill Barequet and Sariel Har-Peled. Polygon containment and translational min-Hausdorff-distance between segment sets are 3SUM-hard. International Journal of Computational Geometry & Applications, 11(4):465–474, 2001. doi:10.1142/S0218195901000596.
[7] Karl Bringmann. Why walking the dog takes time: Fréchet distance has no strongly subquadratic algorithms unless SETH fails. In FOCS 2014, pages 661–670. IEEE Computer Society, 2014. doi:10.1109/FOCS.2014.76.
[8] Karl Bringmann and Marvin Künnemann. Multivariate fine-grained complexity of longest common subsequence. In SODA 2018, pages 1216–1235. SIAM, 2018. doi:10.1137/1.9781611975031.79.
[9] Karl Bringmann, Marvin Künnemann, and André Nusser. Fréchet distance under translation: Conditional hardness and an algorithm via offline dynamic grid reachability. In SODA 2019, pages 2902–2921. SIAM, 2019. doi:10.1137/1.9781611975482.180.
[10] Karl Bringmann, Marvin Künnemann, and André Nusser. When Lipschitz walks your dog: Algorithm engineering of the discrete Fréchet distance under translation. In ESA 2020, volume 173 of LIPIcs, pages 25:1–25:17. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2020. doi:10.4230/LIPIcs.ESA.2020.25.
[11] Karl Bringmann and Wolfgang Mulzer. Approximability of the discrete Fréchet distance. J. Comput. Geom., 7(2):46–76, 2016. doi:10.20382/jocg.v7i2a4.
[12] Kevin Buchin, Anne Driemel, Natasja van de L'Isle, and André Nusser. klcluster: Center-based clustering of trajectories. In SIGSPATIAL 2019, pages 496–499. ACM, 2019. doi:10.1145/3347146.3359111.
[13] Kevin Buchin, Tim Ophelders, and Bettina Speckmann. SETH says: Weak Fréchet distance is faster, but only if it is continuous and in one dimension. In SODA 2019, pages 2887–2901. SIAM, 2019. doi:10.1137/1.9781611975482.179.
[14] Timothy M. Chan and Qizheng He. Reducing 3SUM to Convolution-3SUM. In SOSA 2020, pages 1–7. SIAM, 2020. doi:10.1137/1.9781611976014.1.
[15] L. Paul Chew and Klara Kedem. Improvements on geometric pattern matching problems. In Algorithm Theory — SWAT '92, Lecture Notes in Computer Science, pages 318–325. Springer, 1992.
[16] Mark de Berg, Atlas F. Cook, and Joachim Gudmundsson. Fast Fréchet queries. Computational Geometry, 46(6):747–755, 2013. doi:10.1016/j.comgeo.2012.11.006.
[17] A. Efrat, A. Itai, and M. J. Katz. Geometry helps in bottleneck matching and related problems. Algorithmica, 31(1):1–28, 2001. doi:10.1007/s00453-001-0016-8.
[18] Andriy Fedorov, Eric Billet, Marcel Prastawa, Guido Gerig, Alireza Radmanesh, Simon K. Warfield, Ron Kikinis, and Nikos Chrisochoides. Evaluation of brain MRI alignment with the robust Hausdorff distance measures. In International Symposium on Visual Computing, pages 594–603. Springer, 2008.
[19] Anka Gajentaan and Mark H. Overmars. On a class of O(n²) problems in computational geometry. Comput. Geom., 5:165–185, 1995. doi:10.1016/0925-7721(95)00022-2.
[20] Felix Hausdorff. Grundzüge der Mengenlehre. von Veit, 1914.
[21] Daniel P. Huttenlocher, Klara Kedem, and Micha Sharir. The upper envelope of Voronoi surfaces and its applications. Discrete & Computational Geometry, 9(3):267–291, 1993.
[22] Russell Impagliazzo, Ramamohan Paturi, and Francis Zane. Which problems have strongly exponential complexity? J. Comput. Syst. Sci., 63(4):512–530, 2001. doi:10.1006/jcss.2001.1774.
[23] James King. A survey of 3SUM-hard problems. Manuscript, 2004.
[24] Meinard Müller. Information Retrieval for Music and Motion. Springer, 2007. doi:10.1007/978-3-540-74048-3.
[25] Mihai Patrascu. Towards polynomial lower bounds for dynamic problems. In STOC 2010, pages 603–610. ACM, 2010. doi:10.1145/1806689.1806772.
[26] Günter Rote. Computing the minimum Hausdorff distance between two point sets on a line under translation. Information Processing Letters, 38(3):123–127, 1991. doi:10.1016/0020-0190(91)90233-8.
[27] W. J. Rucklidge. Lower bounds for the complexity of the graph of the Hausdorff distance as a function of transformation. Discrete & Computational Geometry, 16(2):135–153, 1996. doi:10.1007/BF02716804.
[28] Virginia Vassilevska Williams. On some fine-grained questions in algorithms and complexity. In Proc. ICM, volume 3, pages 3431–3472. World Scientific, 2018.
[29] Ryan Williams. A new algorithm for optimal 2-constraint satisfaction and its implications. Theor. Comput. Sci., 348(2-3):357–365, 2005. doi:10.1016/j.tcs.2005.09.023.