Monochromatic Triangles, Triangle Listing and APSP
Virginia Vassilevska Williams [email protected]    Yinzhan Xu [email protected]
Abstract
All-Pairs Shortest Paths (APSP) is one of the most basic problems in graph algorithms. In one of the most general variants of the problem, one is given an n-node directed or undirected graph with integer weights in {−n^c, ..., n^c} and no negative cycles, and one needs to compute the shortest paths distance between every pair of vertices. A central question in graph algorithms is how fast APSP can be solved. The fastest known algorithm runs in n^3/2^{Θ(√log n)} time [Williams'14], and no truly subcubic time algorithms are known. One of the main hypotheses in fine-grained complexity is that this problem requires n^{3−o(1)} time. Another famous hypothesis in fine-grained complexity is that the 3SUM problem for n integers (which can be solved in O(n^2) time) requires n^{2−o(1)} time. Although there are no direct reductions between 3SUM and APSP, it is known that they are related: there is a problem, (min,+)-convolution, that reduces in a fine-grained way to both, and a problem, Exact Triangle, that both fine-grained reduce to.

In this paper we find more relationships between these two problems and other basic problems. Pătraşcu had shown that under the 3SUM hypothesis the All-Edges Sparse Triangle problem in m-edge graphs requires m^{4/3−o(1)} time. The latter problem asks to determine for every edge e whether e is in a triangle. It is equivalent to the problem of listing m triangles in an m-edge graph where m = Õ(n^{1.5}), and can be solved in O(m^{1.41}) time [Alon et al.'97] with the current matrix multiplication bounds, and in Õ(m^{4/3}) time if ω = 2.

We show that one can reduce Exact Triangle to All-Edges Sparse Triangle, showing that All-Edges Sparse Triangle (and hence Triangle Listing) requires m^{4/3−o(1)} time also assuming the APSP hypothesis. This allows us to provide APSP-hardness for many dynamic problems that were previously known to be hard under the 3SUM hypothesis.

We also consider the previously studied All-Edges Monochromatic Triangle problem.
Via work of [Lincoln et al.'20], our result on All-Edges Sparse Triangle implies that if the All-Edges Monochromatic Triangle problem has an O(n^{2.5−ε}) time algorithm for ε > 0, then both the APSP and 3SUM hypotheses are false. The fastest algorithm for All-Edges Monochromatic Triangle runs in Õ(n^{(3+ω)/2}) time [Vassilevska et al.'06], and our new reduction shows that if ω = 2, this algorithm is best possible, unless 3SUM or APSP can be solved faster. Besides 3SUM, previously the only problems known to be fine-grained reducible to All-Edges Monochromatic Triangle were the seemingly easier problems directed unweighted APSP and Min-Witness Product [Lincoln et al.'20]. Our new reduction shows that this problem is much harder. We also connect the problem to other "intermediate" problems, whose runtimes are between O(n^ω) and O(n^3), such as the Max-Min Product problem.

1 Introduction
All-Pairs Shortest Paths (APSP) is one of the most fundamental problems in graph algorithms. In one of the most general variants of the problem, one is given an n-node directed or undirected graph with integer weights in {−n^c, ..., n^c} and no negative cycles, and one needs to compute the shortest paths distance between every pair of vertices. A central question in graph algorithms is how fast APSP in n-node graphs can be solved. The fastest known algorithm runs in n^3/2^{Θ(√log n)} time [29, 30], and no O(n^{3−ε}) time algorithms for ε > 0, so-called truly subcubic algorithms, are known.

One of the main hypotheses in fine-grained complexity, the APSP hypothesis, is that APSP requires n^{3−o(1)} time. Another famous hypothesis is the 3SUM hypothesis, that the 3SUM problem for n integers in {−n^c, ..., n^c} requires n^{2−o(1)} time. 3SUM asks if three of the integers sum to 0, and can be solved in O(n^2) time. Although there are no direct reductions between 3SUM and APSP, it is known that they are related, in a sense much more related than to the third core problem of fine-grained complexity, Orthogonal Vectors (OV). For instance, the (min,+)-convolution problem is known to be fine-grained reducible to both 3SUM and APSP, so that if APSP is in truly subcubic time, or 3SUM is in truly subquadratic time, then (min,+)-convolution is in truly subquadratic time [6, 21, 22, 31, 32, 33]. Further, the Exact Triangle problem is known to be harder than both APSP and 3SUM [31, 33], so that if it has a truly subcubic time algorithm, then both the APSP and the 3SUM hypotheses would be false. Meanwhile, it is not known how 3SUM and APSP (or (min,+)-convolution and Exact Triangle) are related to OV.

In this paper we provide more relationships to other problems that 3SUM and APSP have in common. Pătraşcu [21] showed that 3SUM can be reduced in truly subquadratic time to the All-Edges Sparse Triangle problem of determining for every edge e in an n-node, m = Õ(n^{1.5})-edge graph, whether e is in a triangle. Pătraşcu's reduction implies that under the 3SUM hypothesis, All-Edges Sparse Triangle requires m^{4/3−o(1)} time. All-Edges Sparse Triangle is known to be equivalent to the problem of listing m triangles in an m-edge graph where m = Õ(n^{1.5}) [10], and can be solved in O(m^{1.41}) time [4] with the current matrix multiplication bounds [11, 26], and in Õ(m^{4/3}) time if the exponent of square matrix multiplication ω is 2.

Our main result is a reduction from Exact Triangle to All-Edges Sparse Triangle, thus showing that All-Edges Sparse Triangle (and hence Triangle Listing) requires m^{4/3−o(1)} time also assuming the APSP hypothesis. This allows us to provide APSP-hardness for many dynamic problems.

We also consider the previously studied All-Edges Monochromatic Triangle problem (AE-Mono∆) [18, 23, 25], in which one is given an n-node graph G with colors on the edges, and one is asked to return for every edge e whether it appears in a monochromatic triangle in G. The fastest algorithm for AE-Mono∆ runs in Õ(n^{(3+ω)/2}) time [23, 25]. Via work of [18], our reduction from Exact Triangle to All-Edges Sparse Triangle implies that if AE-Mono∆ has an O(n^{2.5−ε}) time algorithm for ε > 0, then both the APSP and 3SUM hypotheses are false. This shows that if ω = 2, the known algorithm for AE-Mono∆ is best possible, unless 3SUM and APSP can both be solved faster.

[18] showed that 3SUM is in fact fine-grained equivalent to the Monochromatic Convolution problem, which is the convolution version of AE-Mono∆. These two latter problems are very related (e.g. the known algorithms for them are analogous, the only difference being the use of FFT vs. fast matrix multiplication), to the extent that one might conjecture that they are fine-grained equivalent. If this bold conjecture were true, then 3SUM would be equivalent to both problems, and thus APSP would reduce to 3SUM. We leave determining the veracity of this conjecture to future work.

Besides 3SUM, previously the only problems known to be fine-grained reducible to AE-Mono∆ were the seemingly easier problems directed unweighted APSP and Min-Witness Product [18]. Our new reduction shows that this problem is much harder. We also connect the problem to multiple so-called "intermediate" problems, whose runtimes are between O(n^ω) and O(n^3), such as the Max-Min Product problem that has been studied in relation to All-Pairs Bottleneck Paths [9, 24].

[Footnotes: All of the hypotheses are for the Word-RAM model of computation with O(log n) bit words. The Min-Weight k-Clique problem can be reduced to both APSP and moderate-dimension OV [1], but it is not known whether it can be reduced to OV. The Õ notation in this paper hides poly-logarithmic factors.]

Here we give more details about the results summarized above.
Reductions from Exact Triangle.
The Exact Triangle problem asks, given an n-node graph with integer edge weights in {−n^c, ..., n^c} for some constant c and a target T, whether there are three vertices p, q, r so that w(p,q) + w(q,r) + w(r,p) = T. Exact Triangle is equivalent to the version of Exact Triangle in which the target T is 0 [33]. This is called the Zero Triangle problem (also known as Zero-3-Clique or Love Triangle).

The brute-force algorithm for Exact Triangle runs in O(n^3) time. Meanwhile, as mentioned earlier, an O(n^{3−ε}) time algorithm for ε > 0 for Exact Triangle would violate both the APSP hypothesis and the 3SUM hypothesis. The Exact Triangle hypothesis states that Exact Triangle requires n^{3−o(1)} time in the word-RAM model of computation with O(log n) bit words. This hypothesis is at least as believable as the hypothesis that at least one of the 3SUM and APSP hypotheses is true.

Our main result is a reduction from Exact Triangle to certain unbalanced versions of Triangle Listing and All-Edges Triangle Listing in sparse graphs, which can then easily be reduced to other problems, including All-Edges Sparse Triangle, AE-Mono∆, SetDisjointness and SetIntersection.

In our unbalanced triangle problems there are five parameters, α, β, γ, ρ, t. One is given a tripartite graph where the three parts have n^α, n^β, n^γ vertices, respectively. The fourth parameter ρ controls the edge density of the graph. Roughly speaking, each vertex in the graph has an n^{−ρ} fraction of the vertices as its neighbors. In the parameterized version of All-Edges Triangle Listing one is asked to list, for every edge e, t triangles containing e (or all triangles containing e if there are fewer than t triangles). In our parameterized version of Triangle Listing, one is asked to list t triangles in the graph (or all triangles if there are fewer than t triangles).

The statements of our reductions to the parameterized versions of Triangle Listing and All-Edges Triangle Listing are a bit technical (see Theorem 3.4 in Section 3). We will list some consequences.

Corollary 1.1.
Any n-node instance of Exact Triangle can be reduced in Õ(n^{2.5}) time to Õ(n) instances of All-Edges Sparse Triangle on O(n^{1.5}) edges each. Thus, assuming the Exact Triangle hypothesis, there is no O(m^{4/3−ε}) time algorithm for All-Edges Sparse Triangle for any ε > 0.

Lincoln et al. [18] showed that computing a certain number of independent instances of All-Edges Sparse Triangle is equivalent to AE-Mono∆.

Theorem 1.2. [18] Computing m^{1/3} independent instances of All-Edges Sparse Triangle with m edges each, where the number of vertices in each instance is m^{2/3}, is equivalent up to poly-logarithmic factors to computing an AE-Mono∆ instance where the number of vertices is n = O(m^{2/3}).

Combining Corollary 1.1 and Theorem 1.2, we get the following conditional lower bound for AE-Mono∆.

Corollary 1.3.
Assuming the Exact Triangle hypothesis, there is no O(n^{2.5−ε}) time algorithm for AE-Mono∆ on n-node graphs for any ε > 0.
We also achieve the following conditional lower bound for even sparser triangle detection problems, as another corollary of our main reduction.
Corollary 1.4.
Let A be an algorithm for All-Edges Sparse Triangle for n-node graphs where every node has degree at most d = n^δ for some 0 < δ ≤ 0.5. Assuming the Exact Triangle hypothesis, A cannot run in O(n^{1−ε}d^2) time for any ε > 0.

Many 3SUM-hard problems are now also APSP-hard.
Kopelowitz, Pettie and Porat [14] considered the SetDisjointness and SetIntersection problems. They showed 3SUM hardness for both SetDisjointness and SetIntersection, which were in turn used to show 3SUM hardness for many dynamic graph problems. The SetDisjointness problem can be viewed as an All-Edges Sparse Triangle problem on constrained graphs, and the SetIntersection problem can be viewed as a triangle listing problem. Using our reductions from Exact Triangle to All-Edges Sparse Triangle and Triangle Listing, we show that the SetDisjointness and SetIntersection problems are hard under the Exact Triangle hypothesis. Thus, we immediately get APSP-hardness for a variety of problems that were previously known to be 3SUM-hard, including Dynamic Maximum Cardinality Matching, Incremental Maximum Cardinality Matching, d-Failure Connectivity and Triangle Enumeration in graphs with particular arboricity [14]. One example result is the following.

Corollary 1.5.
Assume the Exact Triangle hypothesis (or any one of the 3SUM hypothesis or the APSP hypothesis). For any constants y ∈ (0, 1/2], x ∈ (0, y], there exists a constant ε > 0 so that for graphs with n vertices, m edges, arboricity α = Θ(n^x) = Θ(m^y), and t < mα^{1−ε} triangles, it requires (mα)^{1−o(1)} time to list all triangles in the graph.

Notably, the SetIntersection problem of Kopelowitz et al. is a generalization of the All-Edges Sparse Triangle problem considered by Pătraşcu [21] and later by Abboud and Williams [2]. Therefore, all conditional lower bound results in [21] and [2] that use All-Edges Sparse Triangle as a source of hardness also hold under the Exact Triangle hypothesis. These results include (but are not limited to) dynamic s-t reachability, dynamic s-t shortest paths, dynamic strong connectivity, subgraph connectivity, Langerman's problem, Pagh's problem and Erickson's problem. An example result from All-Edges Sparse Triangle is the following.

Corollary 1.6.
Assume the Exact Triangle hypothesis (or any one of the 3SUM hypothesis or the APSP hypothesis). Then there is no fully dynamic algorithm for s-t reachability that has O(m^{4/3−ε}) preprocessing time, O(m^{a−ε}) update time, and O(m^{2/3−a−ε}) query time for some ε > 0 and 1/3 ≤ a ≤ 2/3.

Readers interested in reductions from triangle problems to dynamic graph problems can check [2, 14, 21] for more details.
Reductions for intermediate problems.
The notion of "intermediate" problems was first devised by Lincoln et al. [18], who studied problems whose current best running time is Õ(n^{2.5}) if ω = 2. With the current bounds on rectangular matrix multiplication [17], the running times of these problems vary. The easiest "intermediate" problems seem to be the unweighted directed APSP [35] problem and the Min-Witness Product problem [7], whose best algorithms run in Õ(n^{2.529}) time using the best bounds on rectangular matrix multiplication [17]. Following them in complexity, there are the Equality Product problem [15, 28] and the Dominance Product problem [20, 34], which are known to have Õ(n^{2.66}) time algorithms.

The Equality Product of two n × n integer matrices A and B is the n × n matrix C with C[i,j] = |{k | A[i,k] = B[k,j]}|. The Boolean version of the problem, ∃Equality Product, asks to determine whether |{k | A[i,k] = B[k,j]}| > 0 for each i, j. The Dominance Product of integer matrices A and B is the matrix C with C[i,j] = |{k | A[i,k] ≤ B[k,j]}|. The Boolean version of the problem, ∃Dominance Product, asks whether |{k | A[i,k] ≤ B[k,j]}| > 0 for each i, j. While the regular versions of Equality Product and Dominance Product are known to be equivalent [15, 28], their Boolean versions are not known to be, although ∃Dominance Product can be reduced to O(log n) instances of ∃Equality Product [15, 28].

There are also several intermediate problems for which fast rectangular matrix multiplication brings no improvement, and whose running times are all Õ(n^{(3+ω)/2}) = Õ(n^{2.687}). These problems include, for instance, the aforementioned AE-Mono∆ problem, the (min,≤)-product problem, the Max-Min Product problem studied in relation to All-Pairs Bottleneck Paths and All-Pairs Nondecreasing Paths [8, 9, 24], and the related (min,=)-product which we introduce in this paper.

Lincoln et al. [18] gave reductions from unweighted directed APSP to Max-Min Product and AE-Mono∆, showing that if there exists a T(n) time algorithm for Max-Min Product or AE-Mono∆, then one can also obtain an Õ(T(n)) time algorithm for unweighted directed APSP. They also gave reductions from Min-Witness Product to Max-Min Product and AE-Mono∆.

These reductions are not tight when ω > 2, since they are reductions from a seemingly easier problem (for which improvements via rectangular matrix multiplication are known) to a harder problem (for which no improvements via rectangular matrix multiplication are known). For instance, in order to use the reduction to AE-Mono∆ to get a better algorithm for unweighted directed APSP, one would need to obtain a better than Õ(n^{2.529}) time algorithm for AE-Mono∆. This doesn't seem doable with current techniques. Hence a natural question is whether one can obtain tight reductions between some of the intermediate problems.

As a first result in this direction, we show that AE-Mono∆ is the hardest "intermediate" problem among the ones mentioned above, as all of the problems can be reduced to it. A key step in the reduction is to study a new problem called All-Edges Monochromatic Equality Triangle (AE-MonoEq∆), which can be viewed as a combination of AE-Mono∆ and ∃Equality Product. We delay its formal definition to Section 2. As the first step in the reductions, we reduce AE-Mono∆ to AE-MonoEq∆.

Theorem 1.7.
If AE-Mono∆ has an Õ(n^{(3+ω)/2−ε}) time algorithm for 0 ≤ ε ≤ ω − 2, then AE-MonoEq∆ has an Õ(n^{(3+ω)/2−δ}) time algorithm for some δ ≥ 0. Moreover, if ε > 0 then δ > 0.

If we use ε = 0 in Theorem 1.7, we immediately get an Õ(n^{(3+ω)/2}) time algorithm for AE-MonoEq∆, showing that AE-MonoEq∆ is indeed another "intermediate" problem. If ω > 2, we can use ε > 0 in Theorem 1.7 to get that any slight improvement over the current best algorithm for AE-Mono∆ implies an improved algorithm for AE-MonoEq∆.

As suggested by the names of the problems, AE-Mono∆ is a special case of AE-MonoEq∆ (see Section 4 for the formal definition), so that any runtime improvements for AE-MonoEq∆ imply runtime improvements for AE-Mono∆. Thus, AE-MonoEq∆ and AE-Mono∆ are sub-n^{(3+ω)/2}-fine-grained equivalent when ω > 2.

Next, we show that one can reduce all the "intermediate" problems mentioned above to AE-MonoEq∆.

Theorem 1.8.
Suppose there exists a T(n) time algorithm for AE-MonoEq∆. Then there exist Õ(T(n)) time algorithms for all of the following:

• unweighted directed APSP,
• Min-Witness Product,
• ∃Equality Product,
• ∃Dominance Product,
• (min,=)-product,
• Max-Min Product,
• (min,≤)-product.

Some reductions in the above theorem were already known. First of all, both Max-Min Product and ∃Dominance Product can be reduced to (min,≤)-product [24]. Next, both Min-Witness Product and unweighted directed APSP can be reduced to Max-Min Product [18]. As mentioned earlier, ∃Dominance Product reduces to ∃Equality Product, and the latter easily reduces to (min,=)-product. Thus, to prove Theorem 1.8, it suffices to give reductions from (min,=)-product and (min,≤)-product to AE-MonoEq∆.

Since AE-MonoEq∆ has an Õ(n^{(3+ω)/2}) time algorithm, Theorem 1.8 provides alternative algorithms for (min,=)-product and Max-Min Product. These new algorithms are also potentially simpler, as they do not involve dealing with sparse matrix products, which were the main source of difficulty in the previous Õ(n^{(3+ω)/2}) time algorithms for these problems.

Combining Theorem 1.7 and Theorem 1.8, we obtain that AE-Mono∆ is the hardest "intermediate" problem among all "intermediate" problems considered in [18], in the sense that if there is any improvement for AE-Mono∆ over the Õ(n^{(3+ω)/2}) running time when ω > 2, there will also be improvements for (min,=)-product, Max-Min Product and (min,≤)-product.

AE-MonoEq∆ can be viewed as many independent instances of a problem called AE-Eq∆, in which we are given a graph G with edge values, and we are asked to decide for each edge e in the graph whether it is in a triangle such that at least two of its three edges share the same edge value.
Via techniques used in the proofs of [18], we can show that computing a single instance of AE-MonoEq∆ of size n is equivalent, up to poly-logarithmic factors, to computing a certain number of instances of AE-Eq∆ on graphs with n vertices where the total number of edges across all instances is Θ(n^2).

Motivated by the simple nature of AE-Eq∆ and its relationship to our AE-MonoEq∆ problem, we consider the monochromatic versions of other intermediate problems. The most interesting of these are arguably the monochromatic versions of ∃Equality Product and (min,=)-product, which we call Monochromatic Equality Product and Monochromatic (min,=)-product.

In the Monochromatic Equality Product problem (MonoEq), we are given a tripartite graph G on vertex parts I ∪ J ∪ K. Each edge e in G has a color c(e). All edges e in I × K and J × K have a value v(e). For every (i, j), we need to decide if there exists k such that v[i,k] = v[j,k] and c[i,k] = c[j,k] = c[i,j]. MonoEq can be regarded as a combination of many sparse ∃Equality Product instances, where we are given sparse matrices A and B, and we need to compute their ∃Equality Product result on a small number of entries. MonoEq is a special case of AE-MonoEq∆, so there exists an Õ(n^{(3+ω)/2}) time algorithm for it.

The input to the Monochromatic (min,=)-product (MonoMinEq) problem is the same as the input to MonoEq. For the output, instead of only determining for each (i, j) the existence of k such that v[i,k] = v[j,k] and c[i,k] = c[j,k] = c[i,j], we also have to output the minimum value of such a v[i,k].
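To make the input/output convention concrete, here is a brute-force MonoMinEq sketch. The matrix-style encoding (colors and values as n × n tables with None marking a missing edge) is our assumption for illustration, not the paper's notation; the decision problem MonoEq is recovered by testing whether an entry is finite.

```python
import math

def mono_min_eq(cIJ, cIK, cJK, vIK, vJK):
    """Brute-force Monochromatic (min,=)-product in O(n^3) time.

    For every present I x J edge (i, j), return the minimum vIK[i][k]
    over all k with vIK[i][k] == vJK[j][k] and
    cIK[i][k] == cJK[j][k] == cIJ[i][j]; infinity if no such k exists.
    Entries for absent (i, j) edges are left as None.
    """
    n = len(cIJ)
    out = [[None] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if cIJ[i][j] is None:
                continue  # no (i, j) edge, nothing to report
            best = math.inf
            for k in range(n):
                if cIK[i][k] is None or cJK[j][k] is None:
                    continue  # one of the two K-edges is missing
                if vIK[i][k] == vJK[j][k] and \
                        cIK[i][k] == cJK[j][k] == cIJ[i][j]:
                    best = min(best, vIK[i][k])
            out[i][j] = best
    return out
```

This is only the cubic-time semantics of Definition 2.18; the point of the paper is that the monochromatic structure admits Õ(n^{(3+ω)/2}) time algorithms.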
MonoMinEq can be viewed as a combination of many sparse (min,=)-product instances.

The best known algorithms for ∃Equality Product and (min,=)-product have different running times, and it is unclear whether they are equivalent; clearly ∃Equality Product reduces to (min,=)-product, but a reduction in the other direction would imply an improvement over the known algorithms for (min,=)-product. Surprisingly, we are able to show that the monochromatic versions, MonoEq and MonoMinEq, are equivalent up to poly-logarithmic factors.

Theorem 1.9.
If there exists a T(n) time algorithm for Monochromatic Equality Product, then there exists an Õ(T(n)) time algorithm for Monochromatic (min,=)-product, and vice versa.

Theorem 1.9 also implies an Õ(n^{(3+ω)/2}) time algorithm for MonoMinEq.

2 Preliminaries
In this section, we recall formal definitions of the problems considered in this paper and define notation used in the proofs.
Definition 2.1 (3SUM). Given n integers in {−n^c, ..., n^c} for constant c, determine if three of the integers sum to 0.

Conjecture 2.2 (3SUM hypothesis). In the word-RAM model with O(log n) bit words, there is no O(n^{2−ε}) time algorithm for 3SUM for any ε > 0.
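For concreteness, the O(n^2) time upper bound in Definition 2.1 can be realized by the standard sort-and-two-pointers routine; this is a textbook sketch, not an algorithm from this paper.

```python
def three_sum(nums):
    """Decide whether three of the input integers sum to zero.

    Classic O(n^2) approach: sort once, then for each anchor a[i]
    sweep the remaining suffix with two pointers.
    """
    a = sorted(nums)
    n = len(a)
    for i in range(n - 2):
        lo, hi = i + 1, n - 1
        while lo < hi:
            s = a[i] + a[lo] + a[hi]
            if s == 0:
                return True
            if s < 0:
                lo += 1  # total too small: grow the middle term
            else:
                hi -= 1  # total too large: shrink the last term
    return False
```

The 3SUM hypothesis asserts that no algorithm beats this quadratic bound by a polynomial factor.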
Definition 2.3 (APSP). Given an n-node directed graph with integer weights in {−n^c, ..., n^c} and no negative cycles, compute the shortest paths distance between every pair of vertices.

Conjecture 2.4 (APSP hypothesis). In the word-RAM model with O(log n) bit words, there is no O(n^{3−ε}) time algorithm for APSP for any ε > 0.

Definition 2.5 (Exact Triangle, Exact∆). Given an n-node graph with integer edge weights in {−n^c, ..., n^c} for some constant c and a target T, decide whether there are three vertices p, q, r so that w(p,q) + w(q,r) + w(r,p) = T.

Conjecture 2.6 (Exact Triangle hypothesis). In the word-RAM model with O(log n) bit words, there is no O(n^{3−ε}) time algorithm for Exact Triangle for any ε > 0.

It is known that either the 3SUM hypothesis or the APSP hypothesis implies the Exact Triangle hypothesis [31, 33].
Definition 2.7 (Zero Triangle, Zero∆). Given an n-node graph with integer edge weights in {−n^c, ..., n^c} for some constant c, decide whether there are three vertices p, q, r so that w(p,q) + w(q,r) + w(r,p) = 0.

It is known that Exact Triangle is sub-cubic fine-grained equivalent to Zero Triangle [33].
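The O(n^3) brute force for Exact Triangle mentioned in the introduction is just a scan over all vertex triples; a minimal sketch (our dict-of-edge-weights encoding, not the paper's):

```python
from itertools import combinations

def exact_triangle(n, w, target):
    """Brute-force Exact Triangle in O(n^3) time.

    `w` maps sorted vertex pairs (p, q) with p < q to integer edge
    weights; missing keys mean no edge.  Zero Triangle (Definition 2.7)
    is the special case target = 0.
    """
    for p, q, r in combinations(range(n), 3):
        if (p, q) in w and (q, r) in w and (p, r) in w:
            if w[(p, q)] + w[(q, r)] + w[(p, r)] == target:
                return True
    return False
```

The Exact Triangle hypothesis says this cubic scan is essentially optimal.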
Definition 2.8 (All-Edges Sparse Triangle, AE-Sparse∆). Given an n-node m-edge graph G = (V, E), determine for every edge e ∈ E whether e is in a triangle.

Definition 2.9 (All-Pairs Shortest Paths in directed unweighted graphs, UnweightedAPSP). Given an n-node directed unweighted graph G = (V, E), compute the shortest paths distance between every pair of vertices.

We define two problems investigated in [14]: SetDisjointness and SetIntersection.

Definition 2.10 (SetDisjointness). Given a universe U, a family F ⊆ 2^U of subsets of U, and q pairs of queries (S, S′) ∈ F × F, determine for each query (S, S′) whether S ∩ S′ is empty.

Definition 2.11 (SetIntersection). Given a universe U, a family F ⊆ 2^U of subsets of U, q pairs of queries (S, S′) ∈ F × F and a number T, output elements of S ∩ S′ for each query (S, S′). It is allowed to terminate the algorithm once it outputs T elements in total.

2.4 Matrix Product Problems

Definition 2.12 (Minimum Witness matrix product, MinWitness). Given two n × n Boolean matrices A, B, compute an n × n matrix C such that C_{i,j} = min({k : A_{i,k} = B_{k,j} = 1} ∪ {∞}).

Definition 2.13 (Max-Min Product, (max,min)-product). Given two n × n integer matrices A, B, compute an n × n matrix C such that C_{i,j} = max_k min{A_{i,k}, B_{k,j}}.

We define a generic (⊕,⊗)-product, where ⊕ maps a set of integers to an integer, and ⊗ maps two integers to a Boolean value.

Definition 2.14 ((⊕,⊗)-product). Given two n × n integer matrices A, B, compute an n × n matrix C such that C_{i,j} = ⊕({B_{k,j} : A_{i,k} ⊗ B_{k,j}}_k).

We can define the operation NonEmpty that returns 1 if a set is nonempty, and 0 otherwise. We can therefore define ∃Equality Product as the (NonEmpty, =)-product, and define ∃Dominance Product as the (NonEmpty, ≤)-product. If we define min(∅) = ∞ and max(∅) = −∞, then (min,=)-product, (min,≤)-product and (max,≤)-product all fall into this generic definition without ambiguity.

Definition 2.15 (All-Edges Monochromatic Triangle, AE-Mono∆). Given an n-node graph G = (V, E), where each edge e ∈ E has a color c(e), determine for every edge e whether it appears in a monochromatic triangle in G.

Definition 2.16 (All-Edges Monochromatic Equality Triangle, AE-MonoEq∆). Given an n-node graph G = (V, E), where each edge e ∈ E has a color c(e) and a value v(e), determine for every edge e whether it appears in a monochromatic triangle in G such that at least two of the triangle's edges share the same value.

Definition 2.17 (Monochromatic Equality Product, MonoEq). Given a graph G = (I ∪ J ∪ K, E), where |I| = |J| = |K| = n. Each edge e in the graph has a color c(e). Each edge e in I × K and J × K has a value v(e). For every (i, j), decide if there exists k such that v(i,k) = v(j,k) and c(i,k) = c(j,k) = c(i,j).

Definition 2.18 (Monochromatic (min,=)-product, MonoMinEq). Given a graph G = (I ∪ J ∪ K, E), where |I| = |J| = |K| = n. Each edge e in the graph has a color c(e). Each edge e in I × K and J × K has a value v(e). For every (i, j), compute min({v(i,k) : v(i,k) = v(j,k) ∧ c(i,k) = c(j,k) = c(i,j)}_k ∪ {∞}).

Definition 2.19 (Monochromatic (min,≤)-product). Given a graph G = (I ∪ J ∪ K, E), where |I| = |J| = |K| = n. Each edge e in the graph has a color c(e). Each edge e in I × K and J × K has a value v(e). For every (i, j), compute min({v(j,k) : v(i,k) ≤ v(j,k) ∧ c(i,k) = c(j,k) = c(i,j)}_k ∪ {∞}).
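The generic (⊕,⊗)-product of Definition 2.14 can be instantiated directly. The cubic-time sketch below spells out the semantics only (it is not one of the fast algorithms discussed in this paper); the instantiations match ∃Equality Product, ∃Dominance Product, (min,=)-product and (min,≤)-product as defined above.

```python
import math

def generic_product(A, B, combine, test):
    """(⊕,⊗)-product semantics for n x n integer matrices:
    C[i][j] = combine([B[k][j] for all k with test(A[i][k], B[k][j])]).
    Plain O(n^3) evaluation of Definition 2.14.
    """
    n = len(A)
    return [[combine([B[k][j] for k in range(n) if test(A[i][k], B[k][j])])
             for j in range(n)] for i in range(n)]

def exists_equality(A, B):
    # ∃Equality Product = (NonEmpty, =)-product.
    return generic_product(A, B, lambda s: len(s) > 0, lambda a, b: a == b)

def exists_dominance(A, B):
    # ∃Dominance Product = (NonEmpty, ≤)-product.
    return generic_product(A, B, lambda s: len(s) > 0, lambda a, b: a <= b)

def min_eq(A, B):
    # (min,=)-product, with min(∅) = ∞.
    return generic_product(A, B, lambda s: min(s, default=math.inf),
                           lambda a, b: a == b)

def min_leq(A, B):
    # (min,≤)-product, with min(∅) = ∞.
    return generic_product(A, B, lambda s: min(s, default=math.inf),
                           lambda a, b: a <= b)
```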
2.6 Notation

For a graph G = (V, E), a node v ∈ V and U ⊆ V, we use deg(v, U) to denote |{u ∈ U : (v, u) ∈ E}|. We use ω(a, b, c) to denote the rectangular matrix multiplication exponent, i.e. the smallest real number z such that the time to multiply an n^a × n^b matrix by an n^b × n^c matrix is O(n^{z+ε}) for all ε > 0. In particular, let ω = ω(1, 1, 1) be the exponent for square matrix multiplication. It is known that ω ∈ [2, 2.373) [11, 26]. The best known bounds for ω(a, b, c) are in [17].

[Figure 1: The main reduction in Section 3 is from Exact∆/Zero∆ to parameterized versions of triangle detection and triangle listing. This reduction implies hardness for SetDisjointness, SetIntersection, AE-Sparse∆ and AE-Mono∆.]

We define a parameterized version of Triangle Listing. In this version, the graph has three parts of vertices. Each of the three parts can have a different size, but the edge densities between any pair of parts are the same.

Definition 3.1 ((α, β, γ, ρ, t)-All-Edges Triangle Listing). Given a tripartite graph G whose vertex set is A ∪ B ∪ C, such that |A| = n^α, |B| = n^β, |C| = n^γ. Let X, Y ∈ {A, B, C} be two different parts of the graph. For any v ∈ X, deg(v, Y) ≤ O(n^{−ρ}|Y|). The (α, β, γ, ρ, t)-All-Edges Triangle Listing problem asks to list, for each e ∈ E ∩ (A × B), all triangles containing e if there are fewer than t such triangles, or t distinct triangles containing e if there are at least t such triangles.

Definition 3.1 defines the triangle listing problem slightly differently from the usual definition. In previous works (e.g. [5, 10]), the algorithm for triangle listing is required to list up to T triangles in the whole graph, while (α, β, γ, ρ, t)-All-Edges Triangle Listing asks to list up to t triangles for each edge. We also define an unbalanced triangle listing problem where we have to list up to T triangles globally.

Definition 3.2 ((α, β, γ, ρ, T) Triangle Listing). Given a tripartite graph G whose vertex set is A ∪ B ∪ C, such that |A| = n^α, |B| = n^β, |C| = n^γ. Let X, Y ∈ {A, B, C} be two different parts of the graph. For any v ∈ X, deg(v, Y) ≤ O(n^{−ρ}|Y|). The (α, β, γ, ρ, T) Triangle Listing problem asks to list all triangles in G if there are fewer than T triangles, or T distinct triangles in G if there are at least T triangles.

Triangle Listing and All-Edges Triangle Listing are strongly related problems. For instance, it can be shown that (α, β, γ, ρ, t)-All-Edges Triangle Listing can be reduced, up to polylogarithmic factors, to (α, β, γ, ρ, n^{α+β−ρ}t) Triangle Listing, by a reduction similar to the reduction for Theorem 15 in [10]. This reduction means that if we have hardness for (α, β, γ, ρ, t)-All-Edges Triangle Listing, then we also have hardness for (α, β, γ, ρ, T) Triangle Listing when T = n^{α+β−ρ}t. However, this argument won't work when T < n^{α+β−ρ}, since it would require us to set t < 1, which doesn't make sense. Therefore, to circumvent this difficulty, we will directly reduce Exact Triangle to both Triangle Listing and All-Edges Triangle Listing.

The triangle listing problems require us to actually list triangles for some edge e. However, many problems we consider, including All-Edges Sparse Triangle and AE-Mono∆, only require the algorithms to output whether some triangle exists containing edge e. Thus, in order to reduce to these problems, we define the following version of triangle detection.

Definition 3.3 ((α, β, γ, ρ)-All-Edges Sparse Triangle). Given a tripartite graph G whose vertex set is A ∪ B ∪ C, such that |A| = n^α, |B| = n^β, |C| = n^γ. Let X, Y ∈ {A, B, C} be two different parts of the graph. For any v ∈ X, deg(v, Y) ≤ O(n^{−ρ}|Y|). The (α, β, γ, ρ)-All-Edges Sparse Triangle problem asks to determine, for each e ∈ E ∩ (A × B), whether e is in a triangle or not.

Now we are ready to present the reduction from Exact Triangle to triangle listing problems.
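As a baseline for the detection version, All-Edges Sparse Triangle can be answered by intersecting neighbor sets, always scanning the smaller endpoint's neighborhood; this gives the standard O(m^{1.5}) combinatorial bound, which the fast algorithms cited above improve via matrix multiplication. The sketch below is ours, not one of those algorithms.

```python
def ae_sparse_triangle(n, edges):
    """All-Edges Sparse Triangle by neighbor-set intersection.

    For each edge (u, v), report whether some vertex w is adjacent to
    both u and v.  Scanning the smaller of the two adjacency sets makes
    the total work O(sum over edges of min degree) = O(m^1.5).
    """
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    result = {}
    for u, v in edges:
        small, big = (adj[u], adj[v]) if len(adj[u]) <= len(adj[v]) \
            else (adj[v], adj[u])
        # u is not in adj[u] and v is not in adj[v], so any common
        # neighbor found here is a genuine third triangle vertex.
        result[(u, v)] = any(w in big for w in small)
    return result
```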
Theorem 3.4.
Fix constants 0 ≤ α, β, γ ≤ 1 and 0 ≤ ρ < min{α, β, γ}. There exists an Õ(n^{3−min{α,β,γ}+ρ}) time randomized reduction from a Zero Triangle instance with n vertices to Õ(n^{3−α−β−γ+2ρ}) instances of (α, β, γ, ρ, n^{γ−ρ} + 1)-All-Edges Triangle Listing. Similarly, there is also an Õ(n^{3−min{α,β,γ}+ρ}) time randomized reduction from a Zero Triangle instance with n vertices to Õ(n^{3−α−β−γ+2ρ}) instances of (α, β, γ, ρ, n^{α+β+γ−2ρ} + 1) Triangle Listing.

Proof.
We will first provide the reduction to All-Edges Triangle Listing. The reduction to Triangle Listing can be obtained via slight modifications.
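As an aside, Step 2 below will need a random prime of magnitude roughly n^k. A minimal sketch of that subroutine (the constant D = 100 and the use of bit_length as a stand-in for log n are our own illustrative choices):

```python
import random

def is_prime(m):
    """Deterministic Miller-Rabin; this base set is exact for all m < 3.3e24."""
    if m < 2:
        return False
    small = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    for q in small:
        if m % q == 0:
            return m == q
    d, s = m - 1, 0
    while d % 2 == 0:
        d, s = d // 2, s + 1
    for a in small:
        x = pow(a, d, m)
        if x in (1, m - 1):
            continue
        for _ in range(s - 1):
            x = x * x % m
            if x == m - 1:
                break
        else:
            return False
    return True

def pick_modulus(n, k, D=100):
    """Sample integers in (4*n^k, D*n^k*log n) until a prime is hit; by the
    Prime Number Theorem this takes O(log n) samples in expectation."""
    lo = 4 * n**k
    hi = D * n**k * max(2, n.bit_length())  # bit_length ~ log2(n)
    while True:
        p = random.randrange(lo, hi)
        if is_prime(p):
            return p
```

Each primality test is polylogarithmic in the sampled integer, so the whole subroutine runs in Õ(1) time, as claimed in Step 2.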
Step 1:
Fix a Zero Triangle instance G. We can randomly assign each of its vertices one of three colors, and only keep edges whose two endpoints have different colors. If we repeat this Θ(log n) times, any zero triangle of G survives in at least one of the resulting graphs with high probability. Thus it suffices to solve Zero Triangle on a tripartite graph.

Suppose G is a tripartite graph with vertex parts A, B, C. First, we split the vertex parts A, B, C into parts of size n^α, n^β, n^γ respectively. We can enumerate all O(n^{3−α−β−γ}) triples of parts, and detect a zero triangle within each triple of parts. In the remainder of the reduction, it suffices to reduce each individual unbalanced zero triangle instance with vertex set sizes |A| = n^α, |B| = n^β, |C| = n^γ to (α, β, γ, ρ, t)-All-Edges Triangle Listing instances.

Step 2:
We assume the edge weights w(·,·) in the Zero Triangle instance are integers whose absolute values are bounded by n^k for some constant k ≥ 1. We pick an arbitrary prime p between 4n^k and Dn^k log n for a large enough constant D. By the Prime Number Theorem, a random integer in this range is a prime with probability Ω(1/log n), so it takes Õ(1) time to find such a prime by randomly picking integers in this range and testing their primality. After we determine p, we can regard all the weights of the graph as elements of F_p by taking the weight of every edge modulo p. Since the weight of each triangle is in [−3n^k, 3n^k] while p > 3n^k, the set of zero triangles with respect to the new weights stays the same.

Step 3:
Let x ∈ F_p be a random element of F_p, and let y_v ∈ F_p be independent random elements of F_p for every v ∈ A ∪ B ∪ C. As illustrated in Figure 2, for every edge e = (a, b) ∈ E ∩ (A × B), we set its new weight to w'(a, b) = x·w(e) − y_b + y_a; for every edge e = (a, c) ∈ E ∩ (A × C), we set its new weight to w'(a, c) = x·w(e) − y_a + y_c; for every edge e = (b, c) ∈ E ∩ (B × C), we set its new weight to w'(b, c) = x·w(e) − y_c + y_b. Let the graph with the new weights be G'.

Clearly, as long as x ≠ 0, the set of zero triangles with weights w'(·,·) is exactly the same as the set of zero triangles with weights w(·,·): the y terms cancel around any triangle, so the weight of a triangle under w' is exactly x times its weight under w. Thus, the only source of false positives is the event x = 0, which occurs with probability 1/p ≤ 0.01.

Figure 2: "Randomizing" the weights: edge (a, b) gets weight x·w(a, b) − y_b + y_a, edge (a, c) gets x·w(a, c) − y_a + y_c, and edge (b, c) gets x·w(b, c) − y_c + y_b.
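A toy sketch of this weight randomization, together with the bucket-triple enumeration used in the next step (the matrix representation of the weights and the concrete bucketing z ↦ ⌊zB/p⌋, which realizes contiguous ranges of size between ⌊p/B⌋ and ⌈p/B⌉, are our own illustrative choices):

```python
import random

def randomize_weights(wAB, wAC, wBC, p):
    """Step 3: w'(e) = x*w(e) plus/minus y-terms over F_p; the y's cancel
    around any triangle, so for x != 0 zero triangles are exactly preserved."""
    x = random.randrange(1, p)  # condition on the good event x != 0
    yA = [random.randrange(p) for _ in wAB]
    yB = [random.randrange(p) for _ in wAB[0]]
    yC = [random.randrange(p) for _ in wAC[0]]
    nAB = [[(x * w - yB[b] + yA[a]) % p for b, w in enumerate(r)]
           for a, r in enumerate(wAB)]
    nAC = [[(x * w - yA[a] + yC[c]) % p for c, w in enumerate(r)]
           for a, r in enumerate(wAC)]
    nBC = [[(x * w - yC[c] + yB[b]) % p for c, w in enumerate(r)]
           for b, r in enumerate(wBC)]
    return nAB, nAC, nBC

def cover(a, b, p, B):
    """Bucket ids hit by the integer interval [a, b] taken mod p (b - a < p)."""
    a, b = a % p, b % p
    if a <= b:
        return set(range(a * B // p, b * B // p + 1))
    return cover(a, p - 1, p, B) | cover(0, b, p, B)

def zero_sum_triples(p, B):
    """Step 4: all (i, j, k) with 0 in L_i + L_j + L_k (mod p), where L_i is
    the i-th contiguous bucket under z -> z*B // p; only O(1) buckets k
    survive for each pair (i, j)."""
    lo = [-(-i * p // B) for i in range(B + 1)]  # bucket boundaries (ceil)
    out = set()
    for i in range(B):
        for j in range(B):
            s_lo = lo[i] + lo[j]
            s_hi = (lo[i + 1] - 1) + (lo[j + 1] - 1)
            for k in cover(-s_hi, -s_lo, p, B):
                out.add((i, j, k))
    return out
```

`zero_sum_triples(p, B)` returns O(B²) triples rather than all B³, which is exactly the saving the reduction exploits.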
Step 4:
We split F_p into n^ρ contiguous ranges L_1, ..., L_{n^ρ}, so that each range has size between ⌊p/n^ρ⌋ and ⌈p/n^ρ⌉. We enumerate every triple (i, j, k) such that 0 ∈ L_i + L_j + L_k. For every pair (i, j), the size of L_i + L_j is O(p/n^ρ). In order for 0 ∈ L_i + L_j + L_k, we need L_k ∩ −(L_i + L_j) ≠ ∅. Since each L_k has size Θ(p/n^ρ), there can be at most O(1) ranges L_k that intersect −(L_i + L_j). Thus, the total number of such triples is O(n^{2ρ}).

For each triple (i, j, k), we consider a subset of edges E_{i,j,k} defined as

{e ∈ E ∩ (A × C) : w'(e) ∈ L_i} ∪ {e ∈ E ∩ (B × C) : w'(e) ∈ L_j} ∪ {e ∈ E ∩ (A × B) : w'(e) ∈ L_k}.

Let G_{i,j,k} = (A ∪ B ∪ C, E_{i,j,k}) be the subgraph of G' with edge set E_{i,j,k}. Clearly, if graph G' has a zero triangle, one of the G_{i,j,k} will have a zero triangle, and vice versa.

Now we modify G_{i,j,k} so that the degree of every vertex is bounded. For each v ∈ A, if deg(v, B) > 100|B|n^{−ρ} + 200 or deg(v, C) > 100|C|n^{−ρ} + 200, we remove the vertex v from graph G_{i,j,k}. We similarly handle vertices in parts B and C that have large degrees.

Finally, we use an algorithm for (α, β, γ, ρ, t)-All-Edges Triangle Listing with t = 900n^{γ−2ρ} + 1 on graph G_{i,j,k} to list up to 900n^{γ−2ρ} + 1 triangles for each edge e ∈ E_{i,j,k} ∩ (A × B). If any of these listed triangles is a zero triangle in the original graph G, we return YES to the Zero Triangle instance.

Step 5:
We repeat the previous steps 100 log n times. If no zero triangle is found in any of these 100 log n tries, we return NO to the Zero Triangle instance.

Analysis
It should be clear from the in-text explanations why Steps 1 through 3 work. Now we prove why Steps 4 and 5 work.

Claim 3.5.
Fix any zero triangle (a, b, c) in G_{i,j,k} where a ∈ A, b ∈ B, c ∈ C. With probability at least 0.82, none of a, b, c will be removed in Step 4 due to having a large degree.

Proof. First consider deg(a, B). For any b' ∈ B \ {b}, w'(a, b') = x·w(a, b') − y_{b'} + y_a. Conditioned on the fact that (a, b), (b, c), (c, a) ∈ E_{i,j,k}, y_{b'} is a completely fresh random variable. Therefore,

Pr[(a, b') ∈ E_{i,j,k}] = Pr[x·w(a, b') − y_{b'} + y_a ∈ L_k] = |L_k|/p ≤ ⌈pn^{−ρ}⌉/p ≤ n^{−ρ} + 1/p.

Therefore, the expected value of deg(a, B) can be bounded as

E[deg(a, B)] ≤ 1 + E[Σ_{b' ≠ b} 1[(a, b') ∈ E_{i,j,k}]] ≤ 1 + |B|n^{−ρ} + |B|/p ≤ 3|B|n^{−ρ}.

By Markov's inequality, Pr[deg(a, B) > 200 + 100|B|n^{−ρ}] ≤ 0.03. We can apply the same argument to deg(a, C), deg(b, A), deg(b, C), deg(c, A) and deg(c, B) and take a union bound. Thus, with probability at least 1 − 6·0.03 = 0.82, all of these degrees will be small enough so that none of a, b, c are removed. □

We also need to show that listing 900n^{γ−2ρ} + 1 triangles for each edge will be enough, i.e. there are not too many false positives for each edge e ∈ E_{i,j,k} ∩ (A × B).

Claim 3.6.
Fix any zero triangle (a, b, c) in G_{i,j,k} where a ∈ A, b ∈ B, c ∈ C. The number of vertices c' such that

1) (a, c'), (b, c') ∈ E_{i,j,k}, and

2) w(a, b) + w(b, c') + w(c', a) ≠ 0 (recall that w(·,·) is viewed as elements of F_p, so all operations are modulo p)

is at most 900n^{γ−2ρ} with probability at least 0.99.

Proof. Let c' be any vertex in C such that w(a, b) + w(b, c') + w(c', a) ≠ 0. If (a, c'), (b, c') ∈ E_{i,j,k}, then w'(a, c'), w'(a, c) ∈ L_i and w'(b, c'), w'(b, c) ∈ L_j. Since L_i and L_j are both ranges of size at most ⌈pn^{−ρ}⌉, we must have

w'(a, c) − w'(a, c') ∈ [−pn^{−ρ}, pn^{−ρ}],
w'(b, c) − w'(b, c') ∈ [−pn^{−ρ}, pn^{−ρ}].

We can expand out the definition of w' to get

x·(w(a, c) − w(a, c')) + (y_c − y_{c'}) ∈ [−pn^{−ρ}, pn^{−ρ}],
x·(w(b, c) − w(b, c')) + (y_{c'} − y_c) ∈ [−pn^{−ρ}, pn^{−ρ}].   (1)

Each of the two values in Equation (1) is clearly uniformly random. To show these two values are independent, we consider their sum, which is x·(w(a, c) + w(b, c) − w(a, c') − w(b, c')). Since w(a, b) + w(b, c') + w(c', a) ≠ 0, while w(a, b) + w(b, c) + w(c, a) = 0, we have w(a, c) + w(b, c) ≠ w(a, c') + w(b, c'). Thus, the sum of the two values in Equation (1) is the product of x with a nonzero value, so the sum is itself a uniformly random variable. Conditioned on the sum, x·(w(a, c) − w(a, c')) + (y_c − y_{c'}) is still uniformly random, since the sum does not contain the y_{c'} term. Thus the two values in Equation (1) are independent. Therefore, the probability that Equation (1) holds is at most (2n^{−ρ} + 1/p)² ≤ 9n^{−2ρ}.

Summing over all c' ∈ C, the expected number of c' satisfying conditions 1) and 2) is at most 9n^{γ−2ρ}. Thus, by Markov's inequality, the probability that the number of such c' exceeds 900n^{γ−2ρ} is at most 0.01. □

If an edge (a, b) is in a zero triangle (a, b, c) in the original graph G, then this zero triangle is preserved in one instance G_{i,j,k} before removing any vertices. Then we will report a triangle containing edge (a, b) as long as

1) we don't remove any of a, b, c in the vertex removal process in Step 4 (which happens with probability at least 0.82 by Claim 3.5);

2) the number of nonzero triangles containing (a, b) in G_{i,j,k} is at most 900n^{γ−2ρ} (which happens with probability at least 0.99 by Claim 3.6).

Therefore, by a union bound, each iteration reports at least one zero triangle with constant probability if the original graph has a zero triangle. Thus, repeating the iterations O(log n) times suffices.

Running Time:
We split the n-node graph into n^{3−α−β−γ} unbalanced graphs in Step 1. For each unbalanced graph, we reduce it to O(n^{2ρ}) instances of (α, β, γ, ρ, 900n^{γ−2ρ} + 1)-All-Edges Triangle Listing. The running time of the reduction is linear (up to poly-logarithmic factors) in the total input size of all the triangle listing instances. Thus, the running time is

Õ(n^{3−α−β−γ+2ρ} · (n^{α+β−ρ} + n^{β+γ−ρ} + n^{α+γ−ρ})) = Õ(n^{3−min{α,β,γ}+ρ}).

Proof of the reduction from Zero Triangle to Triangle Listing
The reduction is largely the same as the previous reduction. The only difference is that now, in Step 4, we use the (α, β, γ, ρ, T) Triangle Listing algorithm on graph G_{i,j,k} to list up to T = 8100n^{α+β+γ−3ρ} + 1 triangles, and test whether any of these triangles is a zero triangle. The correctness analysis of this reduction requires a more careful analysis of the expected number of triangles in G_{i,j,k}.

Claim 3.7.
Fix any triple (i, j, k) so that G_{i,j,k} contains at least one zero triangle with respect to the weights w(·,·). With probability at least 0.99, the number of triangles in G_{i,j,k} that are not zero triangles in the original graph G is at most 8100n^{α+β+γ−3ρ}.

Proof. Let (a, b, c) ∈ G_{i,j,k} be an arbitrary zero triangle in G_{i,j,k}. We analyze the probability that each nonzero triangle (a', b', c') ∈ G belongs to G_{i,j,k}. In Claim 3.6, we already analyzed the case of triangles with a = a' and b = b', in which case the expected number of (a', b', c') inside G_{i,j,k} is at most 9n^{γ−2ρ}. We can similarly bound the expected number of triangles (a', b', c') in G_{i,j,k} that share two vertices with triangle (a, b, c). The expected number of such triangles can be shown to be at most 9(n^α + n^β + n^γ)n^{−2ρ}.

Thus, it suffices to analyze the remaining cases, when (a, b, c) and (a', b', c') share one or zero common vertices. First, consider triangles (a', b', c') that share one vertex with (a, b, c). Without loss of generality, assume such triangles have the form (a, b', c') for some b ≠ b', c ≠ c'. In order for (a, b, c) and (a, b', c') to land in the same edge set E_{i,j,k}, we must have

x·(w(a, b) − w(a, b')) − y_b + y_{b'} ∈ [−pn^{−ρ}, pn^{−ρ}],
x·(w(a, c) − w(a, c')) + y_c − y_{c'} ∈ [−pn^{−ρ}, pn^{−ρ}],
x·(w(b, c) − w(b', c')) − y_c + y_b + y_{c'} − y_{b'} ∈ [−pn^{−ρ}, pn^{−ρ}].

Let X_{a,b}, X_{a,c}, X_{b,c} be the random variables denoting the three expressions in the above condition respectively. We will show that X_{a,b}, X_{a,c}, X_{b,c} are independent. First, we analyze the sum of these three random variables. The sum X_{a,b} + X_{a,c} + X_{b,c} equals

x·(w(a, b) + w(a, c) + w(b, c) − w(a, b') − w(a, c') − w(b', c')).

Note that (a, b', c') is not a zero triangle, while (a, b, c) is. Thus, the sum is the product of x and a nonzero value, so the result is a uniformly random value. X_{a,b} has an additive term y_{b'} that is independent of X_{a,b} + X_{a,c} + X_{b,c}, so X_{a,b} is independent of X_{a,b} + X_{a,c} + X_{b,c}. Similarly, X_{a,c} has an additive term y_{c'}, which is independent of (X_{a,b} + X_{a,c} + X_{b,c}, X_{a,b}). Thus, X_{a,b} + X_{a,c} + X_{b,c}, X_{a,b} and X_{a,c} are mutually independent, which implies X_{a,b}, X_{a,c} and X_{b,c} are independent.

The probability that X_{a,b} ∈ [−pn^{−ρ}, pn^{−ρ}] is at most (2pn^{−ρ} + 1)/p ≤ 3n^{−ρ}. Similarly, the probabilities that X_{a,c} and X_{b,c} lie in [−pn^{−ρ}, pn^{−ρ}] are both at most 2n^{−ρ} + 1/p ≤ 3n^{−ρ}. Since these three random variables are independent, the probability that all three are in [−pn^{−ρ}, pn^{−ρ}] is at most 27n^{−3ρ}. This means that (a, b, c) and (a, b', c') are in the same edge set E_{i,j,k} with probability at most 27n^{−3ρ}. More generally, if (a', b', c') shares exactly one common vertex with (a, b, c), it is in E_{i,j,k} with probability at most 27n^{−3ρ}. Triangles that share zero vertices with (a, b, c) can be analyzed similarly, and each of them is in E_{i,j,k} with probability at most 27n^{−3ρ} as well.

Thus, the expected number of nonzero triangles in G_{i,j,k} is at most 27n^{α+β+γ−3ρ} + 9(n^α + n^β + n^γ)n^{−2ρ}. By Markov's inequality, with probability at least 0.99, the number of nonzero triangles in G_{i,j,k} is at most 2700n^{α+β+γ−3ρ} + 900(n^α + n^β + n^γ)n^{−2ρ}. Since ρ < min{α, β, γ}, we have n^{α−2ρ}, n^{β−2ρ}, n^{γ−2ρ} ≤ n^{α+β+γ−3ρ}. Therefore, 2700n^{α+β+γ−3ρ} + 900(n^α + n^β + n^γ)n^{−2ρ} ≤ 8100n^{α+β+γ−3ρ}. □

By Claim 3.7, it suffices to list 8100n^{α+β+γ−3ρ} + 1 triangles in each graph G_{i,j,k}. □

Theorem 3.4 shows hardness for listing triangles in some special graphs. To show hardness for detecting triangles, we still need a reduction from triangle listing to triangle detection.
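The structure-preserving reduction we give next (Proposition 3.8) recovers, for each edge, the witness of a unique triangle by bit-slicing the part C. A toy sketch of that recovery step (our own representation: vertices of C are integer labels, and `has_triangle` stands in for the All-Edges Sparse Triangle oracle):

```python
def recover_unique_witnesses(A, B, C, adj, has_triangle):
    """For each A x B edge that lies in a *unique* triangle, recover its
    witness c in C from O(log |C|) calls to an all-edges triangle detector
    restricted to bit-sliced subsets of C. `adj(u, v) -> bool` is the graph;
    `has_triangle(a, b, Csub)` plays the role of the detection oracle."""
    bits = max(1, max(c.bit_length() for c in C))
    cand = {(a, b): 0 for a in A for b in B if adj(a, b)}
    for i in range(bits):
        Ci = [c for c in C if (c >> i) & 1]  # C-vertices with i-th bit set
        for e in cand:
            if has_triangle(e[0], e[1], Ci):
                cand[e] |= 1 << i
    out = {}
    for (a, b), c in cand.items():
        # for edges in 2+ triangles the assembled c may be garbage: verify it
        if c in set(C) and adj(a, c) and adj(b, c):
            out[(a, b)] = c
    return out
```

The final verification mirrors the proof: when the triangle through (a, b) is not unique, the bitwise-assembled candidate need not close a triangle, and the edge is simply dropped.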
A theorem in [10] gives such a reduction, reducing listing O(1) triangles for each edge to detecting whether each edge is in a triangle; however, that reduction changes the structure of the graph. Specifically, it does not necessarily reduce an (α, β, γ, ρ, O(1))-All-Edges Triangle Listing instance to (α, β, γ, ρ)-All-Edges Sparse Triangle instances. Thus, we give a new structure-preserving reduction from triangle listing to all-edges triangle detection. The reduction adapts the techniques for finding the witnesses of Boolean matrix multiplication [3, 27].

Proposition 3.8.
Let α, β, γ be any positive constants and let ρ ≤ min{α, β, γ}. There exists an Õ((n^{α+β} + n^{β+γ} + n^{γ+α})n^{−ρ}) time randomized reduction from an (α, β, γ, ρ, O(1))-All-Edges Triangle Listing instance to Õ(1) instances of (α, β, γ, ρ)-All-Edges Sparse Triangle.

Proof. The reduction proceeds in two parts. We define an intermediate problem called (α, β, γ, ρ)-All-Edges Unique Triangle Listing, where we are given a graph with the same structure as (α, β, γ, ρ)-All-Edges Sparse Triangle instances, and we seek an algorithm that outputs a triangle for every edge e ∈ E ∩ (A × B) only if there is a unique triangle containing edge e; otherwise, the algorithm may output anything for edge e.

In the first part, we show a reduction from (α, β, γ, ρ, O(1))-All-Edges Triangle Listing to (α, β, γ, ρ)-All-Edges Unique Triangle Listing. In the second part, we show a reduction from (α, β, γ, ρ)-All-Edges Unique Triangle Listing to (α, β, γ, ρ)-All-Edges Sparse Triangle.

Fix an (α, β, γ, ρ, k)-All-Edges Triangle Listing instance G for some constant k. For each integer ℓ from 0 to γ log n, we perform a stage. In each stage ℓ, we repeat the following iteration Θ(k² log n) times. In each iteration, we create a new graph G' that contains parts A and B and a subset C' of part C. We obtain C' by independently keeping every vertex c ∈ C with probability 2^{−ℓ}. Then G' is the induced subgraph of G with vertex set A ∪ B ∪ C'. We run an algorithm for (α, β, γ, ρ)-All-Edges Unique Triangle Listing on G' to list at most one triangle for each edge. If the algorithm lists a triangle for edge (a, b), we verify that it is indeed a triangle and, if so, add it to a set S_{(a,b)} that collects the found triangles containing edge (a, b). After all the rounds, we output up to k distinct triangles from each S_{(a,b)}.

To show the correctness of this algorithm, we show that if the actual number of triangles containing edge (a, b) is ∆ for some ∆ ∈ [2^{ℓ−1}, 2^ℓ], then in stage ℓ we will list all, or at least k, triangles containing (a, b) with high probability. In every iteration, we pick a random subset C' ⊆ C. For every (a, b), suppose |S_{(a,b)}| < k and there are more triangles containing (a, b) that have not been found. The probability that we keep a unique triangle not in S_{(a,b)} for edge (a, b) is at least (∆ − |S_{(a,b)}|)·2^{−ℓ}(1 − 2^{−ℓ})^{∆−1} = Ω(1/k). Thus, after every Θ(k log n) iterations, we will find a new triangle for edge (a, b) with high probability if |S_{(a,b)}| < min{k, ∆}. Therefore, Θ(k² log n) iterations in each stage suffice.

Now we show the second part of the reduction, which reduces (α, β, γ, ρ)-All-Edges Unique Triangle Listing to (α, β, γ, ρ)-All-Edges Sparse Triangle. Let G be a graph on which we want to solve (α, β, γ, ρ)-All-Edges Unique Triangle Listing. For every 1 ≤ i ≤ γ log n, we create a graph G_i that contains all vertices of A and B, but only those vertices of C whose i-th bit (in the binary representation of their labels) is 1. Then we run an algorithm for (α, β, γ, ρ)-All-Edges Sparse Triangle on graph G_i. Suppose (a, b) is in a unique triangle (a, b, c). Then if (a, b) is in a triangle in G_i, the i-th bit of c must be 1; otherwise, the i-th bit of c must be 0. Therefore, we will be able to determine c after all the iterations. If (a, b) is not in a unique triangle, then the value c we determine might not form a triangle with (a, b). In this case, we can detect that (a, b, c) does not form a triangle and output nothing for edge (a, b). □

Corollary 3.9.
There exists a reduction from Exact Triangle to Õ(n) instances of All-Edges Sparse Triangle with O(n^{1.5}) edges. Thus, assuming the Exact Triangle hypothesis, there is no O(m^{4/3−ǫ}) time algorithm for All-Edges Sparse Triangle for ǫ > 0.

Proof. If we set α = β = γ = 1 and ρ = 0.5 in Theorem 3.4, we get an Õ(n^{2.5}) time reduction from Exact Triangle to Õ(n) instances of (1, 1, 1, 0.5, O(1))-All-Edges Triangle Listing. By Proposition 3.8, these instances further reduce to Õ(n) instances of (1, 1, 1, 0.5)-All-Edges Sparse Triangle. These instances can be solved by an algorithm for All-Edges Sparse Triangle with O(n^{1.5}) edges.

Thus, if there is an O(m^{4/3−ǫ}) time algorithm for All-Edges Sparse Triangle, we can use it to solve Exact Triangle in Õ(n^{2.5} + n·(n^{1.5})^{4/3−ǫ}) = Õ(n^{2.5} + n^{3−1.5ǫ}) time, breaking the Exact Triangle hypothesis. □

Corollary 3.10.
Assuming the Exact Triangle hypothesis, there is no O(n^{2.5−ǫ}) time algorithm for AE-Mono∆ on n-node graphs for ǫ > 0.

Proof. Combining Corollary 3.9 and Theorem 1.2, we get an Õ(n^{2.5}) time reduction from Exact Triangle of size n to Õ(√n) instances of AE-Mono∆ of size O(n). Thus, if there is an O(n^{2.5−ǫ}) time algorithm for AE-Mono∆, we can use it to solve Exact Triangle in Õ(n^{2.5} + √n · n^{2.5−ǫ}) = Õ(n^{2.5} + n^{3−ǫ}) time, breaking the Exact Triangle hypothesis. □

Corollary 3.11.
Let A be an algorithm for All-Edges Sparse Triangle on n-node graphs where every node has degree at most d = n^δ for some 0 < δ ≤ 0.5. Assuming the Exact Triangle hypothesis, A cannot run in O(n^{1−ǫ}d²) time for ǫ > 0.

Proof. We set α = β = γ = 1 and ρ = 1 − δ in Theorem 3.4. This gives an Õ(n^{3−δ}) time reduction from Exact Triangle to Õ(n^{2−2δ}) instances of (1, 1, 1, ρ, O(1))-All-Edges Triangle Listing. Thus, each of these instances requires n^{1+2δ−o(1)} time. By Proposition 3.8, there is a reduction from (1, 1, 1, ρ, O(1))-All-Edges Triangle Listing to Õ(1) instances of (1, 1, 1, ρ)-All-Edges Sparse Triangle. Therefore, (1, 1, 1, ρ)-All-Edges Sparse Triangle (that is, All-Edges Sparse Triangle in graphs with maximum degree O(n^δ)) also requires n^{1+2δ−o(1)} = (n^{1−o(1)})d² time. □

Corollary 3.12.
For any constant 0 < θ < 1, let A be an algorithm for offline SetDisjointness where |U| = Θ(N^{2−2θ}), |F| = Θ(N), each set in F has at most O(N^{1−θ}) elements from U, and q = Θ(N^{1+θ}). Assuming the Exact Triangle hypothesis, A cannot run in O(N^{2−ǫ}) time for ǫ > 0.

Proof. Set α = β = 0.5, γ = 1 − θ and ρ = 0.5 − θ/2 in Theorem 3.4. Thus there is a subcubic time reduction from Exact Triangle to Õ(n²) instances of (0.5, 0.5, 1−θ, 0.5−θ/2, O(1))-All-Edges Triangle Listing. Thus, assuming the Exact Triangle hypothesis, (0.5, 0.5, 1−θ, 0.5−θ/2, O(1))-All-Edges Triangle Listing requires n^{1−o(1)} time. By Proposition 3.8, (0.5, 0.5, 1−θ, 0.5−θ/2)-All-Edges Sparse Triangle also requires n^{1−o(1)} time. As realized in previous works (e.g. [14]), All-Edges Sparse Triangle can be solved using SetDisjointness. Using the language of the (0.5, 0.5, 1−θ, 0.5−θ/2)-All-Edges Sparse Triangle problem, we can set the universe U to the vertex set C, and set the family F to A ⊔ B. We add an element u ∈ U to a set S ∈ F if the corresponding vertex of S has an edge to the corresponding vertex of u. Finally, for any edge (a, b) ∈ A × B, we add a query for the two sets corresponding to vertices a and b. Then clearly, (a, b) is in a triangle if and only if their corresponding sets intersect. Setting N = √n finishes the proof. □

Corollary 3.13.
For any constants 0 ≤ θ < 1 and δ > 1, let A be an algorithm for offline SetIntersection where |U| = Θ(N^{δ−θ}), |F| = Θ(√(N^{δ+θ})), each set in F has at most O(N^{1−θ}) elements from U, q = Θ(N^{1+θ}), and T = O(N^{3−δ}). Assuming the Exact Triangle hypothesis, A cannot run in O(N^{2−ǫ}) time for ǫ > 0.

Proof. If δ − θ ≥ 2, the lower bound is trivially true because the input size is |F|·N^{1−θ} = N^{1+0.5δ−0.5θ} ≥ N². Thus, we assume δ − θ < 2.

Set α = β = (δ+θ)/(2δ), γ = 1 − θ/δ, and ρ = (δ−1)/δ in Theorem 3.4. This yields a reduction from Exact Triangle to Õ(n^{3−2/δ}) instances of ((δ+θ)/(2δ), (δ+θ)/(2δ), 1−θ/δ, (δ−1)/δ, O(n^{(3−δ)/δ})) Triangle Listing. Assuming the Exact Triangle hypothesis, each triangle listing instance requires n^{2/δ−o(1)} time to compute. Similar to the fact that SetDisjointness can be used to solve All-Edges Sparse Triangle, SetIntersection can be used to solve Triangle Listing. Thus, setting N = n^{1/δ} finishes the proof. □

Figure 3: Main reductions in Section 4, relating AE-Mono∆, AE-MonoEq∆, MonoEq, MonoMinEq, and the (min, ≤), (max, ≤), (min, =) and (max, min) products (the (max, min)-product [24] is computable in Õ(n^{(3+ω)/2}) time). Double arrows represent that the running times before and after the reduction are the same up to poly-logarithmic factors. Dashed arrows represent reductions that hold only when ω > 2.

We will use the following known facts about the exponent of rectangular matrix multiplication in this section.

Theorem 4.1 ([13]). For any k > 0 and for any integer q ≥ 2,

ω(1, k, 1) ≤ log((1 + k)^{1+k} · (q/k)^k) / log q.

Corollary 4.2.
For any δ > 0, there exists a number k ≥ 2 such that ω(1, k, 1) ≤ 1 + k + δ.

Proof. Let k ≥ 2 and use q = k in Theorem 4.1. Thus, ω(1, k, 1) ≤ (1 + k)·log(k+1)/log k. Consider

(1 + k)·log(k+1)/log k − (1 + k) = (1 + k)·(log(k+1) − log k)/log k.

Since log(k+1) − log k = ∫_k^{k+1} dx/(x ln 2) < ∫_k^{k+1} dx/(k ln 2) = 1/(k ln 2), we must have (1 + k)·log(k+1)/log k − (1 + k) ≤ (1 + k)/(k ln 2 · log k) ≤ 4/log k. When k is large enough, 4/log k ≤ δ and thus ω(1, k, 1) ≤ 1 + k + δ. □

We will also use the following fact about the convexity of ω(1, x, 1) (see e.g. [16], [19]).

Fact 4.3.
When 0 < p ≤ k ≤ q,

ω(1, k, 1) ≤ ((k − p)/(q − p))·ω(1, q, 1) + ((q − k)/(q − p))·ω(1, p, 1).

Consider an AE-MonoEq∆ instance G = (V, E). We can copy the vertex set of G three times into parts I, J, K, and then plant the edges of G between I × J, J × K and I × K. Thus we only need to solve AE-MonoEq∆ on tripartite graphs. Say we want to report, for every edge between parts I and J, whether it is in a monochromatic equality triangle. Depending on where the two edges with the same values are, we can split the problem into the three cases shown in Figure 4.

Figure 4: The three cases (a), (b), (c) for AE-MonoEq∆. The red edges represent edge colors, and the black edges represent edge values.

Cases (b) and (c) are symmetric, so it suffices to only consider case (a) and case (b). The following theorem shows that we can solve AE-MonoEq∆ in Õ(n^{(3+ω)/2}) time. Moreover, when ω > 2, if AE-Mono∆ has a better algorithm, so does AE-MonoEq∆. However, when ω = 2, we must have λ = 0 in the theorem, so AE-MonoEq∆ won't necessarily have a better algorithm given a better algorithm for AE-Mono∆.

Theorem 4.4.
Suppose ω ≥ 2 + λ for some λ ≥ 0, and suppose AE-Mono∆ has an O(n^{(3+ω)/2−ǫ}) time algorithm. Then AE-MonoEq∆ has an

Õ(n^{(ω+3)/2 − (λ/(2(κ−1)))·ǫ/(ω+3+λ/(2(κ−1))−2ǫ)})

time algorithm, where κ ≥ 2 is a constant depending on ω and λ.

Proof. We first show the algorithm for AE-MonoEq∆ on tripartite graphs I ∪ J ∪ K with values on edge sets I × K and J × K (Case (a) in Figure 4). We define a good triangle (i, j, k) to be a triangle whose three edges all have the same color and which satisfies v(i, k) = v(j, k). For each edge e ∈ E ∩ (I × J), the algorithm needs to report whether it is in a good triangle or not.

Figure 5: Transforming graph G into graph G' by copying each vertex k ∈ K once per value v and attaching the edges with value v to the copy (k, v).

We will create a new graph G' such that the edges of G' only have colors, instead of having both colors and values. The vertex set of G' will be I ∪ J ∪ (K × V), where V is the set of all values in G (we will create the graph lazily, so we won't actually spend time creating those vertices that end up not having any neighbors). The edge set between I and J in G' is the same as the edge set between I and J in G. For each edge (i, k) ∈ E(G) with value v and color c, we create an edge between i and (k, v) in G' with the same color c. Similarly, for each edge (j, k) ∈ E(G) with value v and color c, we create an edge between j and (k, v) in G' with the same color c. We can see that edge (i, j) is in a good triangle in G if and only if it is in a monochromatic triangle in G'. Thus, from now on, we can focus on the All-Edges Monochromatic Triangle problem on the unbalanced graph G'.

Let D ≥ 1 and T ≥ n be parameters to be fixed later. For each color c, let G'_c be the subgraph of G' consisting of the vertices of G' and all edges with color c; write K' for its third part K × V.
For each vertex (k, v) ∈ K', if its degree in G'_c is at most D, then we can enumerate all pairs of its neighbors and test whether these three vertices form a triangle, in Õ(1) time per pair. If a triangle (i, j, (k, v)) is found during the enumeration, we record that edge (i, j) is in a good triangle. After the enumeration, we can remove the vertex (k, v) together with all edges incident to it from the graph G'_c. Summing over all c and all (k, v), these enumerations take O(n²D) time, since the number of enumerated pairs is at most Σ deg((k, v))² ≤ D·Σ deg((k, v)) = O(n²D). We can now assume that, in each G'_c, the degree of every vertex (k, v) ∈ K' is at least D.

We consider two cases, based on the remaining size of K' in G'_c. As a high-level description: if the size of K' is at least T, we use rectangular matrix multiplication; when the size of K' is less than T, we combine all instances with a small K' across every G'_c into a single AE-Mono∆ instance, and use the assumed O(n^{(3+ω)/2−ǫ}) time algorithm for AE-Mono∆. We describe and analyze these two cases in more detail in the following.

Rectangular matrix multiplication for more unbalanced colors
For a subgraph G'_c, if K' has size at least T, we use rectangular matrix multiplication to determine whether each edge e ∈ E(G'_c) ∩ (I × J) is in a triangle. To do so, we create an integer matrix X of dimension |I| × |K'| and an integer matrix Y of dimension |K'| × |J|. Initially all entries of X and Y are zero. If there is an edge between i ∈ I and k' ∈ K', we set the (i, k')-th entry of X to 1. Similarly, if there is an edge between j ∈ J and k' ∈ K', we set the (k', j)-th entry of Y to 1. It is then not hard to see that edge (i, j) is in a triangle if and only if (XY)_{i,j} > 0. Since |I|, |J| ≤ n, it takes O(n^{ω(1, log_n |K'|, 1)}) time to multiply X and Y.

In order to analyze the rectangular matrix multiplication, we need some bound on ω(1, t, 1) in the regime where ω ≥ 2 + λ.

Claim 4.5.
There exists a constant κ ≥ 2 such that ω(1, t, 1) ≤ ω + (t − 1)(1 − λ/(2(κ−1))) for every 1 ≤ t ≤ 2.

Proof. If ω = 2, then λ must be 0 and thus the claim is trivially true. Now assume ω > 2. Using δ = ω − 2 − λ/2 in Corollary 4.2, there exists κ ≥ 2 such that ω(1, κ, 1) ≤ ω + κ − 1 − λ/2. By convexity of ω(1, ∗, 1), for any 1 ≤ t ≤ κ,

ω(1, t, 1) ≤ ((t−1)/(κ−1))·(ω + κ − 1 − λ/2) + ((κ−t)/(κ−1))·ω = ω + (t − 1)(1 − λ/(2(κ−1))). □

Let t = log_n T. If the size of K' in G'_c is n^{q_c} for some q_c ≥ t, then it takes O(n^{ω(1,q_c,1)}) time to compute XY. Clearly 1 ≤ t ≤ q_c ≤ 2, so n^{ω(1,q_c,1)} = O(n^{ω+(q_c−1)(1−λ/(2(κ−1)))}) by Claim 4.5. Since the degree of every vertex in K' is at least D, and the total number of edges across all G'_c is O(n²), the sum of the sizes of K' over all colors is at most O(n²/D). In other words, Σ_c n^{q_c} = O(n²/D). Therefore, the overall time complexity can be bounded as follows, up to constant factors:

Σ_c n^{ω+(q_c−1)(1−λ/(2(κ−1)))} = n^ω · Σ_c n^{q_c−1}/n^{(q_c−1)λ/(2(κ−1))} ≤ n^ω · Σ_c n^{q_c−1}/n^{(t−1)λ/(2(κ−1))} = O(n^{ω+1+λ/(2(κ−1))}/(D·T^{λ/(2(κ−1))})).

Therefore, this case takes at most O(n^{ω+1+λ/(2(κ−1))}/(D·T^{λ/(2(κ−1))})) time.

All-Edges Monochromatic Triangle for moderately unbalanced colors
In this part, we consider colors c such that the K' part of G'_c has at most T vertices. Since T ≥ n, we can assume the number of vertices in G'_c is at most 2n + T ≤ 3T. Also, summing over all colors c, the total number of edges of the graphs G'_c is at most n².

Note that each graph G'_c is an All-Edges Sparse Triangle instance. We can combine all these instances into a single All-Edges Monochromatic Triangle instance of size Õ(T) by a reduction similar to the proof of Theorem 1.2.

The reduction from All-Edges Sparse Triangle to All-Edges Monochromatic Triangle in Theorem 1.2 works as follows [18]. Let H be a multi-graph on 3T vertices, initially with no edges. For each All-Edges Sparse Triangle instance on vertex set [3T], we take a random permutation of its vertices, copy each edge of this instance to H, and color these edges using a color unique to this instance. In expectation, for every (u, v) ∈ [3T] × [3T], the multiplicity of (u, v) in H is at most n²/(3T)² = O(1). Since the permutations for the graphs G'_c are independent for different colors c, by a Chernoff bound, the multiplicity of every (u, v) is O(log n) with high probability. For every (u, v) ∈ [3T] × [3T], we arbitrarily label its corresponding O(log n) parallel edges using numbers from 1 to O(log n). Then we enumerate O(log³ n) triples (i, j, k) of labels. For each triple (i, j, k), we create a graph on vertex set V_1 ⊔ V_2 ⊔ V_3, where V_1 = V_2 = V_3 = [3T]. Between V_1 and V_2, we add the edges of H with label i; between V_2 and V_3, we add the edges with label j; between V_1 and V_3, we add the edges with label k. Then a triangle in one All-Edges Sparse Triangle instance corresponds to some monochromatic triangle in one of these O(log³ n) instances. This finishes the reduction.

For each of the O(log³ n) instances of All-Edges Monochromatic Triangle, we can use the assumed O(n^{(3+ω)/2−ǫ}) time algorithm for All-Edges Monochromatic Triangle, so we get an Õ(T^{(3+ω)/2−ǫ}) time algorithm for this case.

Running Time
The three cases have running times O(n^2 D) for the low-degree case, a bound governed by rectangular matrix multiplication (in terms of ω, D, T and the parameters λ and κ) for the second case, and Õ(T^{(3+ω)/2−ǫ}) for the third case. We can balance them by setting D = T^{(3+ω)/2−ǫ}/n^2 and choosing T as a suitable polynomial in n depending on ǫ, ω, λ and κ; the overall running time is then Õ(n^{(3+ω)/2−ǫ′}) for some ǫ′ > 0 depending only on ǫ, ω, λ and κ.

Now we consider the case where the values are on edge sets I × J and I × K (Case (b) in Figure 4). The case where the values are on edge sets I × J and J × K (Case (c) in Figure 4) is symmetric. The algorithm is largely the same, with some small changes in the details. Instead of creating graph G′ with vertex set I ∪ J ∪ (K × V), we create graph G′ with vertex set (I′ = I × V) ∪ J ∪ K. Similarly, each vertex in I′ now represents both a vertex i ∈ I and a value. The edges of G′ are added as in the algorithm for Case (a).

Now it suffices to compute whether each edge in I′ × J is in a monochromatic triangle in G′. As before, we partition the edge set by the color of the edges to get instances G′_c. For those vertices i′ ∈ I′ that have degree at most D in some G′_c, we can enumerate all pairs of neighbors of i′ and test whether any pair forms a triangle, in O(deg_{G′_c}(i′)^2) time. Overall, this case takes O(n^2 D) time.

For the remaining vertices, we similarly consider the size of I′. If |I′| = n^{p_c} ≥ T, we use rectangular matrix multiplication. In this case, we compute the product of an n^{p_c} × n matrix and an n × n matrix, which takes O(n^{ω(p_c,1,1)}) time. Note that ω(p_c, 1, 1) = ω(1, p_c, 1), so this case runs in the same time as the corresponding case of the algorithm for Case (a).

Finally, for those G′_c whose I′ part has size at most T, we can still combine them into Õ(1) All-Edges Monochromatic Triangle instances. Thus, the running time is again the same as that of the corresponding case of the algorithm for Case (a).

Since all three cases have the same running times as before, the overall running time remains the same. □
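For concreteness, here is a cubic-time reference implementation of the AE-MonoEq∆ problem itself, with values on the edge sets I × K and K × J as in Case (a). The naming is ours; a brute-force sketch like this is useful only for sanity-checking the reductions above, not for speed.

```python
def ae_mono_eq_triangle(color_ij, color_ik, color_kj, val_ik, val_kj):
    """For every pair (i, j), report whether some k closes a triangle whose
    three edges share one color and whose two valued edges (i,k) and (k,j)
    carry equal values.  All inputs are n x n lists of lists; runs in
    O(n^3) time by exhaustive search."""
    n = len(color_ij)
    result = [[False] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                if (color_ij[i][j] == color_ik[i][k] == color_kj[k][j]
                        and val_ik[i][k] == val_kj[k][j]):
                    result[i][j] = True
                    break       # one witness k suffices for edge (i, j)
    return result
```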
Next, we show reductions to AE-MonoEq∆ from many other intermediate problems.

Theorem 4.6.
If there is a T(n) time algorithm for AE-MonoEq∆, then there is an O(T(n) log n) time algorithm for (min,=)-product.

Proof. We can add a column to matrix A with an entry value larger than all other entries, and add the corresponding row to matrix B with the same value. This value does not affect C_{i,j} when {B_{k,j} | A_{i,k} = B_{k,j}}_k is nonempty. If the computed C_{i,j} equals this large value, we know {B_{k,j} | A_{i,k} = B_{k,j}}_k is empty, and we can then set C_{i,j} back to ∞. Thus, we can assume C_{i,j} is finite for every i, j. We can also easily discretize the entries of A and B, so we can assume all entries are integers between 0 and 2n^2 − 1.

We create a complete tripartite graph G with vertex set I, J, K. To each edge (i,k) ∈ I × K we assign the value A_{i,k}; to each edge (j,k) ∈ J × K we assign the value B_{k,j}. We will use colors on the edges to perform a parallel binary search that finds the (min,=)-product.

Let t = ⌈log(2n^2)⌉. Initially, we know that C_{i,j} ∈ [0, 2^t) for every i, j. We will perform t calls to the T(n) time algorithm for AE-MonoEq∆, and each call halves the possible range of C_{i,j}. For each 0 ≤ ℓ ≤ t, we will compute C̃^ℓ_{i,j} such that C̃^ℓ_{i,j} is a multiple of 2^ℓ and C̃^ℓ_{i,j} ≤ C_{i,j} < C̃^ℓ_{i,j} + 2^ℓ. The initial condition, when ℓ = t, is clearly satisfied by setting C̃^t_{i,j} = 0.

Assume we have computed C̃^{ℓ+1}. We recolor the edges of G to compute C̃^ℓ. We set the color of an edge (i,k) ∈ I × K to ⌊A_{i,k}/2^ℓ⌋, and the color of an edge (j,k) ∈ J × K to ⌊B_{k,j}/2^ℓ⌋. For each edge (i,j), we set its color to ⌊C̃^{ℓ+1}_{i,j}/2^ℓ⌋. Now we run the T(n) time algorithm for AE-MonoEq∆ on this graph G. If edge (i,j) is in a monochromatic equality triangle, then we set C̃^ℓ_{i,j} = C̃^{ℓ+1}_{i,j}; otherwise, we set C̃^ℓ_{i,j} = C̃^{ℓ+1}_{i,j} + 2^ℓ.

We show that the values C̃^ℓ satisfy the conditions, assuming C̃^{ℓ+1} satisfies the conditions.
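Before verifying correctness, the parallel binary search just described can be sketched in Python. The names are ours, and a cubic brute-force oracle stands in for the assumed T(n)-time AE-MonoEq∆ algorithm; each round recolors the three edge sets by the current floor values and halves the candidate range.

```python
INF = float('inf')

def ae_mono_eq(c_ij, c_ik, c_kj, val_ik, val_kj):
    # Cubic-time stand-in for the assumed AE-MonoEq triangle oracle:
    # hit[i][j] iff some k gives three equal colors and equal edge values.
    n = len(c_ij)
    return [[any(c_ij[i][j] == c_ik[i][k] == c_kj[k][j]
                 and val_ik[i][k] == val_kj[k][j] for k in range(n))
             for j in range(n)] for i in range(n)]

def min_eq_product(A, B):
    """C[i][j] = min_k { B[k][j] : A[i][k] == B[k][j] }, INF if no such k,
    using O(log(max entry)) oracle calls (parallel binary search)."""
    n = len(A)
    M = max(max(map(max, A)), max(map(max, B))) + 1
    t = M.bit_length()                       # finite answers lie in [0, 2^t)
    C = [[0] * n for _ in range(n)]          # low end of each candidate range
    for l in range(t - 1, -1, -1):
        # colors are the current values floor-divided by 2^l
        c_ij = [[C[i][j] >> l for j in range(n)] for i in range(n)]
        c_ik = [[A[i][k] >> l for k in range(n)] for i in range(n)]
        c_kj = [[B[k][j] >> l for j in range(n)] for k in range(n)]
        hit = ae_mono_eq(c_ij, c_ik, c_kj, A, B)
        for i in range(n):
            for j in range(n):
                if not hit[i][j]:            # answer sits in the upper half
                    C[i][j] += 1 << l
    # pairs with no matching k drift to 2^t - 1 >= M; report them as INF
    return [[C[i][j] if C[i][j] < M else INF for j in range(n)]
            for i in range(n)]
```

With the stand-in replaced by a genuinely fast AE-MonoEq∆ algorithm, the loop makes only t = O(log n) oracle calls, matching the O(T(n) log n) bound.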
First, since C̃^{ℓ+1}_{i,j} is a multiple of 2^{ℓ+1}, C̃^ℓ_{i,j} is a multiple of 2^ℓ in either case. Suppose an edge (i,j) is in a monochromatic equality triangle (i,k,j). Since the values on edges (i,k) and (k,j) are the same, we have A_{i,k} = B_{k,j}. Also, the colors on edges (i,k), (k,j), (i,j) are the same, so ⌊A_{i,k}/2^ℓ⌋ = ⌊B_{k,j}/2^ℓ⌋ = ⌊C̃^{ℓ+1}_{i,j}/2^ℓ⌋. Since C̃^{ℓ+1}_{i,j} is a multiple of 2^ℓ, we further get that C̃^{ℓ+1}_{i,j} ≤ A_{i,k} = B_{k,j} < C̃^{ℓ+1}_{i,j} + 2^ℓ. We can similarly show that if there exists k such that C̃^{ℓ+1}_{i,j} ≤ A_{i,k} = B_{k,j} < C̃^{ℓ+1}_{i,j} + 2^ℓ, then (i,j) is in a monochromatic equality triangle. Thus, (i,j) is in a monochromatic equality triangle if and only if C̃^{ℓ+1}_{i,j} ≤ C_{i,j} < C̃^{ℓ+1}_{i,j} + 2^ℓ.

Therefore, if (i,j) is in a monochromatic equality triangle, setting C̃^ℓ_{i,j} = C̃^{ℓ+1}_{i,j} satisfies C̃^ℓ_{i,j} ≤ C_{i,j} < C̃^ℓ_{i,j} + 2^ℓ; otherwise, since C̃^{ℓ+1}_{i,j} ≤ C_{i,j} < C̃^{ℓ+1}_{i,j} + 2^{ℓ+1}, we must have C̃^{ℓ+1}_{i,j} + 2^ℓ ≤ C_{i,j} < C̃^{ℓ+1}_{i,j} + 2^ℓ + 2^ℓ, so setting C̃^ℓ_{i,j} = C̃^{ℓ+1}_{i,j} + 2^ℓ satisfies the conditions.

After performing all t rounds, we obtain C̃^0_{i,j} such that C̃^0_{i,j} ≤ C_{i,j} < C̃^0_{i,j} + 2^0, so C̃^0_{i,j} = C_{i,j}. □

Theorem 4.7.
If there is a T(n) time algorithm for AE-MonoEq∆, then there is an O(T(n) log^2 n) time algorithm for (min,≤)-product.

Proof. First, we can add 1 to every entry of B, so the problem becomes a (min,<)-product. Also, we can easily discretize the entries of A and B, so we can assume all entries are integers between 0 and 2n^2 − 1.

For every A_{i,k} < B_{k,j}, there is some integer ℓ such that the bit corresponding to 2^{ℓ−1} in A_{i,k}'s binary representation is 0, the bit corresponding to 2^{ℓ−1} in B_{k,j}'s binary representation is 1, and ⌊A_{i,k}/2^ℓ⌋ = ⌊B_{k,j}/2^ℓ⌋. Our algorithm enumerates this ℓ, and handles different ℓ independently.

Fix some 1 ≤ ℓ ≤ ⌈log(2n^2)⌉. We aim to compute C^ℓ, where C^ℓ_{i,j} is defined as

C^ℓ_{i,j} = min_k {B_{k,j} | A_{i,k} < B_{k,j} ∧ the highest differing bit of A_{i,k} and B_{k,j} is the bit corresponding to 2^{ℓ−1}}.

We create new matrices Ã^ℓ and B̃^ℓ. For each i,k, if the bit corresponding to 2^{ℓ−1} in the binary representation of A_{i,k} is 0, we set Ã^ℓ_{i,k} to ⌊A_{i,k}/2^ℓ⌋; otherwise, we set Ã^ℓ_{i,k} to −1. Similarly, for each k,j, if the bit corresponding to 2^{ℓ−1} in the binary representation of B_{k,j} is 1, we set B̃^ℓ_{k,j} to ⌊B_{k,j}/2^ℓ⌋; otherwise, we set B̃^ℓ_{k,j} to −2. Then we use Theorem 4.6 to compute the (min,=)-product C̃^ℓ of Ã^ℓ and B̃^ℓ in O(T(n) log n) time. If C̃^ℓ_{i,j} < ∞, then clearly C̃^ℓ_{i,j} · 2^ℓ ≤ C^ℓ_{i,j} < C̃^ℓ_{i,j} · 2^ℓ + 2^ℓ; if C̃^ℓ_{i,j} = ∞, then C^ℓ_{i,j} is also ∞.

For every i,j, it then suffices to find min_k {B_{k,j} | Ã^ℓ_{i,k} = B̃^ℓ_{k,j} = C̃^ℓ_{i,j}}. We can use the parallel binary search idea again. Create a complete tripartite graph G with vertex set I ∪ J ∪ K. For each edge (i,j) ∈ I × J, we use C̃^ℓ_{i,j} as its color; for each edge (i,k) ∈ I × K, we use Ã^ℓ_{i,k} as its color; for each edge (k,j) ∈ K × J, we use B̃^ℓ_{k,j} as its color.
The values of the graph will be on the edge sets (I × J) and (J × K). For every 0 ≤ r ≤ ℓ, we will compute an estimate C̃^{ℓ,r}_{i,j} such that C̃^{ℓ,r}_{i,j} is a multiple of 2^r and C̃^{ℓ,r}_{i,j} ≤ C^ℓ_{i,j} < C̃^{ℓ,r}_{i,j} + 2^r. When r = ℓ, we can clearly set C̃^{ℓ,ℓ}_{i,j} = C̃^ℓ_{i,j} · 2^ℓ. Now suppose we have computed C̃^{ℓ,r+1} and want to compute C̃^{ℓ,r}. On the graph G, we set the value of edge (i,j) ∈ I × J to ⌊C̃^{ℓ,r+1}_{i,j}/2^r⌋, and the value of edge (j,k) ∈ J × K to ⌊B_{k,j}/2^r⌋. Then we run the T(n) time AE-MonoEq∆ algorithm on graph G. For every (i,j), if it is in a monochromatic equality triangle, we set C̃^{ℓ,r}_{i,j} = C̃^{ℓ,r+1}_{i,j}; otherwise, we set C̃^{ℓ,r}_{i,j} = C̃^{ℓ,r+1}_{i,j} + 2^r.

Clearly, C̃^{ℓ,r}_{i,j} is a multiple of 2^r, since C̃^{ℓ,r+1}_{i,j} is a multiple of 2^{r+1}. If (i,j) is in a monochromatic equality triangle, then there exists k such that ⌊B_{k,j}/2^r⌋ = ⌊C̃^{ℓ,r+1}_{i,j}/2^r⌋ and Ã^ℓ_{i,k} = B̃^ℓ_{k,j} = C̃^ℓ_{i,j}. Also, because C̃^{ℓ,r+1}_{i,j} is a multiple of 2^r, we must have C̃^{ℓ,r+1}_{i,j} ≤ B_{k,j} < C̃^{ℓ,r+1}_{i,j} + 2^r. Since C^ℓ_{i,j} ≥ C̃^{ℓ,r+1}_{i,j}, we must have C̃^{ℓ,r+1}_{i,j} ≤ C^ℓ_{i,j} < C̃^{ℓ,r+1}_{i,j} + 2^r. Thus, it is valid to set C̃^{ℓ,r}_{i,j} = C̃^{ℓ,r+1}_{i,j} in this case.

If (i,j) is not in a monochromatic equality triangle, then we can similarly show that the smallest B_{k,j} with Ã^ℓ_{i,k} = B̃^ℓ_{k,j} = C̃^ℓ_{i,j} must be at least C̃^{ℓ,r+1}_{i,j} + 2^r. Also, by the guarantee on C̃^{ℓ,r+1}_{i,j}, this smallest B_{k,j} must be smaller than C̃^{ℓ,r+1}_{i,j} + 2^{r+1}. Thus, it is valid to set C̃^{ℓ,r}_{i,j} = C̃^{ℓ,r+1}_{i,j} + 2^r in this case, since C̃^{ℓ,r+1}_{i,j} + 2^r ≤ C^ℓ_{i,j} < C̃^{ℓ,r+1}_{i,j} + 2^r + 2^r.

After we compute C̃^{ℓ,r} for all 0 ≤ r ≤ ℓ, we can simply set C^ℓ = C̃^{ℓ,0}, since the guarantee on C̃^{ℓ,0} is C̃^{ℓ,0}_{i,j} ≤ C^ℓ_{i,j} < C̃^{ℓ,0}_{i,j} + 1.

After we compute C^ℓ for every ℓ, we can compute C_{i,j} = min_ℓ C^ℓ_{i,j}. The overall time complexity is O(T(n) log^2 n), since the number of bit positions ℓ is O(log n), and computing C^ℓ for each ℓ takes O(T(n) log n) time.
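To make the structure of this reduction concrete, here is a minimal Python sketch of the decomposition by the highest differing bit. The naming is ours, and a brute-force helper stands in for the inner parallel binary search described above; the sentinels −1 and −2 are distinct so that discarded entries of the two matrices can never match each other.

```python
INF = float('inf')

def min_over_equal_keys(K1, K2, V):
    # Stand-in for the second parallel binary search in the text:
    # result[i][j] = min of V[k][j] over all k with K1[i][k] == K2[k][j].
    n = len(K1)
    return [[min((V[k][j] for k in range(n) if K1[i][k] == K2[k][j]),
                 default=INF) for j in range(n)] for i in range(n)]

def min_leq_product(A, B):
    """C[i][j] = min_k { B[k][j] : A[i][k] <= B[k][j] }, INF if no such k.
    Comparisons are decomposed by the highest bit where A[i][k] and the
    shifted matrix B[k][j] + 1 differ, as in the reduction above."""
    n = len(A)
    B1 = [[B[k][j] + 1 for j in range(n)] for k in range(n)]  # A <= B iff A < B+1
    t = max(max(map(max, A)), max(map(max, B1))).bit_length()
    C = [[INF] * n for _ in range(n)]
    for l in range(1, t + 1):
        bit = 1 << (l - 1)
        # keys match iff A < B1 and their highest differing bit is bit l-1:
        # that bit is 0 in A, 1 in B1, and the higher prefixes agree
        Ka = [[A[i][k] >> l if not A[i][k] & bit else -1 for k in range(n)]
              for i in range(n)]
        Kb = [[B1[k][j] >> l if B1[k][j] & bit else -2 for j in range(n)]
              for k in range(n)]
        Cl = min_over_equal_keys(Ka, Kb, B)
        for i in range(n):
            for j in range(n):
                C[i][j] = min(C[i][j], Cl[i][j])
    return C
```

Each pair (i,k,j) with A_{i,k} ≤ B_{k,j} is counted for exactly one value of l, so taking the minimum over all l recovers the (min,≤)-product.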
□

Using a similar proof, we can obtain a reduction to AE-MonoEq∆ from (max,≤)-product. Even though (max,≤)-product looks similar to (min,≤)-product, and their best known algorithms both run in Õ(n^{(3+ω)/2}) time [9], we do not know whether they are equivalent.

Proposition 4.8.
If there is a T(n) time algorithm for AE-MonoEq∆, then there is an O(T(n) log^2 n) time algorithm for (max,≤)-product.

Now we consider the Monochromatic Equality Product problem, which can be viewed as Case (a) of AE-MonoEq∆. Note that the proof of Theorem 4.6 only uses Case (a) of AE-MonoEq∆, so the same proof actually shows a reduction from (min,=)-product to Monochromatic Equality Product. In fact, we will show that Monochromatic Equality Product is equivalent to Monochromatic (min,=)-product, which is a stronger version of (min,=)-product.

The best known algorithm for (min,=)-product runs in Õ(n^{(3+ω)/2}) = Õ(n^{2.687}) time, while the best known algorithm for Equality Product runs in Õ(n^{2.6598}) time [12], where the improvement comes from rectangular matrix multiplication. Therefore, we do not know whether Min Equality Product is equivalent to Equality Product. The following theorem shows that if we add the monochromatic constraint to both problems, they become equivalent up to poly-logarithmic factors.

Theorem 4.9.
If there is a T(n) time algorithm for Monochromatic (min,=)-product, then there is an O(T(n)) time algorithm for Monochromatic Equality Product. Also, if there is a T(n) time algorithm for Monochromatic Equality Product, then there is an O(T(n) log n) time algorithm for Monochromatic (min,=)-product.

Proof. The first direction is trivially true, since Monochromatic (min,=)-product computes more information than Monochromatic Equality Product. The second direction is more interesting.

Let A be an algorithm for Monochromatic Equality Product. Suppose we have an instance of Monochromatic (min,=)-product of n × n matrices, with vertex sets I, J, K, edge colors c(·,·), edge values A_{i,k} for (i,k) ∈ I × K and edge values B_{k,j} for (k,j) ∈ K × J. Clearly, we can discretize all the values so that they are integers in [0, 2n^2). Let C_{i,j} be the minimum value of A_{i,k} such that A_{i,k} = B_{k,j} and c(i,k) = c(k,j) = c(i,j). Using A, we can easily decide whether C_{i,j} is ∞ for every pair (i,j). Thus, in the following we can focus on determining the finite entries of C.

We use the parallel binary search idea from before. Let t = ⌈log(2n^2)⌉. For each integer 0 ≤ ℓ ≤ t, we aim to compute C̃^ℓ_{i,j} so that C̃^ℓ_{i,j} is a multiple of 2^ℓ and C̃^ℓ_{i,j} ≤ C_{i,j} < C̃^ℓ_{i,j} + 2^ℓ. C̃^t_{i,j} is easy to compute, since we can just set C̃^t_{i,j} = 0.

Suppose for some 0 ≤ ℓ < t we have computed C̃^{ℓ+1}; we will compute C̃^ℓ. To perform the binary search, we only need to know, for each pair i,j, whether there exists k such that c(i,k) = c(k,j) = c(i,j), A_{i,k} = B_{k,j} and ⌊A_{i,k}/2^ℓ⌋ = ⌊B_{k,j}/2^ℓ⌋ = ⌊C̃^{ℓ+1}_{i,j}/2^ℓ⌋. If such a k exists, then we can set C̃^ℓ_{i,j} = C̃^{ℓ+1}_{i,j}; otherwise, we set C̃^ℓ_{i,j} = C̃^{ℓ+1}_{i,j} + 2^ℓ.

To determine the existence of such k, we use the algorithm for Monochromatic Equality Product.
We can create a Monochromatic Equality Product instance on the same vertex set. For each edge (i,j) ∈ I × J, we set its color to (c(i,j), ⌊C̃^{ℓ+1}_{i,j}/2^ℓ⌋); for (i,k) ∈ I × K, we set its color to (c(i,k), ⌊A_{i,k}/2^ℓ⌋); for (j,k) ∈ J × K, we set its color to (c(j,k), ⌊B_{k,j}/2^ℓ⌋). The values of the Monochromatic Equality Product instance are the same as the values of the original instance. Thus, if we run the T(n) time algorithm A on this Monochromatic Equality Product instance, we can compute C̃^ℓ, and thus continue the binary search.

After we compute C̃^0, we can simply set C_{i,j} = C̃^0_{i,j} whenever C_{i,j} < ∞. Therefore, the algorithm runs in O(T(n) log n) time. □

We further consider Monochromatic (min,≤)-product, which turns out to also have an Õ(n^{(3+ω)/2}) time algorithm.

Proposition 4.10.
If there is a T(n) time algorithm for AE-MonoEq∆, then there is an O(T(n) log^2 n) time algorithm for Monochromatic (min,≤)-product.
The proof of Proposition 4.10 is essentially a combination of the proof of Theorem 4.7 and the ideas used in the proof of Theorem 4.9, so for conciseness we do not describe it in full detail. At a high level, we first reduce the problem to Monochromatic (min,<)-product with entries in {0, ..., 2n^2 − 1}. Then we enumerate the highest differing bit in the binary representations of A_{i,k} and B_{k,j}, and use Theorem 4.9 to find the common binary prefix of A_{i,k} and B_{k,j} above that bit. After we have this common prefix, we use it, together with the original colors of the graph, as the colors of a new graph. Also, we put edge values on I × J and J × K to perform a parallel binary search, similar to what is described in the proof of Theorem 4.7.

Conclusion
[Figure 6 appears here: a diagram of the reductions among AE-Mono∆, AE-MonoEq∆, the monochromatic product problems, (min,≤)-, (max,≤)-, (min,=)- and (max,min)-products, APSP, Exact∆, Zero∆, AE-Sparse∆, Set Intersection and Set Disjointness, annotated with their running times.]

Figure 6: Main reductions in this paper. Single arrows represent normal fine-grained reductions. Double arrows represent that the running times before and after the reduction are the same up to poly-logarithmic factors. Dashed arrows represent reductions that hold only when ω > 2.

References

[1] Amir Abboud, Holger Dell, Karl Bringmann, and Jesper Nederlof. More consequences of falsifying SETH and the orthogonal vectors conjecture. In STOC 2018, pages 445–456. Association for Computing Machinery, 2018.

[2] Amir Abboud and Virginia Vassilevska Williams. Popular conjectures imply strong lower bounds for dynamic problems. In FOCS 2014, pages 434–443. IEEE, 2014.

[3] Noga Alon, Zvi Galil, Oded Margalit, and Moni Naor. Witnesses for boolean matrix multiplication and for shortest paths. In FOCS 1992, pages 417–426. IEEE Computer Society, 1992.

[4] Noga Alon, Raphael Yuster, and Uri Zwick. Finding and counting given length cycles.
Algorithmica, 17(3):209–223, 1997.

[5] Andreas Björklund, Rasmus Pagh, Virginia Vassilevska Williams, and Uri Zwick. Listing triangles. In
International Colloquium on Automata, Languages, and Programming, pages 223–234. Springer, 2014.

[6] Marek Cygan, Marcin Mucha, Karol Wegrzycki, and Michal Wlodarczyk. On problems equivalent to (min,+)-convolution.
ACM Trans. Algorithms, 15(1):14:1–14:25, 2019.

[7] Artur Czumaj, Miroslaw Kowaluk, and Andrzej Lingas. Faster algorithms for finding lowest common ancestors in directed acyclic graphs.
Theor. Comput. Sci., 380(1-2):37–46, 2007.

[8] Ran Duan, Ce Jin, and Hongxun Wu. Faster algorithms for all pairs non-decreasing paths problem. In ICALP 2019, volume 132 of
LIPIcs, pages 48:1–48:13. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2019.

[9] Ran Duan and Seth Pettie. Fast algorithms for (max,min)-matrix multiplication and bottleneck shortest paths. In
Proceedings of the Twentieth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 384–391. SIAM, 2009.

[10] Lech Duraj, Krzysztof Kleiner, Adam Polak, and Virginia Vassilevska Williams. Equivalences between triangle and range query problems. In
Proceedings of the Thirty-First Annual ACM-SIAM Symposium on Discrete Algorithms, pages 30–47. SIAM, 2020.

[11] François Le Gall. Powers of tensors and fast matrix multiplication. In Katsusuke Nabeshima, Kosaku Nagasaka, Franz Winkler, and Ágnes Szántó, editors,
International Symposium on Symbolic and Algebraic Computation, ISSAC '14, Kobe, Japan, July 23-25, 2014, pages 296–303. ACM, 2014.

[12] Omer Gold and Micha Sharir. Dominance product and high-dimensional closest pair under L_∞. In ISAAC 2017. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2017.

[13] Xiaohan Huang and Victor Y. Pan. Fast rectangular matrix multiplication and applications.
Journal of Complexity, 14(2):257–299, 1998.

[14] Tsvi Kopelowitz, Seth Pettie, and Ely Porat. Higher lower bounds from the 3SUM conjecture. In
Proceedings of the Twenty-Seventh Annual ACM-SIAM Symposium on Discrete Algorithms, pages 1272–1287. SIAM, 2016.

[15] Karim Labib, Przemysław Uznański, and Daniel Wolleb-Graf. Hamming distance completeness.
Leibniz International Proceedings in Informatics, LIPIcs, 128, 2019.

[16] François Le Gall. Faster algorithms for rectangular matrix multiplication. In FOCS 2012, pages 514–523. IEEE, 2012.

[17] François Le Gall and Florent Urrutia. Improved rectangular matrix multiplication using powers of the Coppersmith-Winograd tensor. In
Proceedings of the Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2018, New Orleans, LA, USA, January 7-10, 2018, pages 1029–1046, 2018.

[18] Andrea Lincoln, Adam Polak, and Virginia Vassilevska Williams. Monochromatic triangles, intermediate matrix products, and convolutions. In ITCS 2020. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2020.

[19] Grazia Lotti and Francesco Romani. On the asymptotic complexity of rectangular matrix multiplication.
Theoretical Computer Science, 23(2):171–185, 1983.

[20] Jiří Matoušek. Computing dominances in E^n.
Inf. Process. Lett., 38(5):277–278, 1991.

[21] Mihai Pătraşcu. Towards polynomial lower bounds for dynamic problems. In
Proceedings of the Forty-Second ACM Symposium on Theory of Computing, pages 603–610, 2010.

[22] Virginia Vassilevska and Ryan Williams. Finding, minimizing, and counting weighted subgraphs. In Michael Mitzenmacher, editor,
Proceedings of the 41st Annual ACM Symposium on Theory of Computing, STOC 2009, Bethesda, MD, USA, May 31 - June 2, 2009, pages 455–464. ACM, 2009.

[23] Virginia Vassilevska, Ryan Williams, and Raphael Yuster. Finding the smallest H-subgraph in real weighted graphs and related problems. In Michele Bugliesi, Bart Preneel, Vladimiro Sassone, and Ingo Wegener, editors, Automata, Languages and Programming, 33rd International Colloquium, ICALP 2006, Venice, Italy, July 10-14, 2006, Proceedings, Part I, volume 4051 of
Lecture Notes in Computer Science, pages 262–273, 2006.

[24] Virginia Vassilevska, Ryan Williams, and Raphael Yuster. All-pairs bottleneck paths for general graphs in truly sub-cubic time. In David S. Johnson and Uriel Feige, editors,
Proceedings of the 39th Annual ACM Symposium on Theory of Computing, San Diego, California, USA, June 11-13, 2007, pages 585–589. ACM, 2007.

[25] Virginia Vassilevska, Ryan Williams, and Raphael Yuster. Finding heaviest H-subgraphs in real weighted graphs, with applications. ACM Trans. Algorithms, 6(3):44:1–44:23, 2010.

[26] Virginia Vassilevska Williams. Multiplying matrices faster than Coppersmith-Winograd. In Howard J. Karloff and Toniann Pitassi, editors,
Proceedings of the 44th Symposium on Theory of Computing Conference, STOC 2012, New York, NY, USA, May 19 - 22, 2012, pages 887–898. ACM, 2012.

[27] Virginia Vassilevska Williams. Lecture notes for lecture 8 of CS367, October 15, 2015, 2015.

[28] Virginia Vassilevska Williams. Problem 2 on problem set 2 of CS367, October 15, 2015, 2015.

[29] R. Ryan Williams. Faster all-pairs shortest paths via circuit complexity.
SIAM J. Comput., 47(5):1965–1985, 2018.

[30] Ryan Williams. Faster all-pairs shortest paths via circuit complexity. In David B. Shmoys, editor,
Symposium on Theory of Computing, STOC 2014, New York, NY, USA, May 31 - June 03, 2014, pages 664–673. ACM, 2014.

[31] Virginia Vassilevska Williams and R. Ryan Williams. Subcubic equivalences between path, matrix, and triangle problems.
J. ACM, 65(5):27:1–27:38, 2018.

[32] Virginia Vassilevska Williams and Ryan Williams. Subcubic equivalences between path, matrix and triangle problems. In FOCS 2010, pages 645–654. IEEE Computer Society, 2010.

[33] Virginia Vassilevska Williams and Ryan Williams. Finding, minimizing, and counting weighted subgraphs.
SIAM J. Comput., 42(3):831–854, 2013.

[34] Raphael Yuster. Efficient algorithms on sets of permutations, dominance, and real-weighted APSP. In Claire Mathieu, editor,
Proceedings of the Twentieth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2009, New York, NY, USA, January 4-6, 2009, pages 950–957. SIAM, 2009.

[35] Uri Zwick. All pairs shortest paths using bridging sets and rectangular matrix multiplication. J. ACM, 49(3):289–317, 2002.