Algorithms, Reductions and Equivalences for Small Weight Variants of All-Pairs Shortest Paths
Timothy M. Chan    Virginia Vassilevska Williams    Yinzhan Xu
Abstract
All-Pairs Shortest Paths (APSP) is one of the most well studied problems in graph algorithms. This paper studies several variants of APSP in unweighted graphs or graphs with small integer weights. APSP with small integer weights in undirected graphs [Seidel'95, Galil and Margalit'97] has an Õ(n^ω) time algorithm, where ω < 2.373 is the matrix multiplication exponent. APSP in directed graphs with small weights, however, has a much slower running time that would be Ω(n^{2.5}) even if ω = 2 [Zwick'02]. To understand this n^{2.5} bottleneck, we build a web of reductions around directed unweighted APSP. We show that it is fine-grained equivalent to computing a rectangular Min-Plus product for matrices with integer entries; the dimensions and entry size of the matrices depend on the value of ω. As a consequence, we establish an equivalence between APSP in directed unweighted graphs, APSP in directed graphs with small (Õ(1)) integer weights, All-Pairs Longest Paths in DAGs with small weights, c-Red-APSP in undirected graphs with small weights for any c ≥ 2 (computing all-pairs shortest path distances among paths that use at most c red edges), #_{≤c} APSP in directed graphs with small weights (counting the number of shortest paths for each vertex pair, up to c), and approximate APSP with additive error c in directed graphs with small weights, for c ≤ Õ(1).

We also provide fine-grained reductions from directed unweighted APSP to All-Pairs Shortest Lightest Paths (APSLP) in undirected graphs with {0, 1} weights and to #_{mod c} APSP in directed unweighted graphs (computing counts mod c), thus showing that unless the current algorithms for APSP in directed unweighted graphs can be improved substantially, these problems need at least Ω(n^{2.5}) time.

We complement our hardness results with new algorithms.
We improve the known algorithms for APSLP in directed graphs with small integer weights (previously studied by Zwick [STOC'99]) and for approximate APSP with sublinear additive error in directed unweighted graphs (previously studied by Roditty and Shapira [ICALP'08]). Our algorithm for approximate APSP with sublinear additive error is optimal, when viewed as a reduction to Min-Plus product. We also give new algorithms for variants of #APSP (such as #_{≤U} APSP and #_{mod U} APSP for U ≤ n^{Õ(1)}) in unweighted graphs, as well as a near-optimal Õ(n^3)-time algorithm for the original #APSP problem in unweighted graphs. This implies an Õ(n^3)-time algorithm for Betweenness Centrality, improving on the previous Õ(n^4) running time for the problem. Our techniques also lead to a simpler alternative to Shoshan and Zwick's algorithm [FOCS'99] for the original APSP problem in undirected graphs with small integer weights.

All-Pairs Shortest Paths (APSP) is one of the oldest and most studied problems in graph algorithms. The fastest known algorithm for general n-node graphs runs in n^3/2^{Θ(√log n)} time [33]. In unweighted graphs, or graphs with small integer weights, faster algorithms are known.

For APSP in undirected unweighted graphs (u-APSP), Seidel [23] and Galil and Margalit [13, 14] gave Õ(n^ω) time algorithms, where ω ≤ 2.373 is the exponent of matrix multiplication [2, 17, 29]; the latter algorithm works for graphs with small integer weights in [±c] for c = Õ(1). The hidden dependence on c was improved by Shoshan and Zwick [24]. For directed unweighted graphs or graphs with weights in [±c], the fastest APSP algorithm is by Zwick [36], running in O(n^{2.529}) time. This running time is achieved using the best known bounds for rectangular matrix multiplication [18] and would be Ω(n^{2.5}) even if ω = 2.

There is a big discrepancy between the running times for undirected and directed APSP. One might wonder, why is this?
Are directed graphs inherently more difficult for APSP, or is there some special graph structure we can uncover and then use to develop an Õ(n^ω) time algorithm for directed APSP as well? (Note that matrix multiplication seems necessary for APSP, since APSP is known to capture Boolean matrix multiplication.)

The first contribution of this paper is a fine-grained equivalence between directed unweighted APSP (u-APSP) and a certain rectangular version of the Min-Plus product problem. The Min-Plus product of an n × m matrix A by an m × p matrix B is the matrix C with entries C[i, j] = min_{k=1}^{m} (A[i, k] + B[k, j]). Let us denote by M⋆(n₁, n₂, n₃ | M) the problem of computing the Min-Plus product of an n₁ × n₂ matrix by an n₂ × n₃ matrix where both matrices have integer entries in [M]; we also write M⋆(n₁, n₂, n₃ | M) for the best running time for this problem.

Zwick's algorithm [36] for u-APSP can be viewed as making a logarithmic number of calls to the Min-Plus product M⋆(n, n/L, n | L) for all 1 ≤ L ≤ n that are powers of 3/2. The running time of Zwick's algorithm is thus, within polylogarithmic factors, max_L M⋆(n, n/L, n | L).

Let M(a, b, c) denote the running time of the fastest algorithm to multiply an a × b by a b × c matrix over the integers. Let ω(a, b, c) be the smallest real number r such that M(n^a, n^b, n^c) ≤ O(n^{r+ε}) for all ε > 0.

The best known upper bound for the Min-Plus product running time M⋆(n, n/L, n | L) is the minimum of O(n^3/L) (the brute force algorithm) and Õ(L · M(n, n/L, n)) [3]. For L = n^{1−ℓ}, M⋆(n, n/L, n | L) is thus at most Õ(min{n^{2+ℓ}, n^{1−ℓ+ω(1,ℓ,1)}}). Over all ℓ ∈ [0, 1], the runtime is maximized at Õ(n^{2+ρ}), where ρ is such that ω(1, ρ, 1) = 1 + 2ρ. Hence in particular, the running time of Zwick's algorithm is Õ(n^{2+ρ}). This running time has remained unchanged (except for improvements on the bounds on ρ) for almost 20 years. The current best known bound on ρ is ρ < 0.529, and if ω = 2, then ρ = 1/2.

Our first result is that u-APSP is sub-n^{2+ρ} fine-grained equivalent to M⋆(n, n^ρ, n | n^{1−ρ}):

Theorem 1.1. If M⋆(n, n^ρ, n | n^{1−ρ}) is in O(n^{2+ρ−ε}) time for some ε > 0, then u-APSP can also be solved in O(n^{2+ρ−ε′}) time for some ε′ > 0. If u-APSP can be solved in O(n^{2+ρ−ε}) time for some ε > 0, then M⋆(n, n^ρ, n | n^{1−ρ}) can also be solved in O(n^{2+ρ−ε}) time.

The Min-Plus product of two n × n matrices with arbitrary integer entries is known to be equivalent to APSP with arbitrary integer entries [11], so that their running times are the same, up to constant factors. All known algorithms for directed unweighted APSP (including [3, 36] and others) make calls to Min-Plus products of rectangular matrices with polynomially large integer entries (as large as n^{1−ρ} ≈ n^{0.47}). It is completely unclear, however, why a problem on unweighted graphs such as u-APSP should require the computation of Min-Plus products of matrices with such large entries. Theorem 1.1 surprisingly shows that it does. Moreover, it implies that the problems listed in Theorem 1.2 below are all equivalent under (n^{2+ρ}, n^{2+ρ})-fine-grained reductions (see [31] for a survey of fine-grained complexity). In particular, if ω = 2 (or more generally when ω(1, 1/2, 1) = 2), these are all problems that are n^{2.5}-fine-grained equivalent.

In this paper, [±c] = {−c, . . . , c} and [c] = {0, . . . , c}. The Õ notation hides polylogarithmic factors (although conditions of the form c = Õ(1) may be relaxed to c ≤ n^{o(1)} if we allow extra n^{o(1)} factors in the Õ time bounds).

Recall that in the All-Pairs Longest Paths (APLP) problem, we want to output for every pair of vertices s, t the weight of the longest path from s to t. While APLP is NP-hard in general, it is efficiently solvable in DAGs. In the c-Red-APSP problem, for a given graph in which some edges can be colored red, we want to output for every pair of vertices s, t the weight of the shortest path from s to t that uses at most c red edges. For convenience, we call all non-red edges blue.

We use the following convention for problem names: the prefix "u-" is for unweighted graphs; the prefix "[c]-" is for graphs with weights in [c] (similarly for "[±c]-" and for other ranges). Input graphs are directed unless stated otherwise.

Theorem 1.2.
The following problems either all have O(n^{2+ρ−ε}) time algorithms for some ε > 0, or none of them do, assuming that c = Õ(1):

• M⋆(n, n^ρ, n | n^{1−ρ}),
• u-APSP,
• [±c]-APSP for directed graphs without negative cycles,
• u-APLP for DAGs,
• [±c]-APLP for DAGs,
• u-c-Red-APSP for undirected graphs for any 2 ≤ c ≤ Õ(1).

Interestingly, while u-2-Red-APSP in undirected graphs above is equivalent to u-APSP, and hence improving upon its Õ(n^{2+ρ}) runtime would be difficult, we show that u-1-Red-APSP in undirected graphs can be solved in Õ(n^ω) time via a modification of Seidel's algorithm. Hence there is a seeming jump in the complexity of u-c-Red-APSP from c = 1 to c = 2.

Besides the above equivalences, we provide some interesting reductions from u-APSP to other well-studied matrix product and shortest paths problems. Lincoln, Polak and Vassilevska W. [19] reduce u-APSP to some matrix product problems such as All-Edges Monochromatic Triangle and the (min, max)-Product, studied in [26, 28] and [10, 27] respectively. Using the equivalence of u-APSP and M⋆(n, n^ρ, n | n^{1−ρ}), we can reduce u-APSP to another matrix product called the Min Witness Equality Product (MinWitnessEq), where we are given n × n integer matrices A and B, and are required to compute min{k ∈ [n] : A[i, k] = B[k, j]} for every pair (i, j). This can be viewed as a merge of the Min Witness product [9] and the Equality Product [16, 30].

Another natural variant of APSP is the problem of approximating shortest path distances. Zwick [36] presented an Õ(n^ω log M) time algorithm for computing a (1 + ε)-multiplicative approximation of all pairwise distances in a directed graph with integer weights in [M], for any constant ε > 0. This is essentially optimal, since any such approximation algorithm can be used to multiply n × n Boolean matrices.
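To pin down the input/output convention of the MinWitnessEq product defined above, here is the naive cubic-time reference computation; the matrices in the example are made up for illustration, and the interesting (subcubic) algorithms implied by our reductions are of course not this one.

```python
# Naive O(n^3)-time reference for the Min Witness Equality product:
# for every pair (i, j), find the smallest k with A[i][k] == B[k][j]
# (1-indexed, to match the notation min{k in [n] : A[i,k] = B[k,j]}).

def min_witness_eq(A, B):
    n = len(A)
    C = [[None] * n for _ in range(n)]   # None encodes "no witness exists"
    for i in range(n):
        for j in range(n):
            for k in range(n):
                if A[i][k] == B[k][j]:
                    C[i][j] = k + 1      # report the 1-indexed witness
                    break
    return C
```

For example, with A = [[1, 2], [3, 1]] and B = [[1, 3], [2, 2]], the pair (1, 0) has no witness at all, while every other pair has one.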
Recently, there has been renewed interest in the Min Witness product, due to a breakthrough [15] on the All-Pairs LCA in DAGs problem, which was one of the original motivations for studying Min Witness. Bringmann et al. [6] considered the more unusual setting of very large M, where the log M factor is to be avoided.
An arguably better notion of approximation is an additive approximation, i.e., outputting for every u, v an estimate D′[u, v] for the distance D[u, v] such that D[u, v] ≤ D′[u, v] ≤ D[u, v] + E, where E is an error that can depend on u and v.

At ICALP'08, Roditty and Shapira [22] studied the following variant: given an unweighted directed graph and a constant p ∈ [0, 1], compute for all u, v an estimate D′[u, v] with D[u, v] ≤ D′[u, v] ≤ D[u, v] + D[u, v]^p. They gave an algorithm with running time Õ(max_ℓ min{n^3/ℓ, M⋆(n, n/ℓ^{1−p}, n | ℓ^{1−p})}). For p = 0, this matches the time complexity of Zwick's exact algorithm for u-APSP; for p = 1, this matches Zwick's Õ(n^ω)-time algorithm with a constant multiplicative approximation factor; for intermediate values such as p = 0.5, with the current rectangular matrix multiplication bounds [18], the running time falls strictly between these two bounds. We obtain an improved running time:

Theorem 1.3.
For any p ∈ [0, 1], given a directed unweighted graph, one can obtain additive D[u, v]^p approximations to all distances D[u, v] in time Õ(max_ℓ M⋆(n, n/ℓ, n | ℓ^{1−p})).

The improvement over Roditty and Shapira's running time is substantial. For example, for all p above a constant threshold (under the current bounds), the time bound is O(n^{2.373}), i.e., the current matrix multiplication running time, whereas their algorithm only achieves O(n^{2.373}) for p = 1. Our result also answers one of Roditty and Shapira's open questions (whether Õ(n^ω) time is possible for any p < 1), if ω > 2.

The new algorithm is also optimal (ignoring logarithmic factors) in a strong sense, as our reduction technique shows that for all ℓ, M⋆(n, n/ℓ, n | ℓ^{1−p}) can be tightly reduced to the additive D[u, v]^p approximation of APSP. In particular, u-APSP with constant additive error is fine-grained equivalent to exact u-APSP.

The All-Pairs Lightest Shortest Paths (APLSP) problem studied in [7, 35] asks to compute for every pair of vertices s, t the distance from s to t (with respect to the edge weights) and the smallest number of edges over all shortest paths from s to t. Traditional shortest-path algorithms can easily be modified to find the lightest shortest paths, but the faster matrix-multiplication-based algorithms cannot. Our reduction for u-c-Red-APSP can easily be modified to reduce M⋆(n, n^ρ, n | n^{1−ρ}) to {0, 1}-APLSP in undirected graphs, which can be viewed as a conditional lower bound of n^{2+ρ−o(1)} for the latter problem.

Corollary 1.4. If {0, 1}-APLSP in undirected graphs is in O(n^{2+ρ−ε}) time for some ε > 0, then so is M⋆(n, n^ρ, n | n^{1−ρ}).

The fastest known algorithm to date for {0, 1}-APLSP, or more generally [c]-APLSP for c = Õ(1), for directed or undirected graphs is by Zwick [35] from STOC'99; it runs in roughly O(n^{2.7}) time with the current best bounds for rectangular matrix multiplication (the running time would be Õ(n^{8/3}) if ω = 2). Chan [7] (STOC'07) improved this running time to Õ(n^{(3+ω)/2}) ≤ O(n^{2.687}), but only if the weights are positive, i.e., for ([c] − {0})-APLSP (so his result does not apply to {0, 1}-APLSP).

Both Zwick's and Chan's algorithms solve a more general problem, Lex₂-APSP, in which one is given a directed graph where each edge e has two weights w₁(e), w₂(e), and one wants to find, for every pair of vertices u, v, the lexicographic minimum over all u-v paths π of (Σ_{e∈π} w₁(e), Σ_{e∈π} w₂(e)). Then APLSP is Lex₂-APSP when all w₂ weights are 1, and the related All-Pairs Shortest Lightest Paths (APSLP) problem is Lex₂-APSP when all w₁ weights are 1.

To complement the conditional lower bound for APLSP, and hence Lex₂-APSP, we present new algorithms for [c]-Lex₂-APSP for c = Õ(1), both (slightly) improving Chan's running time and also allowing zero weights, something Chan's algorithm could not support.

Theorem 1.5. [c]-Lex₂-APSP can be solved, for any c = Õ(1), in time slightly below O(n^{(3+ω)/2}) under the current rectangular matrix multiplication bounds.

If ω = 2, the above running time would be Õ(n^{2.5}), improving Zwick's previous Õ(n^{8/3}) bound [35] and matching our conditional lower bound of n^{2.5−o(1)}. For undirected graphs with positive weights in [c] − {0}, we further improve the running time under the current matrix multiplication bounds.

We next consider the natural problem, #APSP, of counting the number of shortest paths for every pair of vertices in a graph. This problem needs to be solved, for example, when computing the so-called
Betweenness Centrality (BC) of a vertex. BC is a well-studied measure of vertex importance in social networks. If we let C[s, t] be the number of shortest paths between s and t, and C_v[s, t] be the number of shortest paths between s and t that go through v, then BC(v) = Σ_{s,t≠v} C_v[s, t]/C[s, t], and the BC problem is to compute BC(v) for a given graph and a given node v.

Prior work [4] showed that #APSP and BC in m-edge n-node unweighted graphs can be computed in O(mn) time via a modification of Breadth-First Search (BFS). However, all prior algorithms assumed a model of computation in which adding two integers of arbitrary size takes constant time. In the more realistic word-RAM model (with O(log n)-bit words), these algorithms run in Θ̃(mn^2) time, as there are explicit examples of graphs with m edges (for any m, as a function of n) for which the shortest path counts have Θ(n) bits. In particular, the best running time in terms of n so far has been Õ(n^4).

We provide the first genuinely Õ(n^3) time algorithm for #APSP, and thus Betweenness Centrality, in directed unweighted graphs.
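For reference, the BFS-based counting baseline of [4] can be sketched as follows. Note that Python's unbounded integers silently absorb exactly the Θ(n)-bit-count cost that the word-RAM discussion above makes explicit; this is the O(mn)-arithmetic-operation algorithm, not the Õ(n^3) word-RAM algorithm of this paper.

```python
from collections import deque

# BFS-based shortest-path counting in an unweighted directed graph:
# one BFS per source gives all counts C[s][t] in O(mn) arithmetic
# operations.  Each count can have Theta(n) bits, which Python big
# integers hide; in the word-RAM model those additions are not O(1).

def sssp_counts(adj, s):
    """Distances and numbers of shortest paths from source s."""
    n = len(adj)
    dist = [-1] * n
    cnt = [0] * n
    dist[s], cnt[s] = 0, 1
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if dist[v] == -1:            # v discovered for the first time
                dist[v] = dist[u] + 1
                q.append(v)
            if dist[v] == dist[u] + 1:   # u precedes v on a shortest path
                cnt[v] += cnt[u]
    return dist, cnt
```

Running `sssp_counts` from every source gives #APSP, and plugging the resulting counts into the formula for BC(v) above gives Betweenness Centrality.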
Theorem 1.6. u-#APSP can be solved in Õ(n^3) time by a combinatorial algorithm.

This runtime cannot be improved, since there are graphs for which the output size is Ω(n^3) bits. Since the main difficulty of the #APSP problem comes from the counts being very large, it is interesting to consider variants that mitigate this. Let U ≤ n^{Õ(1)}. Let #_{mod U} APSP be the problem of computing all pairwise counts modulo U. Let #_{≤U} APSP be the problem of computing for every pair of nodes u, v the minimum of their count and U (think of U as a "cap"). Finally, let #_{approx-U} APSP be the problem of computing a (1 + 1/U)-approximation of all pairwise counts (think of keeping the log U most significant bits of each count). We obtain the following result for u-#_{≤U} APSP in directed graphs:
Theorem 1.7. u-#_{≤U} APSP (in directed graphs) can be solved in n^{2+ρ} polylog(U) ≤ n^{2+ρ+o(1)} time. Furthermore, for any U ≥ 2, if u-#_{≤U} APSP can be solved in O(n^{2+ρ−ε}) time for some ε > 0, then so can u-APSP (with randomization). For any 2 ≤ U ≤ Õ(1), the converse is true as well.

Thus, we get a conditionally optimal algorithm for u-#_{≤U} APSP. For 2 ≤ U ≤ Õ(1), the theorem above gives a fine-grained equivalence between u-#_{≤U} APSP and u-APSP; in particular, for U = 2, the problem corresponds to testing, for each pair, whether the shortest path is unique. (For large U, however, it is not a fine-grained equivalence, since the algorithm for u-#_{≤U} APSP does not go through Min-Plus product, but rather uses fast matrix multiplication directly.)

Our algorithm from Theorem 1.7 is based on Zwick's algorithm for u-APSP. We show that one can also modify Seidel's algorithm for u-APSP in undirected graphs to obtain Õ(n^ω) time algorithms for u-#_{≤U} APSP and u-#_{mod U} APSP in undirected graphs.
Theorem 1.8. u-#_{≤U} APSP and u-#_{mod U} APSP in undirected graphs can be solved in Õ(n^ω log U) time.

Brandes presented further practical improvements as well.

One example is an (n/2 + 2)-layered graph where the first n/2 layers have 2 vertices each and the last 2 layers have n/2 vertices each. The i-th layer and the (i + 1)-th layer are connected by a complete bipartite graph for each 1 ≤ i ≤ n/2, while the last two layers are connected by O(m) edges.
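To make the count explosion behind such constructions concrete, here is a hypothetical toy variant that keeps only the doubling layers (omitting the final O(m)-edge layers): with L consecutive 2-vertex layers joined by complete bipartite graphs, every path between the extreme layers is shortest, and the number of such paths doubles per intermediate layer, so the counts reach 2^{L−2} and have Θ(L) bits.

```python
# A toy version of the layered construction: L layers of 2 vertices
# each, consecutive layers joined by K_{2,2}.  All layer-1-to-layer-L
# paths have length L - 1, and their number doubles per layer.

def doubling_layers(L):
    adj = [[] for _ in range(2 * L)]      # vertices 2*i, 2*i+1 form layer i
    for i in range(L - 1):
        for a in (2 * i, 2 * i + 1):
            for b in (2 * i + 2, 2 * i + 3):
                adj[a].append(b)
    return adj

def paths_to_last_layer(L):
    """Count paths from vertex 0 to each last-layer vertex by a
    layer-by-layer DP (every such path is shortest, of length L - 1)."""
    adj = doubling_layers(L)
    cnt = {0: 1}
    for _ in range(L - 1):
        nxt = {}
        for u, c in cnt.items():
            for v in adj[u]:
                nxt[v] = nxt.get(v, 0) + c
        cnt = nxt
    return cnt
```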
We also show that u-#_{approx-U} APSP in undirected graphs can be solved, somewhat surprisingly, by a slight modification of our undirected Lex₂-APSP algorithm (despite the apparent dissimilarity between the two problems), in the same running time up to polylog(U) factors.

Paper Organization and Techniques.
In Section 3, we show the web of reductions around u-APSP, proving Theorem 1.1 and Theorem 1.2, the hardness of additive D[u, v]^p-approximate u-APSP, and the hardness part of Theorem 1.7 for u-#_{≤U} APSP. In Section 4, we give our algorithms for approximating APSP with additive errors, proving Theorem 1.3. In Appendix B, we describe our algorithms for Lex₂-APSP. In Appendix C, we consider various versions of #APSP: in Appendix C.1 we prove Theorem 1.6, in Appendix C.2 we prove Theorem 1.8, in Appendix C.3 we give an algorithm for u-#_{approx-U} APSP, and in Appendix C.4 we give an algorithm for u-#_{≤U} APSP, completing the proof of Theorem 1.7.

For approximating APSP with additive error, we propose an interesting two-phase variant of Zwick's algorithm [36]. Zwick's algorithm computes distance products of n × (n/ℓ) with (n/ℓ) × n matrices for ℓ in a geometric progression. Our idea is to do less during the first phase, computing products of (n/ℓ) × (n/ℓ) with (n/ℓ) × n matrices instead, and to complete the remaining work during a second phase. The observation is that for the APSP approximation problem, we can afford to perform the distance computation in the first phase exactly, and use approximation to speed up the second phase. The resulting approximation algorithm is even simpler than Roditty and Shapira's previous (slower) algorithm [22].

Our Lex₂-APSP algorithm for directed graphs also uses this two-phase approach, but in a more sophisticated way, to control the size of the numbers in the rectangular matrix products. A number of interesting new ideas are needed.

To further illustrate the power of this two-phase approach, we also show (in Appendix F) how the idea leads to an alternative Õ(c n^ω) time algorithm for the standard [c]-APSP problem in undirected graphs, rederiving Shoshan and Zwick's result [24] in an arguably simpler way.
This may be of independent interest (as Shoshan and Zwick's algorithm has complicated details).

Our Lex₂-APSP algorithm for undirected graphs uses small dominating sets for high-degree vertices, an idea of Aingworth et al. [1]. Originally, this idea was used to develop combinatorial algorithms for approximate shortest paths that avoid matrix multiplication. Interestingly, we show that this idea can be combined with (rectangular) matrix multiplication to compute exact Lex₂ shortest paths.

The computation model of all algorithms and reductions in this paper is the word-RAM model with O(log n)-bit words. We let M(n₁, n₂, n₃) denote the best known running time for multiplying an n₁ × n₂ by an n₂ × n₃ matrix over the integers. We use ω(a, b, c) to denote the rectangular matrix multiplication exponent, i.e., the smallest real number z such that M(n^a, n^b, n^c) ≤ O(n^{z+ε}) for all ε > 0. In particular, let ω = ω(1, 1, 1). It is known that ω ∈ [2, 2.373) [2, 17, 29]. The best known bounds for ω(a, b, c) are in [18].

Let M⋆(n₁, n₂, n₃ | ℓ₁, ℓ₂) be the time to compute the Min-Plus product of an n₁ × n₂ matrix A with an n₂ × n₃ matrix B, where all finite entries of A are from [ℓ₁] and all finite entries of B are from [ℓ₂]. Let us also denote M⋆(n₁, n₂, n₃ | ℓ) := M⋆(n₁, n₂, n₃ | ℓ, ℓ). It is known [3] that M⋆(n₁, n₂, n₃ | ℓ) ≤ Õ(ℓ · M(n₁, n₂, n₃)). The algorithm in [3] first replaces each entry e in both matrices A, B by (n + 1)^e, then uses fast rectangular matrix multiplication to compute the product of the new matrices. Since each arithmetic operation takes Õ(ℓ) time, the running time follows.

More generally, let M⋆(n₁, n₂, n₃ | m₁, m₂, m₃ | ℓ₁, ℓ₂) be the time to compute m₃ given entries of the Min-Plus product of an n₁ × n₂ matrix A with an n₂ × n₃ matrix B, where A has at most m₁ finite entries, all from [ℓ₁], and B has at most m₂ finite entries, all from [ℓ₂].

[Figure 1 diagram omitted.] Figure 1: The web of (a subset of) the reductions in this paper, among u-APSP, [±c]-APSP, u-APLP in DAGs, [±c]-APLP in DAGs, undirected u-c-Red-APSP, M⋆(n, n^ρ, n | n^{1−ρ}), u-#_{≤c} APSP, undirected {0, 1}-APLSP, u-#_{mod c} APSP, and MinWitnessEq. All reductions are (n^{2+ρ}, n^{2+ρ})-fine-grained reductions, where ρ is such that ω(1, ρ,
1) = 1 + 2ρ. The problems in the bounding box are sub-n^{2+ρ} equivalent. Here, c = Õ(1), and 2 ≤ c ≤ Õ(1) for u-c-Red-APSP and u-#_{≤c} APSP.

Here we consider the All-Pairs Shortest Paths (APSP) problem in unweighted directed graphs, or more generally in directed graphs with integer weights in [±c] with c = Õ(1) and no negative cycles. Zwick [36] showed that this problem on n-node graphs can be solved in Õ(n^{2+ρ}) time, where ρ is such that ω(1, ρ, 1) = 1 + 2ρ. For the current best bounds on rectangular matrix multiplication [18], ρ is roughly 0.529.

Zwick's algorithm can be viewed as a reduction to rectangular Min-Plus matrix multiplication. The algorithm proceeds in stages, for each ℓ from 0 to log_{3/2}(n^{1−ρ}). In stage ℓ, up to logarithmic factors, one needs to compute the Min-Plus product of two matrices A_ℓ and B_ℓ, where A_ℓ has dimensions n × n/(3/2)^ℓ and B_ℓ has dimensions n/(3/2)^ℓ × n, and both matrices have entries bounded by (3/2)^ℓ. Intuitively, this computes the pairwise distances that are roughly (3/2)^ℓ. After stage log_{3/2}(n^{1−ρ}), the algorithm also runs Dijkstra's algorithm to and from a set S of Õ(n^ρ) randomly sampled nodes and uses Õ(n^{2+ρ}) extra time to complete the computation of the distances by considering, for every u, v ∈ V, min_{s∈S}{D[u, s] + D[s, v]}. This can also be viewed as using the brute-force algorithm to compute the Min-Plus products when (3/2)^ℓ ≥ n^{1−ρ}.

The total running time is within logarithmic factors of

    n^{2+ρ} + Σ_{ℓ=0}^{log_{3/2}(n^{1−ρ})} M⋆(n, n/(3/2)^ℓ, n | (3/2)^ℓ),

where M⋆(n₁, n₂, n₃ | M) is the Min-Plus product running time for matrices with entries in {0, . . . , M} and dimensions n₁ × n₂ by n₂ × n₃.

If there are negative weights, one also needs to run single-source shortest paths (SSSP) from a node, as in Johnson's algorithm, and then reweight the edges so that they are nonnegative. SSSP can be solved in Õ(m√n log c) = Õ(n^{2.5}) time [25].

With the known bounds for Min-Plus product, M⋆(n, n^τ, n | M) ≤ Õ(M · n^{ω(1,τ,1)}), and the running time of Zwick's algorithm becomes Õ(n^{2+ρ} + n^{1−ρ+ω(1,ρ,1)}), which is Õ(n^{2+ρ}) when ω(1, ρ,
1) = 1 + 2ρ. If ω = 2, then ρ = 1/2 and the running time of Zwick's algorithm becomes Õ(n^{2.5}). This running time is a seeming barrier for the APSP problem in directed graphs.

In Appendix A.1 we prove the following technical theorem, which rephrases Zwick's algorithm [36] as a reduction.

Theorem 3.1.
Let ρ be the solution to ω(1, ρ, 1) = 1 + 2ρ. If the Min-Plus product of an n × n^ρ matrix by an n^ρ × n matrix, where both matrices have integer entries bounded by n^{1−ρ} (denoted M⋆(n, n^ρ, n | n^{1−ρ})), can be computed in O(n^{2+ρ−ε}) time for some ε > 0, then APSP in directed n-node graphs with integer edge weights in [±c] for c = Õ(1) can be solved in O(n^{2+ρ−ε′}) time for some ε′ > 0.

If ω = 2, the above theorem statement becomes: if the Min-Plus product of an n × √n matrix by a √n × n matrix, where both matrices have integer entries bounded by √n, can be computed in O(n^{2.5−δ}) time for some δ > 0, then APSP in directed n-node graphs with integer edge weights in [±c] for c = Õ(1) can be solved in O(n^{2.5−δ′}) time for some δ′ > 0.

We will show a reduction in the reverse direction as well, showing that rectangular Min-Plus product with suitably bounded entries can be reduced back to unweighted directed APSP.

Theorem 3.2.
For any fixed k ∈ (0, 1), M⋆(n, n^k, n | n^{1−k}) can be reduced in O(n^2) time to APSP in a directed unweighted graph with O(n) vertices.

A consequence of Theorems 3.1 and 3.2, and the fact that u-APSP is a special case of [±c]-APSP for directed graphs without negative cycles, is the following equivalence.

Corollary 3.3.
Let ρ be such that ω(1, ρ, 1) = 1 + 2ρ. Then u-APSP, [±c]-APSP for directed graphs without negative cycles, and M⋆(n, n^ρ, n | n^{1−ρ}) are sub-n^{2+ρ} fine-grained equivalent for c = Õ(1).

In particular, if ω = 2, APSP in directed unweighted graphs is sub-n^{2.5} fine-grained equivalent to the Min-Plus product of an n × √n matrix by a √n × n matrix where both matrices have integer entries bounded by √n.

Proof of Theorem 3.2.
Let A be an n × n^k matrix and let B be an n^k × n matrix, both with entries in {1, . . . , n^{1−k}}. We create a directed graph as follows. Let I be a set of n nodes representing the rows of A, and let J be a set of n nodes representing the columns of B. For every p ∈ [n^k] corresponding to a column of A (or row of B), create a path of 2n^{1−k} + 1 nodes:

    X(p) := x_{p,n^{1−k}} → x_{p,n^{1−k}−1} → . . . → x_{p,0} → y_{p,1} → y_{p,2} → . . . → y_{p,n^{1−k}}.

For every i ∈ [n] and p ∈ [n^k], consider t = A[i, p] ≤ n^{1−k} and add an edge from i ∈ I to x_{p,t}. Similarly, for every j ∈ [n] and p ∈ [n^k], consider t′ = B[p, j] ≤ n^{1−k} and add an edge from y_{p,t′} to j ∈ J.

Now, consider some i ∈ [n], p ∈ [n^k], j ∈ [n] and the sum A[i, p] + B[p, j]. The path consisting of the edge (i, x_{p,A[i,p]}), the subpath of X(p) between x_{p,A[i,p]} and y_{p,B[p,j]}, and the edge (y_{p,B[p,j]}, j) has length exactly A[i, p] + B[p, j] + 2. Moreover, any path from i to j is of this form. Thus, the shortest path from i ∈ I to j ∈ J in the created graph has length exactly min_p{A[i, p] + B[p, j]} + 2, and since the additive constant 2 is the same for every pair, computing APSP in the directed unweighted graph we have created computes the Min-Plus product of A and B. The number of vertices in the graph is O(n^k · n^{1−k}) = O(n). □

Figure 2: Sketch of the construction in the proof of Theorem 3.2. For each vertex i and path p, we add an edge from i to the vertex on path p whose distance to the middle point x_{p,0} is A[i, p]. For each path p and vertex j, we add an edge to j from the vertex on the path whose distance from the middle point x_{p,0} is B[p, j].

One consequence of Corollary 3.3 is that u-APSP and computing the predecessor matrix for unweighted directed APSP are also sub-n^{2+ρ} fine-grained equivalent.
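The construction in the proof of Theorem 3.2 is easy to check on tiny instances. The sketch below (with made-up matrices, purely for illustration) builds the unweighted graph and recovers the Min-Plus product from BFS distances; the constant additive offset of 2 comes from the two hook edges.

```python
from collections import deque

# Toy instance of the Theorem 3.2 reduction: the Min-Plus product of an
# n x m matrix A by an m x n matrix B (entries in {1, ..., M}) via APSP
# in an unweighted directed graph.  Vertex ('i', i) hooks into path p at
# x_{p, A[i][p]}; the path descends to its middle x_{p,0} and ascends the
# y-side; vertex ('j', j) hooks out at y_{p, B[p][j]}.

def build_graph(A, B, M):
    n, m = len(A), len(B)
    adj = {}
    def edge(u, v):
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, [])
    for p in range(m):
        for t in range(M, 0, -1):                 # x_{p,t} -> x_{p,t-1}
            edge(('x', p, t), ('x', p, t - 1))
        edge(('x', p, 0), ('y', p, 1))            # middle -> y-side
        for t in range(1, M):                     # y_{p,t} -> y_{p,t+1}
            edge(('y', p, t), ('y', p, t + 1))
    for i in range(n):
        for p in range(m):
            edge(('i', i), ('x', p, A[i][p]))     # hook edge into path p
    for p in range(m):
        for j in range(n):
            edge(('y', p, B[p][j]), ('j', j))     # hook edge out of path p
    return adj

def min_plus_via_apsp(A, B, M):
    n = len(A)
    adj = build_graph(A, B, M)
    C = [[None] * n for _ in range(n)]
    for i in range(n):                            # one BFS per row of A
        dist = {('i', i): 0}
        q = deque([('i', i)])
        while q:
            u = q.popleft()
            for v in adj.get(u, []):
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        for j in range(n):
            C[i][j] = dist[('j', j)] - 2          # subtract the 2 hook edges
    return C
```

Each i-to-j walk has length A[i][p] + B[p][j] + 2 for some p, so the BFS distance minus 2 is exactly the Min-Plus entry.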
It was known that Zwick's algorithm [36] can compute the predecessor matrix for unweighted directed APSP, which can be viewed as a sub-n^{2+ρ} time reduction from computing the predecessor matrix to M⋆(n, n^ρ, n | n^{1−ρ}). Conversely, if we can compute the predecessor matrix for the graph constructed in the above proof, we would know which path X(p) the shortest path from i to j uses, which in turn solves M⋆(n, n^ρ, n | n^{1−ρ}). Thus, computing the predecessor matrix for unweighted directed APSP is sub-n^{2+ρ} fine-grained equivalent to M⋆(n, n^ρ, n | n^{1−ρ}), and thus also equivalent to u-APSP.

Zwick's algorithm is general enough to apply to some variants of APSP. One example is the All-Pairs Longest Paths (APLP) problem in DAGs. To compute APLP in a DAG, we first negate the weight of every edge; since the graph is acyclic, this creates no negative cycles, and the problem becomes APSP, to which we can directly apply Zwick's algorithm. Therefore, Zwick's algorithm gives reductions from u-APLP and [±c]-APLP in DAGs to M⋆(n, n^ρ, n | n^{1−ρ}). Perhaps more surprisingly, the reduction in the other direction also holds. Therefore, APLP in DAGs and APSP in graphs with weights bounded by Õ(1) are sub-n^{2+ρ} equivalent.

Theorem 3.4.
Let ρ be such that ω(1, ρ, 1) = 1 + 2ρ. Then u-APLP in DAGs, [±c]-APLP in DAGs and M⋆(n, n^ρ, n | n^{1−ρ}) are sub-n^{2+ρ} fine-grained equivalent.

The proof of Theorem 3.4 follows the same approach and appears in Appendix A.2.

All problems shown equivalent to u-APSP above are problems on directed graphs. One natural question is whether some problems on undirected graphs are also in this equivalence class, or whether we can show that some undirected graph problems require n^{2+ρ−o(1)} time under the assumption that the problems in this equivalence class require n^{2+ρ−o(1)} time. To answer these questions, we first consider the u-c-Red-APSP problem.
Theorem 3.5.
Let ρ be such that ω(1, ρ, 1) = 1 + 2ρ. Then u-c-Red-APSP for 2 ≤ c = Õ(1) and M⋆(n, n^ρ, n | n^{1−ρ}) are sub-n^{2+ρ} fine-grained equivalent.

The proof of Theorem 3.5 uses a similar graph construction and is in Appendix A.3.

By slightly modifying the proof of Theorem 3.5, we can show conditional hardness for APLSP on undirected graphs where the edge weights are in {0, 1}. The proof is in Appendix A.3.

Corollary 3.6.
Let ρ be such that ω(1, ρ, 1) = 1 + 2ρ. Suppose M⋆(n, n^ρ, n | n^{1−ρ}) requires n^{2+ρ−o(1)} time. Then APLSP on undirected graphs where the edge weights can be {0, 1} also requires n^{2+ρ−o(1)} time.

A similar modification yields conditional hardness for Vertex-Weighted APSP in undirected graphs, where the vertex weights may be large. (The current best algorithms for Vertex-Weighted APSP in directed graphs [7, 34] run in about O(n^{2.85}) time; the bound is Õ(n^{8/3}) if ω = 2. No better algorithms were known for undirected graphs, which our conditional lower bound attempts to explain.) The proof is in Appendix A.3.

Corollary 3.7.
Let ρ be such that ω(1, ρ, 1) = 1 + 2ρ. Suppose M⋆(n, n^ρ, n | n^{1−ρ}) requires n^{2+ρ−o(1)} time. Then vertex-weighted APSP on undirected graphs where the vertex weights are in [O(n^{1−ρ})] also requires n^{2+ρ−o(1)} time.

The conditional hardness of u-#_{mod U} APSP and u-#_{≤U} APSP for any U ≥ 2 can be proved by combining our graph construction with randomized techniques for a unique variant of Min-Plus product; see Appendix A.4.

Theorem 3.8.
Let ρ be such that ω (1 , ρ,
1) = 1 + 2ρ. Suppose M⋆(n, n^ρ, n | n^{1−ρ}) requires n^{2+ρ−o(1)} time (with randomization). Then u-mod U APSP and u-≤U APSP for any U ≥ 2 require n^{2+ρ−o(1)} time. In Section 4, we will give an algorithm for approximating APSP with sublinear additive errors. Using the same technique as our reductions from Rectangular Min-Plus product to APSP problems, we can show a conditional lower bound for this problem.
Theorem 3.9.
Given a directed unweighted graph G = (V, E) with n vertices and a function f > 0 where ℓ/f(ℓ) is nondecreasing. Suppose we can approximate the shortest-path distance D[u, v] with additive error f(D[u, v]), for all u, v ∈ V, in T(n) time. Then max_{1≤ℓ≤n} M⋆(n, n/ℓ, n | ℓ/f(ℓ)) ≤ O(T(n)). Proof. Fix any 1 ≤ ℓ ≤ n. First, note that M⋆(n, n/ℓ, n | ℓ/f(ℓ)) = Θ(M⋆(n, n/ℓ, n | ℓ/(Cf(ℓ)))) for any constant C. Here, we take C = 12 to be a large enough constant. Suppose we are given an n × n/ℓ matrix A and an n/ℓ × n matrix B, whose entries are positive integers bounded by ℓ/(12f(ℓ)), and we want to compute their Min-Plus product A ⋆ B. We use a similar reduction as the one in the proof of Theorem 3.2, but stretching the length of the middle paths. Specifically, we create a vertex set I of size n, a vertex set J of size n, and n/ℓ paths of the form X(p) := x_{p,ℓ/(12f(ℓ))} · · · x_{p,0} y_{p,0} · · · y_{p,ℓ/(12f(ℓ))}. From x_{p,i} to x_{p,i−1} and from y_{p,j} to y_{p,j+1}, we embed paths of length 6f(ℓ); from x_{p,0} to y_{p,0}, we embed a path of length ℓ − 2. Similar to previous reductions, for every i ∈ [n] = I and p ∈ [n/ℓ], we add an edge from i to x_{p,A[i,p]}; for every j ∈ [n] = J and p ∈ [n/ℓ], we add an edge from y_{p,B[p,j]} to j. Then the distance from i ∈ I to j ∈ J in this graph equals ℓ + 6f(ℓ)(A ⋆ B)[i, j]. Since 2 ≤ (A ⋆ B)[i, j] ≤ ℓ/(6f(ℓ)), we must have ℓ ≤ ℓ + 6f(ℓ)(A ⋆ B)[i, j] ≤ 2ℓ. Since ℓ/f(ℓ) is nondecreasing, we must have f(tℓ) ≤ tf(ℓ) for any t ≥ 1, and thus f(ℓ + 6f(ℓ)(A ⋆ B)[i, j]) ≤ 2f(ℓ). Therefore, an approximation with additive error f(ℓ + 6f(ℓ)(A ⋆ B)[i, j]) can determine that the distance from i ∈ I to j ∈ J is in ℓ + 6f(ℓ)(A ⋆ B)[i, j] ± 2f(ℓ), from which we can compute (A ⋆ B)[i, j] easily since (A ⋆ B)[i, j] must be an integer.
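To make the construction concrete, here is a small executable Python sketch (names and toy parameters are ours) of the reduction graph, assuming the convention that consecutive chain vertices are joined by directed paths of 6f(ℓ) unit edges, the two chain halves are joined by a path of length ℓ − 2, and matrix entries lie in [1, ℓ/(12f(ℓ))]; BFS stands in for the approximate-APSP algorithm.

```python
from collections import deque

def add_path(adj, u, v, length, tag):
    """Directed path of `length` unit edges from u to v via fresh vertices."""
    prev = u
    for s in range(1, length):
        mid = (tag, s)
        adj.setdefault(prev, []).append(mid)
        prev = mid
    adj.setdefault(prev, []).append(v)

def build_graph(A, B, ell, f):
    """Reduction graph: A is n x m, B is m x n, entries in [1, ell // (12*f)]."""
    n, m = len(A), len(B)
    top = ell // (12 * f)  # largest chain index
    adj = {}
    for p in range(m):
        for i in range(top, 0, -1):   # x_{p,i} -> x_{p,i-1}: 6f edges each
            add_path(adj, ('x', p, i), ('x', p, i - 1), 6 * f, ('xs', p, i))
        add_path(adj, ('x', p, 0), ('y', p, 0), ell - 2, ('mid', p))
        for j in range(top):          # y_{p,j} -> y_{p,j+1}: 6f edges each
            add_path(adj, ('y', p, j), ('y', p, j + 1), 6 * f, ('ys', p, j))
    for i in range(n):
        for p in range(m):            # edge i -> x_{p, A[i][p]}
            adj.setdefault(('I', i), []).append(('x', p, A[i][p]))
    for j in range(n):
        for p in range(m):            # edge y_{p, B[p][j]} -> j
            adj.setdefault(('y', p, B[p][j]), []).append(('J', j))
    return adj

def bfs(adj, src):
    """Unweighted single-source distances from src."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj.get(u, []):
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist
```

On any toy instance one can check that the distance from ('I', i) to ('J', j) equals ℓ + 6f(ℓ) · min_p (A[i][p] + B[p][j]), as claimed in the proof.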
□ Finally, we give a reduction from u-APSP to Min Witness Equality, where we are given n × n integer matrices A and B, and are required to compute min{k ∈ [n] : A[i, k] = B[k, j]} for every pair (i, j). Reductions from u-APSP to matrix product problems were considered by Lincoln et al. [19], who showed reductions from u-APSP to the All-Edges Monochromatic Triangle problem and the (min, max)-product problem, but their techniques do not seem to apply to Min Witness Equality. The proof of the following theorem is deferred to Appendix A.5. Theorem 3.10. Let ρ be such that ω(1, ρ,
1) = 1 + 2ρ. Suppose M⋆(n, n^ρ, n | n^{1−ρ}) requires n^{2+ρ−o(1)} time. Then Min Witness Equality requires n^{2+ρ−o(1)} time. In this section, we give an algorithm for approximate APSP with additive errors in directed unweighted graphs, matching the lower bound that we just proved in Theorem 3.9 (ignoring logarithmic factors). Namely, for additive error f(ℓ) = ℓ^p, our algorithm achieves running time Õ(max_ℓ M⋆(n, n/ℓ, n | ℓ^{1−p})), which improves Roditty and Shapira's previous algorithm [22] with running time Õ(max_ℓ min{n^3/ℓ, M⋆(n, n/ℓ^{1−p}, n | ℓ^{1−p})}). Let D[u, v] denote the shortest-path distance from u to v. Overview.
The new algorithm is a variation of Zwick's exact u-APSP algorithm [36], and is actually simpler than Roditty and Shapira's algorithm. The idea is to compute as many of the shortest-path distances exactly as we can in Õ(n^ω) time in an initial phase. In the second phase, we apply rectangular matrix multiplication to submatrices computed from the first phase, where entries are approximated by rounding and rescaling. Preliminaries.
For every ℓ that is a power of 3/2, let R_ℓ ⊆ V be a subset of Õ(n/ℓ) vertices that hits all shortest paths of length ℓ/2 [36]. (For example, a random sample works with high probability.) We may assume that R_{(3/2)^i} ⊇ R_{(3/2)^{i+1}} (because otherwise, we can add R_{(3/2)^j} to R_{(3/2)^i} for all j > i and the size bound would still hold). For subsets S_1, S_2 ⊆ V, let D(S_1, S_2) denote the submatrix of D containing the entries for (u, v) ∈ S_1 × S_2. Phase 1.
We first solve the following subproblem: compute D[u, v] (exactly) for all (u, v) ∈ R_ℓ × V with D[u, v] ≤ ℓ, and similarly for all (u, v) ∈ V × R_ℓ with D[u, v] ≤ ℓ. Suppose we have already computed D[u, v] for all (u, v) ∈ R_{2ℓ/3} × V with D[u, v] ≤ 2ℓ/3, and similarly for all (u, v) ∈ V × R_{2ℓ/3} with D[u, v] ≤ 2ℓ/3. We take the Min-Plus product D(R_ℓ, R_{2ℓ/3}) ⋆ D(R_{2ℓ/3}, V). For each (u, v) ∈ R_ℓ × V, if its output entry is smaller than the current value of D[u, v], we reset D[u, v] to the smaller value. Similarly, we take the Min-Plus product D(V, R_{2ℓ/3}) ⋆ D(R_{2ℓ/3}, R_ℓ). For each (u, v) ∈ V × R_ℓ, if its output entry is smaller than the current value of D[u, v], we reset D[u, v] to the smaller value. We reset all entries greater than ℓ to ∞. To justify correctness, observe that for any shortest path π of length between 2ℓ/3 and ℓ, the middle (2ℓ/3)/2 = ℓ/3 vertices must contain a vertex of R_{2ℓ/3}, which splits π into two subpaths each of length at most ℓ/2 + ℓ/6 ≤ 2ℓ/3. We do the above for all ℓ's that are powers of 3/2. The total cost is Õ(max_ℓ M⋆(n/ℓ, n/ℓ, n | ℓ)) ≤ Õ(max_ℓ ℓ · M(n/ℓ, n/ℓ, n)) ≤ Õ(max_ℓ ℓ^2 (n/ℓ)^ω) = Õ(n^ω). Phase 2.
Next we approximate all shortest-path distances D[u, v] where D[u, v] is between 2ℓ/3 and ℓ, with additive error O(f(ℓ)) for a given function f, as follows: We compute the Min-Plus product D(V, R_{2ℓ/3}) ⋆ D(R_{2ℓ/3}, V), keeping only entries bounded by O(ℓ). As we allow additive error O(f(ℓ)), we round entries to multiples of f(ℓ). This takes Õ(M⋆(n, n/ℓ, n | ℓ/f(ℓ))) time. To justify correctness, observe as before that in any shortest path π of length between 2ℓ/3 and ℓ, some vertex in R_{2ℓ/3} splits the path into two subpaths of length at most 2ℓ/3. We do the above for all ℓ's that are powers of 3/2. The total cost is Õ(max_ℓ M⋆(n, n/ℓ, n | ℓ/f(ℓ))). Standard techniques for generating witnesses for matrix products can be applied to recover the shortest paths (e.g., see [12, 36]).
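The rounding-and-rescaling step can be checked with a tiny self-contained Python sketch (brute-force products stand in for the fast rectangular ones): rounding both operands down to multiples of r = f(ℓ) before the product shrinks the entry range from O(ℓ) to O(ℓ/f(ℓ)) while changing each output entry by less than 2r.

```python
def minplus(A, B):
    """Brute-force Min-Plus product of an n1 x n2 and an n2 x n3 matrix."""
    return [[min(a + b for a, b in zip(row, col))
             for col in zip(*B)] for row in A]

def approx_minplus(A, B, r):
    """Min-Plus product with additive error < 2r: round entries down to
    multiples of r, multiply the small rescaled entries, then scale back."""
    As = [[a // r for a in row] for row in A]
    Bs = [[b // r for b in row] for row in B]
    C = minplus(As, Bs)
    return [[c * r for c in row] for row in C]
```

For every entry, exact − 2r < approximate ≤ exact, which is exactly the guarantee needed for the additive-error analysis above.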
Theorem 4.1.
Given a directed unweighted graph G = (V, E) with n vertices and a function f where ℓ/f(ℓ) is nondecreasing, we can approximate the shortest-path distance D[u, v] with additive error O(f(D[u, v])) for all u, v ∈ V, in Õ(max_ℓ M⋆(n, n/ℓ, n | ℓ/f(ℓ))) time. Remark.
For f(ℓ) = ℓ^p, we can upper-bound the running time by Õ(max_ℓ M⋆(n, n/ℓ, n | ℓ^{1−p})) ≤ Õ(L^{1−p} · M(n, n/L, n) + n^3/L) ≤ Õ(L^{1−p}(n^{2+o(1)} + n^ω/L^{(ω−2)/(1−α)}) + n^3/L) for any choice of L, where α is the rectangular matrix multiplication exponent (satisfying ω(1, 1, α) = 2). For example, we can set L = n^{3−ω}, and for p > 1 − min{(ω−2)/(1−α), (ω−2)/(3−ω)}, get optimal Õ(n^ω) running time. In fact, with the current rectangular matrix multiplication bounds, Õ(n^ω) time is already achievable for some constant p < 1. Roditty and Shapira [22] specifically asked whether there exists p < 1 for which Õ(n^ω) time is possible; we have thus answered their question affirmatively if ω > 2. Remark.
For directed graphs with weights from [c], the running time is Õ(c · n^ω + max_ℓ M⋆(n, n/ℓ, n | cℓ/f(ℓ))). References [1] Donald Aingworth, Chandra Chekuri, Piotr Indyk, and Rajeev Motwani. Fast estimation of diameter and shortest paths (without matrix multiplication).
SIAM J. Comput., 28(4):1167–1181, 1999. [2] Josh Alman and Virginia Vassilevska Williams. A refined laser method and faster matrix multiplication. In
Proceedings of the 32nd Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages to appear, 2021. [3] Noga Alon, Zvi Galil, and Oded Margalit. On the exponent of the all pairs shortest path problem.
J. Comput. Syst. Sci., 54(2):255–262, 1997. [4] Ulrik Brandes. A faster algorithm for betweenness centrality.
The Journal of Mathematical Sociology, 25(2):163–177, 2001. [5] Ulrik Brandes. On variants of shortest-path betweenness centrality and their generic computation.
Soc. Networks, 30(2):136–145, 2008. [6] Karl Bringmann, Marvin Künnemann, and Karol Węgrzycki. Approximating APSP without scaling: equivalence of approximate min-plus and exact min-max. In
Proceedings of the 51st Annual ACM Symposium on Theory of Computing (STOC), pages 943–954, 2019. [7] Timothy M. Chan. More algorithms for all-pairs shortest paths in weighted graphs.
SIAM J. Comput., 39(5):2075–2089, 2010. URL: https://doi.org/10.1137/08071990X, doi:10.1137/08071990X. [8] Timothy M. Chan. All-pairs shortest paths for unweighted undirected graphs in o(mn) time. ACM Trans. Algorithms, 8(4):34:1–34:17, 2012. URL: https://doi.org/10.1145/2344422.2344424, doi:10.1145/2344422.2344424. [9] Artur Czumaj, Miroslaw Kowaluk, and Andrzej Lingas. Faster algorithms for finding lowest common ancestors in directed acyclic graphs. Theor. Comput. Sci., 380(1-2):37–46, 2007. [10] Ran Duan and Seth Pettie. Fast algorithms for (max, min)-matrix multiplication and bottleneck shortest paths. In
Proceedings of the 20th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 384–391, 2009. [11] Michael J. Fischer and Albert R. Meyer. Boolean matrix multiplication and transitive closure. In
Proceedings of the 12th Annual Symposium on Switching and Automata Theory (SWAT) , pages 129–131, 1971.[12] Zvi Galil and Oded Margalit. Witnesses for Boolean matrix multiplication and for transitive closure.
J. Complex. , 9(2):201–221, 1993.[13] Zvi Galil and Oded Margalit. All pairs shortest distances for graphs with small integer length edges.
Inf. Comput., 134(2):103–139, 1997. URL: https://doi.org/10.1006/inco.1997.2620, doi:10.1006/inco.1997.2620. [14] Zvi Galil and Oded Margalit. All pairs shortest paths for graphs with small integer length edges. J. Comput. Syst. Sci., 54(2):243–254, 1997. URL: https://doi.org/10.1006/jcss.1997.1385, doi:10.1006/jcss.1997.1385. [15] Fabrizio Grandoni, Giuseppe F. Italiano, Aleksander Łukasiewicz, Nikos Parotsidis, and Przemysław Uznański. All-pairs LCA in DAGs: breaking through the O(n^{2.5286}) barrier. In Proceedings of the 2021 ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 273–289. SIAM, 2021. [16] Karim Labib, Przemysław Uznański, and Daniel Wolleb-Graf. Hamming distance completeness. In
Proceedings of the 30th Annual Symposium on Combinatorial Pattern Matching (CPM), pages 14:1–14:17, 2019. [17] François Le Gall. Powers of tensors and fast matrix multiplication. In
Proceedings of the 39th International Symposium on Symbolic and Algebraic Computation (ISSAC), pages 296–303, 2014. [18] François Le Gall and Florent Urrutia. Improved rectangular matrix multiplication using powers of the Coppersmith-Winograd tensor. In
Proceedings of the 29th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 1029–1046, 2018. [19] Andrea Lincoln, Adam Polak, and Virginia Vassilevska Williams. Monochromatic triangles, intermediate matrix products, and convolutions. In
Proceedings of the 11th Innovations in Theoretical Computer Science Conference (ITCS), pages 53:1–53:18, 2020. [20] Jiří Matoušek. Computing dominances in E^n. Inf. Process. Lett., 38(5):277–278, 1991. [21] Ely Porat, Eduard Shahbazian, and Roei Tov. New parameterized algorithms for APSP in directed graphs. In
Proc. 24th Annual European Symposium on Algorithms (ESA) , pages 72:1–72:13, 2016. doi:10.4230/LIPIcs.ESA.2016.72 .[22] Liam Roditty and Asaf Shapira. All-pairs shortest paths with a sublinear additive error. In
Proceedings of the 35th International Colloquium on Automata, Languages and Programming (ICALP), Part I, volume 5125 of
Lecture Notes in Computer Science , pages 622–633. Springer, 2008.[23] Raimund Seidel. On the all-pairs-shortest-path problem in unweighted undirected graphs.
J. Comput. Syst. Sci., 51(3):400–403, 1995. [24] Avi Shoshan and Uri Zwick. All pairs shortest paths in undirected graphs with integer weights. In
Proceedings of the 40th Annual Symposium on Foundations of Computer Science (FOCS), pages 605–614, 1999. [25] Jan van den Brand, Yin-Tat Lee, Danupon Nanongkai, Richard Peng, Thatchaphol Saranurak, Aaron Sidford, Zhao Song, and Di Wang. Bipartite matching in nearly-linear time on moderately dense graphs. In Proceedings of the 61st Annual IEEE Symposium on Foundations of Computer Science (FOCS), pages 919–930. IEEE, 2020. [26] Virginia Vassilevska, Ryan Williams, and Raphael Yuster. Finding the smallest H-subgraph in real weighted graphs and related problems. In Proceedings of the 33rd International Colloquium on Automata, Languages and Programming (ICALP), Part I, volume 4051 of
Lecture Notes in Computer Science, pages 262–273, 2006. [27] Virginia Vassilevska, Ryan Williams, and Raphael Yuster. All-pairs bottleneck paths for general graphs in truly sub-cubic time. In
Proceedings of the 39th Annual ACM Symposium on Theory of Computing (STOC), pages 585–589, 2007. [28] Virginia Vassilevska, Ryan Williams, and Raphael Yuster. Finding heaviest H-subgraphs in real weighted graphs, with applications. ACM Trans. Algorithms, 6(3):44:1–44:23, 2010. [29] Virginia Vassilevska Williams. Multiplying matrices faster than Coppersmith-Winograd. In
Proceedings of the 44th ACM Symposium on Theory of Computing (STOC), pages 887–898, 2012. [30] Virginia Vassilevska Williams. Problem 2 on problem set 2 of CS367, October 15, 2015. URL: http://theory.stanford.edu/˜virgi/cs367/hw2.pdf. [31] Virginia Vassilevska Williams. On some fine-grained questions in algorithms and complexity. In
Proceedings of the International Congress of Mathematicians (ICM), pages 3447–3487, 2018. [32] Virginia Vassilevska Williams and Yinzhan Xu. Truly subcubic min-plus product for less structured matrices, with applications. In
Proceedings of the 31st Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 12–29, 2020. [33] R. Ryan Williams. Faster all-pairs shortest paths via circuit complexity.
SIAM J. Comput., 47(5):1965–1985, 2018. URL: https://doi.org/10.1137/15M1024524, doi:10.1137/15M1024524. [34] Raphael Yuster. Efficient algorithms on sets of permutations, dominance, and real-weighted APSP. In Proceedings of the 20th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 950–957, 2009. URL: http://dl.acm.org/citation.cfm?id=1496770.1496873. [35] Uri Zwick. All pairs lightest shortest paths. In
Proceedings of the 31st Annual ACM Symposium on Theory of Computing (STOC), pages 61–69, 1999. [36] Uri Zwick. All pairs shortest paths using bridging sets and rectangular matrix multiplication.
J. ACM, 49(3):289–317, 2002. [37] Uri Zwick. Personal communication, 2020.
A Deferred Equivalence and Hardness Proofs
A.1 Directed APSP and Rectangular Min-Plus
Here we prove Theorem 3.1, which reduces u-APSP to rectangular Min-Plus product.
Proof of Theorem 3.1.
Let ALG be an algorithm for M⋆(n, n^ρ, n | n^{1−ρ}) running in O(n^{2+ρ−ε}) time for some ε > 0. Let us consider the Min-Plus product running time in each stage of Zwick's algorithm for each ℓ. Let's pick two parameters δ, δ′ ∈ (0, ρ). For any choice of ℓ such that (3/2)^ℓ ≤ n^{1−ρ−δ} for some δ > 0, we have that M⋆(n, n/(3/2)^ℓ, n | (3/2)^ℓ) is bounded from above by Õ((3/2)^ℓ n^{ω(1, 1−ℓ/log_{3/2}(n), 1)}). Since n^{1−ρ−δ}/(3/2)^ℓ ≥ 1, we can bound n^{ω(1, 1−ℓ/log_{3/2}(n), 1)} from above by (n^{1−ρ−δ}/(3/2)^ℓ) · n^{ω(1, ρ+δ, 1)} by splitting the middle dimension of the matrices into n^{1−ρ−δ}/(3/2)^ℓ pieces and computing each piece independently. Thus, we get that M⋆(n, n/(3/2)^ℓ, n | (3/2)^ℓ) is bounded from above (within polylogarithmic factors) by (3/2)^ℓ (n^{1−ρ−δ}/(3/2)^ℓ) n^{ω(1, ρ+δ, 1)} = n^{1−ρ−δ+ω(1, ρ+δ, 1)} ≤ n^{2+ρ−α} for some α > 0, as 1 − r + ω(1, r, 1) decreases monotonically as r increases within the interval [0, 1]. On the other hand, if we run Dijkstra's algorithm to and from a set S of Õ(n^{ρ−δ′}) nodes and update all pairwise distance estimates for paths going through S, this takes Õ(n^{2+ρ−δ′}) time. This part essentially corresponds to a brute-force computation of the n × (n/(3/2)^ℓ) × n size Min-Plus products with entries up to (3/2)^ℓ for (3/2)^ℓ ≥ n^{1−ρ+δ′}. What remains is to perform the n × (n/(3/2)^ℓ) × n size Min-Plus products with entries up to (3/2)^ℓ for ℓ such that (3/2)^ℓ ∈ [n^{1−ρ−δ}, n^{1−ρ+δ′}]. For each ℓ such that n^{1−ρ−δ} ≤ (3/2)^ℓ ≤ n^{1−ρ}, the middle dimension of the Min-Plus product is ≤ n^{ρ+δ}. Thus, we can split the middle dimension into n^δ pieces of size n^ρ and compute n^δ Min-Plus products of dimension n × n^ρ × n with entries up to (3/2)^ℓ ≤ n^{1−ρ}. Computing these n^δ Min-Plus products using our assumed algorithm ALG takes O(n^δ · n^{2+ρ−ε}) time.
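The middle-dimension splitting used in this step is elementary; the following Python sketch (with brute-force products standing in for the fast algorithm ALG) shows that splitting the middle dimension into blocks and taking entrywise minima of the block products preserves the Min-Plus product.

```python
def minplus(A, B):
    """Brute-force Min-Plus product."""
    return [[min(a + b for a, b in zip(row, col))
             for col in zip(*B)] for row in A]

def minplus_split(A, B, piece):
    """Split the middle dimension into pieces of size `piece`, multiply each
    n1 x piece by piece x n3 block, and take the entrywise minimum."""
    out = None
    for s in range(0, len(B), piece):
        C = minplus([row[s:s + piece] for row in A], B[s:s + piece])
        out = C if out is None else [[min(a, b) for a, b in zip(r1, r2)]
                                     for r1, r2 in zip(out, C)]
    return out
```

Each block product keeps the same row and column dimensions and the same entry bound, which is exactly what the argument above exploits.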
If we set δ = cε > 0 for some c ∈ (0, 1), we get (within polylogs) O(n^{2+ρ−(1−c)ε}) time for this step. Now let's consider ℓ such that n^{1−ρ+δ′} ≥ (3/2)^ℓ ≥ n^{1−ρ}; there are O(log n) such ℓ. If we set N = n^{1+δ′/(1−ρ)}, we get that N^{1−ρ} = n^{1−ρ+δ′}, which is the maximum entry in any of the products we need to compute. Also, as n < N and n^ρ < N^ρ, we get that what remains to compute are O(log N) instances of Min-Plus product of dimension at most N × N^ρ × N with entries bounded by N^{1−ρ}. Let's use algorithm ALG to compute these Min-Plus products. The running time of this last step of the reduction becomes, within polylogs, N^{2+ρ−ε} = n^{(1+δ′/(1−ρ))(2+ρ−ε)} = n^{2+ρ−(ε−δ′(2+ρ−ε)/(1−ρ))}. Let us set δ′ = cε(1−ρ)/(2+ρ−ε) < ρ. Then the running time of this step becomes Õ(n^{2+ρ−(1−c)ε}). We get that APSP in directed graphs with bounded integer weights can be solved in time (within polylogs) Õ(n^{2+ρ−α} + n^{2+ρ−(1−c)ε}) for any c ∈ (0, 1). □ A.2 u-APLP in DAGs and Rectangular Min-Plus
Proof of Theorem 3.4.
A reduction from u-APLP in DAGs to [±c]-APLP in DAGs is trivial, since the former is a special case of the latter. To show the reduction from [±c]-APLP in DAGs to M⋆(n, n^ρ, n | n^{1−ρ}), we first negate the weights of the APLP instance, so that the problem becomes APSP. Then we apply Theorem 3.1 to complete the reduction. It suffices to provide a reduction from M⋆(n, n^ρ, n | n^{1−ρ}) to u-APLP in DAGs. Suppose we are given an n × n^ρ matrix A and an n^ρ × n matrix B where both matrices have positive integer entries bounded by n^{1−ρ}. First, we create matrices Ā = n^{1−ρ} + 1 − A and B̄ = n^{1−ρ} + 1 − B. If we compute the Max-Plus product of Ā and B̄, we can get the Min-Plus product of A and B via min_p {A[i,p] + B[p,j]} = 2 + 2n^{1−ρ} − max_p {Ā[i,p] + B̄[p,j]}. Next, we create the same graph from the proof of Theorem 3.2 on matrices Ā and B̄. The created graph is clearly a DAG. From the APLP of the created graph, we can compute the Max-Plus product of Ā and B̄, thus completing the reduction. □ A.3 u-cRed-APSP, APLSP and Vertex-Weighted APSP and Rectangular Min-Plus
Proof of Theorem 3.5.
We first show the reduction from u-cRed-APSP to M⋆(n, n^ρ, n | n^{1−ρ}). Since M⋆(n, n^ρ, n | n^{1−ρ}) is sub-n^{2+ρ} fine-grained equivalent to u-APSP by Corollary 3.3, it suffices to show a reduction from u-cRed-APSP to u-APSP. Suppose we are given an undirected graph G = (V, R ∪ B), where R is the set of red edges and B is the set of blue edges. We create a directed graph G′ as follows. We copy V to (c+1) parts V_0, . . . , V_c, where a vertex v ∈ V is copied to (c+1) vertices v_0 ∈ V_0, . . . , v_c ∈ V_c. For each blue edge e = {u, v} ∈ B, we add directed edges (u_i, v_i), (v_i, u_i) for each i ∈ {0, . . . , c}. For each red edge e = {u, v} ∈ R, we add directed edges (u_i, v_{i+1}), (v_i, u_{i+1}) for i ∈ {0, . . . , c−1}. Now the distance from u_0 to v_i in G′ is exactly the shortest path distance between u and v that uses exactly i red edges in G. Therefore, we can use an algorithm for unweighted directed APSP on G′ to compute the pairwise distances in G′; then the shortest path length between u and v that uses at most c red edges is min_{i=0}^{c} D_{G′}[u_0, v_i]. Now we show the reduction in the other direction. Note that we can reduce u-2Red-APSP to u-cRed-APSP for any c > 2 by attaching a length-(c−2) red path u_{c−2} − u_{c−3} − · · · − u_1 − u to any vertex u in the graph. Then the shortest path distance from u_{c−2} to v using at most c red edges in this new graph is exactly the shortest distance from u to v using at most 2 red edges in the original graph, plus (c − 2). Thus, it suffices to show the reduction to u-2Red-APSP. This reduction is a modification of the reduction in Theorem 3.2. Let A be an n × n^ρ matrix and let B be an n^ρ × n matrix, both with entries in {1, . . . , n^{1−ρ}}. Let I be a set of n vertices, representing the rows of A. Let J be a set of n vertices, representing the columns of B.
For every k ∈ [n^ρ] that corresponds to a column of A or a row of B, we create an undirected path on 2n^{1−ρ} vertices, where all edges are blue: X(k) := x_{k,n^{1−ρ}} − x_{k,n^{1−ρ}−1} − . . . − x_{k,1} − y_{k,1} − y_{k,2} − . . . − y_{k,n^{1−ρ}}. Finally, for every i ∈ [n] and k ∈ [n^ρ], we add a red edge between i ∈ I and x_{k,A[i,k]}. For every j ∈ [n] and k ∈ [n^ρ], we add a red edge between j ∈ J and y_{k,B[k,j]}. Figure 3: Sketch of the reduction in the proof of Theorem 3.5. For each vertex i and path p, we add a red edge between i and the vertex x_{p,A[i,p]} on the path; for each path p and vertex j, we add a red edge between the vertex y_{p,B[p,j]} on the path and vertex j. Consider any path from i ∈ I to j ∈ J that uses at most 2 red edges. It must first go to some X(k) using one red edge, then go rightwards on X(k) using several blue edges, and finally use another red edge to go to j. The length of such a path is A[i,k] + B[k,j] + 1. Therefore, the shortest path between i and j that uses at most two red edges has length exactly min_k {A[i,k] + B[k,j]} + 1, so calling the u-2Red-APSP algorithm solves the Min-Plus product instance. □
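The layered-graph reduction from u-cRed-APSP to directed unweighted APSP described above can be sketched in a few lines of Python (BFS stands in for the fast APSP algorithm; all names are illustrative):

```python
from collections import deque

def c_red_distances(n, blue, red, c):
    """Shortest distances using at most c red edges, via BFS on the layered
    graph G' whose vertex (v, i) records that i red edges have been used."""
    adj = {(v, i): [] for v in range(n) for i in range(c + 1)}
    for u, v in blue:                 # blue edges stay within a layer
        for i in range(c + 1):
            adj[(u, i)].append((v, i))
            adj[(v, i)].append((u, i))
    for u, v in red:                  # red edges advance one layer
        for i in range(c):
            adj[(u, i)].append((v, i + 1))
            adj[(v, i)].append((u, i + 1))
    D = [[None] * n for _ in range(n)]
    for s in range(n):
        dist = {(s, 0): 0}
        q = deque([(s, 0)])
        while q:
            x = q.popleft()
            for y in adj[x]:
                if y not in dist:
                    dist[y] = dist[x] + 1
                    q.append(y)
        for t in range(n):
            cand = [dist[(t, i)] for i in range(c + 1) if (t, i) in dist]
            D[s][t] = min(cand) if cand else None
    return D
```

Note that G′ has only n(c+1) vertices, so for c = Õ(1) a single APSP computation on G′ suffices, exactly as in the reduction.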
Proof of Corollary 3.6.
We consider the reduction from M⋆(n, n^ρ, n | n^{1−ρ}) to u-2Red-APSP in the proof of Theorem 3.5. In that reduction, we can replace all red edges with edges of weight 1, and all blue edges with edges of weight 0. Then the shortest distance from any vertex i ∈ I to any vertex j ∈ J is 2. Also, the lightest shortest path contains min_k {A[i,k] + B[k,j]} + 1 edges. Thus, computing APLSP gives the result of an M⋆(n, n^ρ, n | n^{1−ρ}) instance. □ Proof of Corollary 3.7.
Consider the reduction from M⋆(n, n^ρ, n | n^{1−ρ}) to u-2Red-APSP. First, we remove the colors of all edges. For vertices on the paths X(p) for p ∈ [n^ρ], we set their weights to 1. For vertices in I and J, we set their weight to n^{1−ρ}. This way, the shortest path from i ∈ I to j ∈ J won't visit any other i′ or j′, so our reduction still goes through. □ A.4 u-mod U APSP and u-≤U APSP are Hard from Rectangular Min-Plus
Here we show the conditional hardness for u-mod U APSP for any U ≥ 2. The proof for u-≤U APSP is similar.
Proof of Theorem 3.8.
First, we define Unique Min-Plus product, where an algorithm is given
A, B, and is asked to compute arg min_k {A[i,k] + B[k,j]}. The algorithm only has to be correct on pairs (i, j) for which there exists a unique k that attains the minimum value; for other (i, j) pairs, the algorithm is allowed to output any number in [n^ρ]. We first reduce M⋆(n, n^ρ, n | n^{1−ρ}) to Unique Min-Plus product of matrices with the same dimensions and entry bounds. We perform ⌈log(n^ρ)⌉ + 1 stages, one for each integer 0 ≤ t ≤ ⌈log(n^ρ)⌉. During each stage t, we will repeat the following for Θ(log n) rounds. Let K be the set of column indices of A, or equivalently the set of row indices of B. We independently keep each k ∈ K with probability 2^{−t}. Thus, we get a submatrix A′ of A and a submatrix B′ of B. We use an algorithm for Unique Min-Plus product to compute k′[i,j] = arg min_k {A′[i,k] + B′[k,j]}, and use A′[i, k′[i,j]] + B′[k′[i,j], j] to update our answer for the (i,j)-th entry of the Min-Plus product of A and B. This reduction is correct because when the number of k achieving the minimum value of A[i,k] + B[k,j] is in [2^t, 2^{t+1}], we have a constant probability of keeping a unique such k if we independently keep every k ∈ K with probability 2^{−t}. Thus, by repeating the procedure Θ(log n) times, we keep a unique such k in one of the rounds with high probability. Then we show a reduction from Unique Min-Plus product to Counting Min-Plus product modulo U, where an algorithm needs to compute, for every (i, j), the number of k modulo U such that A[i,k] + B[k,j] = min_k {A[i,k] + B[k,j]}. For each integer 0 ≤ t ≤ ⌈log(n^ρ)⌉, we do the following. For all k ∈ [n^ρ], if the t-th bit in its binary representation is 1, we duplicate the k-th column of A and the k-th row of B. Then we use an algorithm for Counting Min-Plus product modulo U.
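As a concrete illustration of this duplication trick (our own sketch, with brute-force counting standing in for the fast Counting Min-Plus product modulo U): duplicating every index whose t-th bit is 1 turns a unique minimizer's witness count from 1 into 2 exactly when its t-th bit is set, and 1 and 2 remain distinguishable mod U for any U ≥ 2.

```python
def recover_unique_argmin(A, B, U):
    """For each (i, j), recover k* = argmin_k A[i][k] + B[k][j] from witness
    counts mod U, assuming the minimizer is unique.  For bit t we duplicate
    every index k whose t-th bit is 1; the count of minimizers is then
    2 mod U if bit t of k* is set, and 1 mod U otherwise."""
    n, m, q = len(A), len(B), len(B[0])
    bits = max(1, (m - 1).bit_length())
    K = [[0] * q for _ in range(n)]
    for t in range(bits):
        # index multiset, with duplicates for indices having bit t set
        idx = [k for k in range(m) for _ in range(2 if (k >> t) & 1 else 1)]
        for i in range(n):
            for j in range(q):
                vals = [A[i][k] + B[k][j] for k in idx]
                mn = min(vals)
                cnt = sum(v == mn for v in vals) % U
                if cnt == 2 % U:
                    K[i][j] |= 1 << t
    return K
```

For pairs whose minimizer is not unique, the recovered value is unspecified, matching the contract of the Unique Min-Plus product above.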
Suppose there is a unique k achieving the minimum value of A[i,k] + B[k,j] for (i, j); then the count would be 2 mod U if the t-th bit in the binary representation of k is 1; otherwise, the count would be 1 mod U. (Note that this works even for U = 2 since the number of witnesses is always at least 1.) Thus, we will be able to completely recover this k for (i, j) if it is unique. Finally, we reduce Counting Min-Plus product modulo U to u-mod U APSP. Note that the Min-Plus product here has two matrices that have dimensions n × n^ρ and n^ρ × n respectively and have entries bounded by n^{1−ρ}. Thus, we can apply the same reduction from the proof of Theorem 3.2. The number of shortest paths from i to j in that reduction is exactly the number of k with A[i,k] + B[k,j] = min_k {A[i,k] + B[k,j]}. □ A.5 Min Witness Equality is Hard from Rectangular Min-Plus
Proof of Theorem 3.10.
Suppose we are given integer matrices
A, B, where A has dimension n × n^ρ, and B has dimension n^ρ × n, and both matrices have entries bounded by n^{1−ρ}. We will transform this instance into a Min Witness Equality instance. Let A′ be an n × 2n matrix. We index the column indices of A′ by V × K, where V = [2n^{1−ρ}] and K represents the column indices of A. Similarly, let B′ be a 2n × n matrix, where the row indices of B′ are also V × K. We set A′[i, (v,k)] = A[i,k] and B′[(v,k), j] = v − B[k,j]. We can easily pad A′ and B′ to 2n × 2n square matrices by adding empty rows to A′ and empty columns to B′. Suppose A′[i, (v,k)] = B′[(v,k), j]; then we have A[i,k] = v − B[k,j], which implies A[i,k] + B[k,j] = v. Similarly, if A[i,k] + B[k,j] = v, we would have A′[i, (v,k)] = B′[(v,k), j]. Thus, the minimum value of v such that A′[i, (v,k)] = B′[(v,k), j] for some k is the (i,j)-th entry of the Min-Plus product of A and B. Therefore, if we order V × K with V as the primary key, the Min Witness Equality of A′ and B′ can easily be used to compute A ⋆ B in O(n^2) time. Thus, if we can compute the Min Witness Equality of A′ and B′ in O(n^{2+ρ−ǫ}) time for ǫ > 0, we can also compute A ⋆ B in O(n^{2+ρ−ǫ}) time. □ B Algorithms for All-Pairs Lightest Shortest Paths
In this section, we describe algorithms for the following problem, which includes both All-Pairs Lightest Shortest Paths (APLSP) and All-Pairs Shortest Lightest Paths (APSLP) as special cases:
Problem B.1. (Lex_2-APSP) We are given a graph G = (V, E) with n vertices, where each edge (u, v) ∈ E has a "primary" weight w_1(u, v) and a "secondary" weight w_2(u, v). For every pair of vertices u, v ∈ V, we want to find a path π from u to v that minimizes (Σ_{e∈π} w_1(e), Σ_{e∈π} w_2(e)) lexicographically. Let D[u, v] be the lexicographical minimum of (Σ_{e∈π} w_1(e), Σ_{e∈π} w_2(e)). Let D_1[u, v] be the minimum of Σ_{e∈π} w_1(e) (the shortest-path distance) and let D_2[u, v] be the second coordinate of D[u, v]. APLSP corresponds to the case when all secondary edge weights are 1, whereas APSLP corresponds to the case when all primary edge weights are 1. The following lemma, which will be important in the analysis of our Lex_2-APSP algorithm, bounds the complexity of the Min-Plus product of an n_1 × n_2 matrix A and an n_2 × n_3 matrix B in the case when the finite entries of A come from a small range [ℓ_1] (but the finite entries of B may come from a large range [ℓ_2]). The bound can be made sensitive to the number m_2 of finite entries of B and the number m_3 of output entries we want. The lemma is a variant of [7, Theorem 3.5] (the basic approach originates from Matoušek's dominance algorithm [20], but this variant requires some extra ideas). It also generalizes and improves (using rectangular matrix multiplication) Theorem 1.2 in [32]. Lemma B.2. M⋆(n_1, n_2, n_3 | ℓ_1, ℓ_2) = Õ(min_t (M⋆(n_1, n_2, n_2n_3/t | ℓ_1) + tn_1n_3)). More generally, M⋆(n_1, n_2, n_3 | m_1, m_2, m_3 | ℓ_1, ℓ_2) = Õ(min_t (M⋆(n_1, n_2, m_2/t | ℓ_1) + tm_3)). Proof.
Divide each column of B into groups of t entries by rank: the first group contains the t smallest elements, the second group contains the next t smallest, etc. (ties in rank can be broken arbitrarily). Each column may have at most t leftover entries. The total number of groups is at most m_2/t. For each i ∈ [n_1] and j′ ∈ [m_2/t], let C[i, j′] be true iff there exists k ∈ [n_2] such that A[i, k] < ∞ and group j′ contains an element with row index k. Computing C reduces to taking a Boolean matrix product and has cost O(M(n_1, n_2, m_2/t)). For each i ∈ [n_1] and j′ ∈ [m_2/t], suppose that group j′ is part of column j and the maximum element in group j′ is x; let Ĉ[i, j′] = min_{k : B[k,j] ∈ [x, x+ℓ_1]} (A[i, k] + B[k, j]). Since entries in A are from the range [ℓ_1] ∪ {∞}, and we only keep a range of ℓ_1 + 1 values of matrix B, computing Ĉ reduces to taking a Min-Plus product with entries in [ℓ_1] (after shifting) and has cost O(M⋆(n_1, n_2, m_2/t | ℓ_1)). To compute the output entry at each of the m_3 positions (i, j), we find the group j′ in column j with the smallest rank such that C[i, j′] is true. Let x be the maximum element in group j′. The answer min_k (A[i, k] + B[k, j]) is at most x + ℓ_1. Thus, the answer is defined by an index k that (i) corresponds to an element in group j′, or (ii) corresponds to a leftover element in column j, or (iii) has B[k, j] ∈ [x, x + ℓ_1]. Cases (i) and (ii) can be handled by linear search in O(t) time; case (iii) is handled by looking up Ĉ[i, j′]. The total time to compute the m_3 output entries is O(tm_3). □ B.1 [c]-Lex_2-APSP Let c = Õ(1). For directed graphs, Zwick [35] presented a variant of his u-APSP algorithm that solves [c]-Lex_2-APSP (and thus [c]-APLSP and [c]-APSLP) in time Õ(max_ℓ M⋆(n, n/ℓ, n | ℓ^2)) ≤ Õ(min_L (L^2 M(n, n/L, n) + n^3/L)). This is O(n^{2.66}) by the current bounds on rectangular matrix multiplication [18] (and is Õ(n^{7/3}) if ω = 2). Chan [7] gave a faster algorithm for ([c] − {0})-Lex_2-APSP (and in fact a special case of Vertex-Weighted APSP that includes ([c] − {0})-Lex_k-APSP for an arbitrary constant k) in time Õ(n^{(3+ω)/2}), which is O(n^{2.687}) by the current matrix multiplication exponent (and is Õ(n^{2.5}) if ω = 2). Zwick's algorithm works even when zero primary weights are allowed, but Chan's algorithm does not (part of the difficulty is that the secondary distance of a path may be much larger than the primary distance). A more general version of Chan's algorithm [7] can handle zero primary weights (and [c]-Lex_k-APSP for constant k) but has a worse time bound of Õ(n^{(9+ω)/4}), which can be slightly reduced using rectangular matrix multiplication [34]. We describe an O(n^{2.58})-time algorithm to solve [c]-Lex_2-APSP for directed graphs, which can handle zero weights and is faster than Zwick's O(n^{2.66})-time algorithm; it is also slightly faster than Chan's algorithm. The algorithm uses rectangular matrix multiplication (without which the running time would be Õ(n^{(ω+3)/2})). It should be noted that Chan's previous algorithm can't be easily sped up using rectangular matrix multiplication, besides being inapplicable when there are zero primary weights. Overview.
The new algorithm can be viewed as an interesting variant of Zwick's u-APSP algorithm [36]. Zwick's algorithm uses rectangular Min-Plus products of dimensions around n × n/ℓ and n/ℓ × n, for geometrically increasing values of the parameter ℓ. Our algorithm proceeds in two phases. In both phases, we use rectangular products of dimensions around n/ℓ × n/ℓ and n/ℓ × n. In the first phase, we consider ℓ in increasing order; in the second, we consider ℓ in decreasing order. In these Min-Plus products, the entries of the first matrix in each product come from a small range; this enables us to use Lemma B.2.

Preliminaries.
Let L be a parameter to be set later. Let λ[u,v] denote the length of a lexicographically shortest path between u and v. In this section, the length of a path refers to the number of edges in the path. For every ℓ that is a power of 3/2, as in Section 4, let R_ℓ ⊆ V be a subset of ˜O(n/ℓ) vertices that hits all shortest paths of length ℓ/2 [35, 36]. We may assume that R_{(3/2)^i} ⊇ R_{(3/2)^{i+1}} (as before). Set R_1 = V. For S_1, S_2 ⊆ V, let D(S_1, S_2) denote the submatrix of D containing the entries for (u,v) ∈ S_1 × S_2.

Phase 1.
We first solve the following subproblem for a given ℓ ≤ L: compute D[u,v] for all (u,v) ∈ R_ℓ × V with λ[u,v] ≤ ℓ, and similarly for all (u,v) ∈ V × R_ℓ with λ[u,v] ≤ ℓ. (We don't know λ[u,v] in advance. More precisely, if λ[u,v] ≤ ℓ, the computed value should be correct; otherwise, the computed value is only guaranteed to be an upper bound.)

Suppose we have already computed D[u,v] for all (u,v) ∈ R_{2ℓ/3} × V with λ[u,v] ≤ 2ℓ/3, and similarly for all (u,v) ∈ V × R_{2ℓ/3} with λ[u,v] ≤ 2ℓ/3.

We take the Min-Plus product D(R_ℓ, R_{2ℓ/3}) ⋆ D(R_{2ℓ/3}, V) (where elements are compared lexicographically). For each (u,v) ∈ R_ℓ × V, if its output entry is smaller than the current value of D[u,v], we reset D[u,v] to the smaller value. Similarly, we take the Min-Plus product D(V, R_{2ℓ/3}) ⋆ D(R_{2ℓ/3}, R_ℓ). For each (u,v) ∈ V × R_ℓ, if its output entry is smaller than the current value of D[u,v], we reset D[u,v] to the smaller value. We reset all entries greater than cℓ to ∞.

To justify correctness, observe that for any shortest path π of length between 2ℓ/3 and ℓ, the middle (2ℓ/3)/2 ≥ ℓ/3 vertices must contain a vertex of R_{2ℓ/3}, which splits π into two subpaths each of length at most ℓ − ℓ/3 ≤ 2ℓ/3.

To take the product, we map each entry D[u,v] of D(R_{2ℓ/3}, V) to the number D_1[u,v]·cℓ + D_2[u,v] ∈ [˜O(ℓ^2)]. It is more efficient to break the product into ℓ separate products, by putting entries of D(R_ℓ, R_{2ℓ/3}) with a common D_1 value into one matrix. Then after shifting, the finite entries of each such matrix are in [˜O(ℓ)]. (The entries of D(R_{2ℓ/3}, V) are still in [˜O(ℓ^2)].) Hence, the computation takes time ˜O(ℓ·M⋆(n/ℓ, n/ℓ, n | ℓ, ℓ^2)).

We do the above for all ℓ ≤ L that are powers of 3/2 (in increasing order).

Phase 2.
Next we solve the following subproblem for a given ℓ ≤ L: compute D[u,v] for all (u,v) ∈ R_{2ℓ/3} × V with λ[u,v] ≤ L.

Suppose we have already computed D[u,v] for all (u,v) ∈ R_ℓ × V with λ[u,v] ≤ L.

We take the Min-Plus product D(R_{2ℓ/3}, R_ℓ) ⋆ D(R_ℓ, V), keeping only entries bounded by ˜O(ℓ) in the first matrix and ˜O(L) in the second matrix. For each (u,v) ∈ R_{2ℓ/3} × V, if its output entry is smaller than the current value of D[u,v], we reset D[u,v] to the smaller value.

To justify correctness, recall that for (u,v) ∈ R_{2ℓ/3} × V, if λ[u,v] < 2ℓ/3, then D[u,v] was already computed in Phase 1. On the other hand, in any shortest path π of length between 2ℓ/3 and L, the first ℓ/2 vertices of the path must contain a vertex of R_ℓ.

To take the product, we map each entry D[u,v] of D(R_ℓ, V) to the number D_1[u,v]·cL + D_2[u,v] ∈ [˜O(ℓL)]. As before, it is better to perform ℓ separate products, by putting entries of D(R_{2ℓ/3}, R_ℓ) with a common D_1 value into one matrix. Then after shifting, the finite entries of each such matrix are in [˜O(ℓ)]. (The entries of D(R_ℓ, V) are still in [˜O(ℓL)].) Hence, the computation takes time ˜O(ℓ·M⋆(n/ℓ, n/ℓ, n | ℓ, ℓL)).

We do the above for all ℓ ≤ L that are powers of 3/2 (in decreasing order).

Last step.
By the end of Phase 2 (when ℓ reaches 1), we have computed D[u,v] for all (u,v) with λ[u,v] ≤ L. To finish, we compute D[u,v] for all (u,v) with λ[u,v] > L, as follows:

We run Dijkstra's algorithm O(|R_L|) times to compute D[u,v] for all (u,v) ∈ R_L × V and for all (u,v) ∈ V × R_L. This takes O(|R_L|·n^2) = ˜O(n^3/L) time. We then compute D(V, R_L) ⋆ D(R_L, V) by brute force in O(|R_L|·n^2) = ˜O(n^3/L) time.

Correctness follows since every shortest path of length more than L must pass through a vertex in R_L.

As before, standard techniques for generating witnesses for matrix products can be applied to recover the shortest paths [12, 36].

Total time.
The cost of Phase 2 dominates the cost of Phase 1. By Lemma B.2, the total cost is

˜O( max_{ℓ≤L} ℓ·M⋆(n/ℓ, n/ℓ, n | ℓ, ℓL) + n^3/L ) ≤ ˜O( max_{ℓ≤L} ℓ·min_t ( M⋆(n/ℓ, n/ℓ, n^2/(ℓt) | ℓ) + t·n^2/ℓ ) + n^3/L ).

We set t = n/L and obtain ˜O(max_{ℓ≤L} ℓ·M(n/ℓ, n/ℓ, Ln/ℓ) + n^3/L). Intuitively, the maximum occurs when ℓ = 1, and so we should choose L to minimize ˜O(M(n, n, Ln) + n^3/L). With the current bounds on rectangular matrix multiplication [18], we choose L = n^{0.…} and get running time O(n^{2.…}). (Formally, we can verify this time bound using the convexity of the function x + ω(1−x, 1−x, 1.…−x).)

Theorem B.3. [c]-Lex₂-APSP (and thus [c]-APLSP and [c]-APSLP) can be solved in O(n^{2.…}) time for any c = ˜O(1).

Remarks.
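As a first remark, the packing step used in both phases (encoding a (primary, secondary) pair as one integer so that numeric Min-Plus implements lexicographic minimization) can be illustrated with a minimal sketch. The naive cubic `min_plus` below stands in for the fast rectangular Min-Plus products of the actual algorithm, and all function names are ours:

```python
INF = float('inf')

def pack(d1, d2, base):
    # Encode (primary, secondary) as d1*base + d2. When base exceeds
    # every achievable secondary sum, packed values add componentwise
    # and numeric order equals lexicographic order on the pairs.
    return INF if d1 == INF else d1 * base + d2

def unpack(x, base):
    return (INF, INF) if x == INF else divmod(x, base)

def min_plus(A, B):
    # Naive O(n^3) Min-Plus product, standing in for the fast
    # rectangular products used by the actual algorithm.
    n, m, p = len(A), len(B), len(B[0])
    return [[min(A[i][k] + B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]
```

For example, with base 100, combining (1, 5) with (3, 4) gives (4, 9) and combining (2, 0) with (1, 10) gives (3, 10); the packed Min-Plus correctly selects the lexicographically smaller pair (3, 10).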
Without rectangular matrix multiplication, the algorithm above still gives a time bound of ˜O(L·n^ω + n^3/L), yielding ˜O(n^{(3+ω)/2}).

The same algorithm works even with negative weights (i.e., for [±c]-Lex₂-APSP), like Zwick's previous algorithm [35], assuming no negative cycles.

In Appendix D.1, we describe an alternative algorithm that has the same running time, though it does not allow zero primary edge weights (or negative weights).

B.2 Undirected ([c]−{0})-Lex₂-APSP

A natural question is whether APLSP or APSLP is easier for undirected graphs. We now describe a faster O(n^{2.…})-time algorithm for ([c]−{0})-Lex₂-APSP for undirected graphs. Zero primary weights are not allowed, but zero secondary weights are. (In particular, the algorithm can solve [c]-APSLP when all primary weights are 1.)

Overview.
We follow an idea of Aingworth et al. [1], dividing into two cases: when the source vertex has high degree or low degree. For high-degree vertices, there exists a small dominating set, and so these vertices can be covered by a small number of "clusters"; sources in the same cluster are close together, and so distances from one fixed source give a good approximation to distances from the other sources in the same cluster, by the triangle inequality (since the graph is undirected). On the other hand, for low-degree vertices, the relevant subgraph is sparse, which enables faster algorithms. Originally, Aingworth et al.'s approach was intended for the design of approximation algorithms (with O(1) additive error for unweighted graphs). We will adapt it to find exact shortest paths. (Chan [8] previously had also applied Aingworth et al.'s approach to exact APSP, but the goal there was a logarithmic-factor speedup, which is quite different.) In order to handle the high-degree case for Lex₂-APSP, we need further ideas to use approximate primary shortest-path distances to compute exact lexicographic shortest-path distances; in particular, we will need Min-Plus products on secondary distances (as revealed in the proof of Lemma B.4 below). The combination of Aingworth et al.'s approach with matrix multiplication appears new, and interesting in our opinion.

Preliminaries.
We first compute D_1[u,v] for all (u,v) by running a known [c]-APSP algorithm on the primary distances in O(n^ω) time [3, 23].

Assume that we have already computed D_2[u,v] for all (u,v) with D_1[u,v] ≤ 2ℓ/3 for a given ℓ. We want to compute D_2[u,v] for all (u,v) with D_1[u,v] ≤ ℓ.

Define D_2^{(ℓ)}[u,v] = D_2[u,v] if D_1[u,v] = ℓ, and D_2^{(ℓ)}[u,v] = ∞ otherwise. For subsets S_1, S_2 ⊆ V, let D_2^{(ℓ)}(S_1, S_2) denote the submatrix of D_2^{(ℓ)} containing the entries for (u,v) ∈ S_1 × S_2.

Lemma B.4.
Let G = (V, E) be an undirected graph with edge weights in [c]−{0}. Assume that we have already computed D_2[u,v] for all (u,v) with D_1[u,v] ≤ 2ℓ/3. Given a set S of vertices that are within primary distance c = ˜O(1) from each other, we can compute D_2[u,v] for all u ∈ S and v ∈ V with D_1[u,v] ≤ ℓ in O(M⋆(|S|, n/ℓ, n | ℓ)) total time.

Proof. Fix s ∈ S. Let V_i = {v ∈ V : D_1[s,v] ∈ i ± c}. Note that Σ_i |V_i| = ˜O(n). Also note that if u ∈ S and D_1[u,v] = i, then we must have v ∈ V_i (by the triangle inequality, because the graph is undirected). Pick an index m ∈ [0.…ℓ, 0.…ℓ] with |V_{m−c} ∪ ··· ∪ V_m| = ˜O(n/ℓ).

For i ≤ m, we have already computed D_2^{(i)}(S, V_i).

For i = m+1, ..., ℓ, we will compute D_2^{(i)}(S, V_i) as follows: For each ∆ ∈ [c], we take the Min-Plus product D_2^{(m−∆)}(S, V_{m−∆}) ⋆ D_2^{(i−m+∆)}(V_{m−∆}, V_i). Note that D_2^{(i−m+∆)}(V_{m−∆}, V_i) is already known, since i−m+∆ < 2ℓ/3. We take the minimum over all ∆ ∈ [c] for those (u,v) ∈ S × V_i with D_1[u,v] = i.

Instead of doing the product individually for each i, it is more efficient to combine all the matrices D_2^{(i−m+∆)}(V_{m−∆}, V_i) over all i > m. This gives a single matrix (per ∆) with |V_{m−∆}| = ˜O(n/ℓ) rows and Σ_{i>m} |V_i| = ˜O(n) columns. So, the entire product can be computed in O(M⋆(|S|, n/ℓ, n | ℓ)) time. □

Let L be a parameter to be set later. Let V_high be the set of all vertices of degree more than n/L, and V_low be the set of all vertices of degree at most n/L.

Phase 1. We will first compute D_2[u,v] for all u ∈ V_high and v ∈ V with D_1[u,v] ≤ ℓ, as follows:

Let X ⊆ V be a dominating set for V_high of size ˜O(L), such that every vertex in V_high is in the (closed) neighborhood of some vertex in X. Such a dominating set can be constructed (for example, by the standard greedy algorithm) in ˜O(n^2) time [1].

Let X = {x_1, x_2, . . .
, x_{˜O(L)}}. For each x_i ∈ X, we divide N(x_i) \ (⋃_{j<i} N(x_j)) into clusters of size at most n/L each; in total there are ˜O(L) clusters, and the vertices in each cluster are within primary distance 2c of each other (through the common dominating vertex). Applying Lemma B.4 to each cluster computes D_2[u,v] for all u ∈ V_high and v ∈ V with D_1[u,v] ≤ ℓ.
Phase 2. Next, for each u ∈ V_low, we will compute D_2[u,v] for all v ∈ V with D_1[u,v] ≤ ℓ, as follows:

Define a graph G_u containing all edges (x,y) with x ∈ V_low or y ∈ V_low; for each z ∈ V_high, we add an extra edge (u,z) with weight D[u,z], which has been computed in Phase 1. Then the lexicographic shortest-path distance from u to v in G_u matches the lexicographic shortest-path distance in G, because if ⟨u_1, ..., u_k⟩ is a lexicographically shortest path in G with u_1 = u, and i is the largest index with u_i ∈ V_high (set i = 1 if none exists), then ⟨u_1, u_i, ..., u_k⟩ is a path in G_u. We run Dijkstra's algorithm on G_u from the source u. Since G_u has O(n^2/L) edges, this takes ˜O(n^2/L) time per u. The total over all u is ˜O(n^3/L).

As before, standard techniques for generating witnesses for matrix products can be applied to recover the shortest paths [12, 36].

Total time.
We do the above for all ℓ's that are powers of 3/2. The overall cost is

˜O( max_ℓ L·M⋆(n/L, n/ℓ, n | ℓ) + n^3/L ) ≤ ˜O( max_ℓ L·min{ n^3/(Lℓ), ℓ·M(n/L, n/ℓ, n) } + n^3/L ) = ˜O( max_{ℓ≤L} Lℓ·M(n/L, n/ℓ, n) + n^3/L ) = ˜O( L^2·M(n/L, n/L, n) + n^3/L ).

With the current bounds on rectangular matrix multiplication, we choose L = n^{0.…} and get running time O(n^{2.…}).

Theorem B.5. ([c]−{0})-Lex₂-APSP (and thus -APLSP and -APSLP) for undirected graphs can be solved in O(n^{2.…}) time for any c = ˜O(1).

Remarks.
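As a remark on Phase 1, the dominating set X can be built by the standard greedy set-cover heuristic, as the text notes. A minimal sketch (assuming `adj` maps each vertex to its neighbor set; the function name is ours):

```python
def greedy_dominating_set(adj, v_high):
    # adj: dict mapping each vertex to its set of neighbors.
    # Returns X such that every vertex of v_high lies in the closed
    # neighborhood N[x] of some x in X (standard greedy set cover).
    uncovered = set(v_high)
    X = []
    while uncovered:
        # Pick the vertex whose closed neighborhood covers the most
        # still-uncovered vertices of v_high.
        best = max(adj, key=lambda v: len((adj[v] | {v}) & uncovered))
        X.append(best)
        uncovered -= adj[best] | {best}
    return X
```

Each greedy pick covers the largest number of still-uncovered high-degree vertices; the standard set-cover analysis gives an O(log n)-approximate, hence ˜O(L)-size, dominating set when every vertex of V_high has degree more than n/L.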
Without rectangular matrix multiplication, the algorithm above still gives a time bound of ˜O(L^3·(n/L)^ω + n^3/L), yielding ˜O(n^{(9−2ω)/(4−ω)}).

One could adapt the algorithm to solve Undirected ([c]−{0})-Lex_k-APSP for a larger constant k, but the running time appears worse than the bound ˜O(n^{(3+ω)/2}) by Chan [7] (because of the need to compute Min-Plus products between matrices with larger entries in Lemma B.4).

C Algorithms for APSP Counting
In this section, we describe algorithms for the following problem:
Problem C.1. (#APSP)
Given a graph G = (V, E), we want to count the number C[u,v] of shortest paths from u to v for every pair of vertices u,v ∈ V.

A closely related problem is Betweenness Centrality [4, 5]: given a graph and a vertex v, compute the quantity Σ_{s,t ∈ V\{v}} C_v[s,t]/C[s,t], where C_v[s,t] is the number of shortest paths from s to t that go through v (i.e., C_v[s,t] = C[s,v]·C[v,t] if D[s,t] = D[s,v] + D[v,t], and C_v[s,t] = 0 otherwise). The known algorithms for this problem in m-edge, n-node graphs are said to run in ˜O(mn) time [4]; however, these algorithms work in a model where arbitrarily sized integers can be added in constant time. Note that the counts could be exponentially large in the worst case, and hence if one takes the size of the integers into account, the running time of all prior algorithms is actually ˜O(mn^2) (and there are simple examples on which it actually runs in Ω(mn^2) time for any choice of m).

We will consider several variants of the #APSP problem: ≤U APSP (computing min{C[u,v], U} for a given value U), mod U APSP (computing C[u,v] mod U), approx-U APSP (computing a (1+1/U)-factor approximation), and the original exact version. Note that ≤U APSP reduces to approx-O(U) APSP easily, and ≤U APSP also reduces to mod ˜O(nU) APSP (by using random moduli in [Θ̃(nU)]).

Let D[u,v] denote the shortest-path distance from u to v.

C.1 u-≤U APSP
We solve u-≤U APSP for directed graphs by a variant of Zwick's u-APSP algorithm [36].

Given a pair (A, A′) of n_1 × n_2 matrices and a pair (B, B′) of n_2 × n_3 matrices, define a new pair of n_1 × n_3 matrices (C, C′) = (A, A′) • (B, B′), where C[i,j] = min_k {A[i,k] + B[k,j]} and C′[i,j] = Σ_{k : A[i,k]+B[k,j]=C[i,j]} A′[i,k]·B′[k,j]. Assuming that the finite entries of A and B are in [ℓ] and the entries of A′ and B′ are in [U], we can reduce this "funny" product (C, C′) to a standard matrix product by mapping (A[i,j], A′[i,j]) to A′[i,j]·M^{A[i,j]} and (B[i,j], B′[i,j]) to B′[i,j]·M^{B[i,j]}, for a sufficiently large M = poly(n, U). Thus, the computation time is ˜O(ℓ·M(n_1, n_2, n_3)), ignoring log(n_1 n_2 n_3 U) factors. Alternatively, the funny product can be computed trivially in ˜O(n_1 n_2 n_3) time.

Let Π contain up to U shortest paths for every pair of vertices. (This collection Π of paths is not given to us but helps in the analysis.) For every ℓ that is a power of 2, let R_ℓ ⊆ V be a random subset of c(n/ℓ)log(nU) vertices for a sufficiently large constant c. With high probability, we have the following properties (the second of which follows from a Chernoff bound): (i) every subpath of a path in Π of length ℓ/2 contains a vertex of R_ℓ, and (ii) every path in Π of length at most ℓ contains at most b = Θ(log(nU)) vertices of R_ℓ. We may assume that R_{2^i} ⊇ R_{2^{i+1}}. Take R_n = ∅.

We first compute D[u,v] for all u,v ∈ V by running Zwick's u-APSP algorithm.

Let C_{ℓ′}[u,v] denote the number of shortest paths from u to v (of distance D[u,v]) where all intermediate vertices are in V − R_{ℓ′}; it is set to U if the number exceeds the cap U. Ultimately, we want C_n[u,v].

Suppose we have already computed the counts C_{ℓ′}[u,v] for all u,v ∈ V with D[u,v] ≤ ℓ/2, for all ℓ′'s that are powers of 2.
We want to compute the counts C_{ℓ′}[u,v] for all u,v ∈ V with D[u,v] ∈ (ℓ/2, ℓ] for all ℓ′'s that are powers of 2.

For subsets S_1, S_2 ⊆ V, let D(S_1,S_2) and C_{ℓ′}(S_1,S_2) denote the submatrices of D and C_{ℓ′} respectively containing the entries for (u,v) ∈ S_1 × S_2. Let Ĉ_{ℓ′}(S_1,S_2) denote the matrix pair (D(S_1,S_2), C_{ℓ′}(S_1,S_2)).

For each j = 0, ..., b, for ℓ′ > ℓ, we compute the funny product Ĉ_ℓ(V, R_ℓ − R_{ℓ′}) • Ĉ_ℓ(R_ℓ − R_{ℓ′}, R_ℓ − R_{ℓ′})^j • Ĉ_ℓ(R_ℓ − R_{ℓ′}, V) (the j-th power here is with respect to the operator •). The counts we want are the entries of the second matrix in the output for all (u,v) with D[u,v] ∈ (ℓ/2, ℓ], summed over all j. When computing the product, we can place a cap of U on all intermediate second matrices. The running time is ˜O(min{ℓ·M(n, n/ℓ, n), n^3/ℓ}) (since j is polylogarithmic). For ℓ′ ≤ ℓ, the counts are set to 0.

To justify correctness, consider a shortest path π in Π of length in (ℓ/2, ℓ] where all intermediate vertices are from V − R_{ℓ′}. Then π must pass through at least one vertex of R_ℓ (this implies that ℓ′ > ℓ) and at most b vertices of R_ℓ; furthermore, any subpath π′ between two consecutive vertices in R_ℓ, or between the start vertex and the first vertex in R_ℓ, or between the last vertex in R_ℓ and the last vertex, has at most ℓ/2 vertices. Note that π′ has all intermediate vertices in V − R_{ℓ′} − R_ℓ = V − R_ℓ. Thus, each such path π′ will be counted by one of the above products exactly once.

We do the above for all ℓ's that are powers of 2. The total time is ˜O(max_ℓ min{ℓ·M(n, n/ℓ, n), n^3/ℓ}), which can be bounded above by ˜O(L·M(n, n/L, n) + n^3/L) for any choice of L. This bound is similar to that of Zwick's u-APSP algorithm. We set L = n^{0.…}.

Theorem C.2. u-≤U APSP can be solved in O(n^{2.…} log^{O(1)} U) time with high probability.

Remarks.
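As a remark, the reduction of the funny product • to a single integer matrix product is easy to make concrete with Python's big integers. The sketch below (our own naming) encodes (A[i,k], A′[i,k]) as A′[i,k]·M^{A[i,k]}; after an ordinary product, the lowest nonzero base-M digit of each entry carries the count for the minimizing distance, provided M exceeds the inner dimension times the square of the largest count:

```python
def funny_product(A, A2, B, B2, M):
    # Funny product (C, C2): C is the Min-Plus product of A and B, and
    # C2[i][j] sums A2[i][k]*B2[k][j] over the minimizing k's.
    # Encode (a, a2) as a2 * M**a: in the ordinary integer product,
    # the lowest nonzero base-M digit then belongs to the minimum
    # distance, provided M > (inner dimension) * (max count)**2.
    # None plays the role of an infinite distance.
    n, m, p = len(A), len(B), len(B[0])
    enc = lambda a, a2: 0 if a is None else a2 * M ** a
    EA = [[enc(A[i][k], A2[i][k]) for k in range(m)] for i in range(n)]
    EB = [[enc(B[k][j], B2[k][j]) for j in range(p)] for k in range(m)]
    C = [[None] * p for _ in range(n)]
    C2 = [[0] * p for _ in range(n)]
    for i in range(n):
        for j in range(p):
            s = sum(EA[i][k] * EB[k][j] for k in range(m))
            if s:
                d = 0
                while s % M == 0:  # strip zero low-order base-M digits
                    s //= M
                    d += 1
                C[i][j], C2[i][j] = d, s % M
    return C, C2
```

For instance, with M = 1000, multiplying distance/count pairs (1,1),(2,1) against (1,3),(0,5) yields minimum distance 2 with count 1·3 + 1·5 = 8.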
The algorithm above is not quite a reduction to directed unweighted APSP, since ℓ·M(n, n/ℓ, n) is not quite the same as M⋆(n, n/ℓ, n | ℓ). For the case of U = ˜O(1), we can turn the algorithm into a (randomized) reduction:

Define another product C′′ = A ⊗ B, where C′′[i,j] is the number of k's for which A[i,k] + B[k,j] = C[i,j], where C[i,j] = min_k{A[i,k] + B[k,j]}. If the number C′′[i,j] is capped at U, we can reduce ⊗ to O(U^2) Min-Plus products: take a random partition R_1, ..., R_{cU^2}, and for each ℓ ∈ [cU^2], compute min_{k∈R_ℓ}{A[i,k] + B[k,j]}. Set C′′[i,j] to be the number of parts R_ℓ with min_{k∈R_ℓ}(A[i,k] + B[k,j]) = C[i,j]. For each C[i,j], this is correct with probability greater than 2/3 for a sufficiently large constant c (we can repeat logarithmically many times to lower the error probability).

We can reduce • to polylogarithmically many ⊗ products: C′[i,j] = Σ_{p,q∈[log U]} 2^{p+q}·(A_p ⊗ B_q)[i,j], where A_p[i,k] = A[i,k] if the p-th bit of A′[i,k] is 1 and A_p[i,k] = ∞ otherwise, and B_q[k,j] = B[k,j] if the q-th bit of B′[k,j] is 1 and B_q[k,j] = ∞ otherwise.

Section 3 gives a reduction in the other direction, and thus u-≤U APSP for U = ˜O(1) is equivalent to u-APSP, completing the proof of Theorem 1.7.

C.2 Undirected u-≤U APSP and u-mod U APSP
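For reference alongside the description below, here is a minimal sketch of the distance-only version of Seidel's algorithm (assuming numpy and a connected graph). Note that it uses Seidel's original neighbor-sum parity test rather than the mod-3 test described in the text; the counting modification below replaces the Boolean products with integer multiplicity products:

```python
import numpy as np

def seidel(A):
    # A: n x n 0/1 adjacency matrix (numpy int array) of a connected
    # undirected graph. Returns all pairwise distances; runs in
    # O~(n^omega) with fast matrix multiplication.
    n = len(A)
    if np.all(A + np.eye(n, dtype=A.dtype) >= 1):
        return A.copy()                    # diameter <= 1: done
    # Square the graph: connect u,v whenever their distance is <= 2.
    A2 = ((A @ A > 0) | (A > 0)).astype(A.dtype)
    np.fill_diagonal(A2, 0)
    d = seidel(A2)                         # distances in squared graph
    # D[u,v] = 2*d[u,v] or 2*d[u,v]-1; Seidel's parity test: D[u,v]
    # is odd exactly when sum_{w in N(v)} d[u,w] < deg(v)*d[u,v].
    deg = A.sum(axis=1)
    S = d @ A
    return 2 * d - (S < d * deg)
```

The parity test works because every neighbor w of v satisfies d(u,w) ∈ {d(u,v)−1, d(u,v), d(u,v)+1} in the squared graph, so the neighbor sum falls strictly below deg(v)·d(u,v) exactly when some neighbor is closer.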
For undirected unweighted graphs, Seidel's algorithm with Zwick's modification [37] is as follows: given the Boolean adjacency matrix A of G, compute Ā = A ∨ (A·A), where ∨ is componentwise OR and · is the Boolean matrix product. Let Ḡ be the graph defined by Ā. Recursively compute the pairwise distances d(u,v) in Ḡ. The base case, when the diameter is at most 1, is easy to handle. (We work on connected graphs, since we can handle each connected component separately, and the diameter roughly halves in each step.)

Then, we note that the distance D[u,v] is odd iff v has a neighbor x such that d(u,v) ≡ d(u,x) + 1 (mod 3), as for every neighbor w of v, d(u,w) ∈ {d(u,v)−1, d(u,v), d(u,v)+1}.

So we define, for each j ∈ {0,1,2}, B_j[u,v] = 1 if d(u,v) ≡ j (mod 3) and B_j[u,v] = 0 otherwise. Then we multiply, for each j ∈ {0,1,2}, C_j = B_j · A, and for every u,v, we compute j = (d(u,v) −
1) mod 3, and if C_j[u,v] = 1, we conclude that D[u,v] is odd. Otherwise, we conclude that D[u,v] is even. For every even D[u,v], we can compute it by setting D[u,v] = 2d(u,v), and for odd D[u,v] we compute it by setting D[u,v] = 2d(u,v) − 1.

Now we want to compute the counts of the paths together with Seidel's approach. Given A, which is now viewed as an integer matrix where A[i,j] is the multiplicity of edge (i,j) (0 if no edge), we compute Ā = A + A^2 over the integers. This defines Ḡ, a graph with new multiplicity adjacency matrix Ā. Recurse on Ḡ, obtaining the distances d(u,v) and shortest-path counts c(u,v) in Ḡ. As in Seidel's algorithm, we can compute D[u,v] (the distances in G) from d(u,v). Now we also want to compute the shortest-path counts in G. If D[u,v] is even, then d(u,v) = D[u,v]/2 and all shortest u−v paths in Ḡ correspond to shortest u−v paths in G, and C[u,v] = c(u,v); so we can just set these counts, because we know which distances are even.

If D[u,v] is odd, on the other hand, for every predecessor x on a shortest u−v path: (1) the number of shortest u−x paths is C[u,x] = c(u,x), and (2) the number of shortest u−v paths going through x is c(u,x)·A[x,v].

From the above version of Seidel's algorithm we know that x is a predecessor of v on a shortest u−v path iff x is a neighbor of v and D[u,x] = D[u,v] − 1. So we can compute the count C[u,v] as follows. For each j ∈ {0,1,2}, let D_j be the matrix defined as D_j[u,x] = c(u,x) if D[u,x] ≡ j (mod 3), and 0 otherwise. Then set X_j to be the product D_j·A. Now, for every u,v for which D[u,v] is odd, let j = (D[u,v] −
1) mod 3, and look at X_j[u,v]. By our discussion, this is the sum, over all x such that x is a neighbor of v and D[u,x] ≡ D[u,v] − 1 (mod 3), of c(u,x)·A[x,v], which is exactly the number of shortest paths from u to v, as the graph is undirected. Hence we can return D and C.

If we do the computations modulo U, the integers will be bounded by U and the runtime will be ˜O(n^ω log U). If we set all counts that are greater than U to U, we will get the #APSP counts capped at U.

Theorem C.3. u-mod U APSP and u-≤U APSP in undirected graphs can be computed in ˜O(n^ω log U) time.

C.3 Undirected u-approx-U APSP
We can solve u-approx-U APSP for undirected graphs, interestingly, by adapting the APLSP algorithm in Appendix B.2. The only main change is to replace the Min-Plus products on the secondary distances with standard matrix products on the counts, with approximation factor 1 + O(1/U).

Let M∗∗(n_1, n_2, n_3 | ℓ) be the time to compute the standard product of an n_1 × n_2 matrix with an n_2 × n_3 matrix where all finite entries are from [2^ℓ]. It is known that M∗∗(n_1, n_2, n_3 | ℓ) = ˜O(ℓ·M(n_1, n_2, n_3)). (The naive ˜O(n_1 n_2 n_3) bound still holds when computing the product approximately, with factor 1 + O(1/U), ignoring polylog U factors.) In the analysis, we just replace M⋆ with M∗∗, since the number of paths of length ℓ is bounded by 2^{˜O(ℓ)}.

In the unweighted case, we can simplify by setting ∆ = 0. Dijkstra's algorithm can be generalized for approximate counting. The approximation factor may increase to (1 + O(1/U))^{O(n)}, which is acceptable after readjusting U by an O(n) factor.

Theorem C.4. u-approx-U APSP for undirected graphs can be solved in O(n^{2.…} log^{O(1)} U) time.

C.4 Exact u-
#APSP
For exact counts that could be exponentially large, we will describe a combinatorial ˜O(n^3)-time algorithm to solve u-#APSP for directed unweighted graphs, in the standard word RAM model (with O(log n)-bit words). The algorithm can be viewed as a special case of the method in Appendix B.2 with L = 1 (no matrix multiplication and no dominating sets are required, and the method turns out to work for the directed case).

We first compute D[u,v] for all u,v ∈ V in O(n^3) time by known APSP algorithms. There are of course faster APSP algorithms for directed unweighted graphs, but we use the slower O(n^3)-time algorithm to keep the whole algorithm combinatorial.

Assume we have already computed C[u,v] for all u,v with D[u,v] ≤ 2ℓ/3 for a given ℓ. Fix a source vertex s ∈ V. We will compute C[s,v] for all v with D[s,v] ≤ ℓ, as follows:

Let V_i = {v ∈ V : D[s,v] = i}. Note that Σ_i |V_i| = n, so there exists an index m ∈ [0.…ℓ, 0.…ℓ] with |V_m| = O(n/ℓ).

For i ≤ m, we have already computed C[s,v] for all v ∈ V_i.

For i = m+1, ..., ℓ, we compute C[s,v] for all v ∈ V_i by setting C[s,v] = Σ_{u∈V_m : D[u,v]=i−m} C[s,u]·C[u,v]. Note that C[s,u] and C[u,v] have been computed in previous iterations, since i − m < 2ℓ/3.

The total number of arithmetic operations is O(Σ_i |V_i|·|V_m|) = O(n^2/ℓ). Since the counts are bounded by O(n^ℓ) and are thus ˜O(ℓ)-bit numbers, the total cost is ˜O(n^2/ℓ · ℓ) = ˜O(n^2).

We do this for every source s ∈ V. The overall cost is ˜O(n^3).

We do the above for all ℓ's that are powers of 3/2. The final time bound is ˜O(n^3).

Theorem C.5. u-#APSP can be solved in ˜O(n^3) time.

Remarks.
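As a remark on why bit complexity dominates here, counts can reach n^{Θ(n)}, and Python's arbitrary-precision integers make a per-source counting routine short to state. The sketch below is a plain BFS-based count (a naive analogue of the computation above, not the ˜O(n^3)-bit-operation scheme):

```python
from collections import deque

def count_shortest_paths(adj, s):
    # adj: adjacency lists of a directed unweighted graph.
    # Returns (dist, cnt) from source s; cnt entries are exact
    # arbitrary-precision integers.
    n = len(adj)
    dist, cnt = [None] * n, [0] * n
    dist[s], cnt[s] = 0, 1
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if dist[v] is None:
                dist[v] = dist[u] + 1
                q.append(v)
            if dist[v] == dist[u] + 1:
                cnt[v] += cnt[u]   # may carry Theta(n) bits
    return dist, cnt
```

On a layered graph made of k consecutive "diamonds," the count at the sink grows like 2^k, so each entry genuinely needs Θ(n) bits in the worst case.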
The ˜O(n^3) bound is worst-case optimal up to polylogarithmic factors, as the total number of bits in the answers could be Ω(n^3).

Recall that the Betweenness Centrality of a vertex v is defined as BC(v) = Σ_{s,t≠v} C_v[s,t]/C[s,t], where C_v[s,t] is the number of shortest paths between s and t that go through v. As an immediate corollary, we can compute the Betweenness Centrality of a given vertex exactly in a directed unweighted graph in ˜O(n^3) time (or approximately in an undirected unweighted graph with factor 1 + 1/U in O(n^{2.…} log^{O(1)} U) time by Theorem C.4).

Corollary C.6.
The betweenness centrality of a vertex can be computed in ˜O(n^3) time in a directed unweighted graph. Furthermore, it can be approximated with factor 1 + 1/U in an undirected unweighted graph in O(n^{2.…} log^{O(1)} U) time.

In Appendix D, we give more algorithms for u-mod U APSP and u-approx-U APSP for directed graphs. Many of the algorithms in this section (and in Appendix D) can be extended to graphs with weights in [c]−{0} for any c = ˜O(1) (for example, the ˜O(n^3) algorithm still works for ([c]−{0})-#APSP after some modifications), but in the interest of simplicity, we will not go into the details.
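To close the section, the betweenness formula of Corollary C.6 is a direct summation once the distance matrix D and count matrix C are in hand. A small exact sketch using rational arithmetic (the helper name is ours; None marks an unreachable pair):

```python
from fractions import Fraction

def betweenness(D, C, v):
    # BC(v) = sum over s,t != v (s != t, t reachable from s) of
    # C[s][v]*C[v][t]/C[s][t] when D[s][t] == D[s][v] + D[v][t].
    # None in D marks an unreachable pair (its count in C is 0).
    n = len(D)
    bc = Fraction(0)
    for s in range(n):
        for t in range(n):
            if s == v or t == v or s == t or not C[s][t]:
                continue
            if D[s][v] is not None and D[v][t] is not None \
               and D[s][t] == D[s][v] + D[v][t]:
                bc += Fraction(C[s][v] * C[v][t], C[s][t])
    return bc
```

For the directed diamond 0→1→3, 0→2→3, vertex 1 lies on one of the two shortest 0−3 paths, giving BC(1) = 1/2 exactly.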
D Alternative ([c]−{0})-Lex₂-APSP Algorithm and its Applications

D.1 ([c]−{0})-Lex₂-APSP

We describe an alternative O(n^{2.…})-time algorithm for ([c]−{0})-Lex₂-APSP for directed graphs without zero edge weights. Although it has an identical running time to the (more general) algorithm in Appendix B.1, the approach has applications to certain versions of #APSP, as we will see later.
Overview.
The general plan consists of two phases. In the first phase, we compute all shortest-path distances for which the primary distances are (roughly) powers of 2; this is done by "repeated squaring" with the Min-Plus product (involving the secondary weights). In the second phase, we compute all shortest-path distances for all primary distances divisible by 2^i, for i = log n down to 1, by more Min-Plus products using the matrices computed in the first phase. There were previous APSP algorithms that follow a similar plan for undirected graphs (e.g., Shoshan and Zwick's algorithm [24]), but the difficulty in our problem is that there is no clear way to reduce the magnitude of the numbers in the Min-Plus products involving the secondary weights. We suggest an extra, simple idea: pick a random number x; then the number of entries with primary distance equal to x will be small (this is a simplified statement; in Lemma D.1 below, the random numbers we use are obtained by multiplying distances with a random scaling factor γ). This way, we have reduced the number of finite entries of the matrices in the Min-Plus products, and can then apply Lemma B.2 (specifically, the second bound, which is sensitive to the number of finite entries of the matrices).

Preliminaries.
We first compute D_1[u,v] for all (u,v), by running Zwick's [c]-APSP algorithm on the primary weights.

Lemma D.1.
There exists a real γ ∈ [1, 2] such that for each i, the number of (u,v) with D_1[u,v] ∈ {⌊γj2^i⌋ : j ∈ Z⁺} ± O(1) is ˜O(n^2/2^i). Such a γ can be computed in ˜O(n^2) time.

Proof. Pick a random γ ∈ {1, 1 + 1/n, 1 + 2/n, ..., 2}. For any fixed a ∈ [cn] and i, we have Pr[∃j ≥ 1 : ⌊γj2^i⌋ = a] = Pr[∃j ≥ 1 : a/(j2^i) ≤ γ < (a+1)/(j2^i)] = O(Σ_{j=1}^{⌊cn/2^i⌋} 1/(j2^i)) = O((1/2^i) log n).

Thus, for each fixed (u,v), the probability that D_1[u,v] ∈ {⌊γj2^i⌋ : j ∈ Z⁺} ± O(1) is O((1/2^i) log n). So, the expected number of (u,v) with such D_1[u,v] is O((n^2/2^i) log n). By Markov's inequality, the probability that the number exceeds c′(n^2/2^i) log^2 n is O(1/(c′ log n)) for a fixed i. We take the union bound over all i ≤ log(cn).

To derandomize, we first count the number of (u,v) with D_1[u,v] = a, for every a ∈ [cn], in O(n^2) time. Afterwards, for each γ ∈ {1, 1 + 1/n, ..., 2} and each i ≤ log(cn), we can check whether γ satisfies the property in O(n) time. □

Define D_2^{(ℓ)}[u,v] = D_2[u,v] if D_1[u,v] = ℓ, and ∞ otherwise. We are now ready to present our algorithm.

Phase 1.
For i = 0, ..., log n, we will compute D_2^{(⌊γ2^i⌋+b)} for all b ∈ {−c, ..., c} as follows:

For the base case i = 0, we can compute D_2^{(b)} for all b = O(1) by an O(1) number of Min-Plus products with finite entries bounded by O(1), in ˜O(n^ω) time.

Fix an arbitrary i ≥ 1. For each ∆ ∈ [c], we will compute selected entries of the Min-Plus product D_2^{(⌊γ2^{i−1}⌋+⌊b/2⌋−∆+e_i)} ⋆ D_2^{(⌊γ2^{i−1}⌋+⌈b/2⌉+∆)}, where e_i = ⌊γ2^i⌋ − 2⌊γ2^{i−1}⌋ ∈ {0, 1}. Note that D_2^{(⌊γ2^{i−1}⌋+⌊b/2⌋−∆+e_i)} and D_2^{(⌊γ2^{i−1}⌋+⌈b/2⌉+∆)} have been computed in the previous iteration. We compute only those output entries for (u,v) with D_1[u,v] = ⌊γ2^i⌋ + b. For each such (u,v), we take the minimum of the output entries over all ∆ ∈ [c].

By Lemma D.1, D_2^{(⌊γ2^{i−1}⌋±O(1))} and D_2^{(⌊γ2^i⌋±O(1))} have ˜O(n^2/2^i) finite entries, all from [O(2^i)]. So, the ˜O(n^2/2^i) output entries we want can be computed in time ˜O(M⋆(n, n, n | n^2/2^i, n^2/2^i, n^2/2^i | 2^i, 2^i)).

Phase 2.
For i = log n, ..., 0, we compute D_2^{(⌊γj2^i⌋+b)} for all j ∈ [n/2^i] and for all b ∈ {−c, ..., c}, as follows:

For j ≤ 2, the answers have already been computed.

For j even, the answers have been computed in the previous iteration.

Suppose j > 2 is odd. For each ∆ ∈ [c], we compute selected entries of the Min-Plus product D_2^{(⌊γ(j−1)2^i⌋+⌊b/2⌋−∆+e_{ij})} ⋆ D_2^{(⌊γ2^i⌋+⌈b/2⌉+∆)}, where e_{ij} = ⌊γj2^i⌋ − ⌊γ(j−1)2^i⌋ − ⌊γ2^i⌋ ∈ {0, 1}. Note that D_2^{(⌊γ(j−1)2^i⌋+⌊b/2⌋−∆+e_{ij})} has been computed in the previous iteration, and D_2^{(⌊γ2^i⌋+⌈b/2⌉+∆)} has been computed in Phase 1. We compute only those output entries for (u,v) with D_1[u,v] = ⌊γj2^i⌋ + b. For each such (u,v), we take the minimum of the output entries over all ∆ ∈ [c].

Instead of doing the product individually for each j, it is more efficient to combine all the matrices D_2^{(⌊γ(j−1)2^i⌋+⌊b/2⌋−∆+e_{ij})} over j ∈ [n/2^i]. This gives a single matrix (per i, b, ∆) with O((n/2^i)·n) rows and n columns; by Lemma D.1, this matrix has ˜O(n^2/2^i) finite entries, all from [O(n)]. The second matrix D_2^{(⌊γ2^i⌋+⌈b/2⌉+∆)} has ˜O(n^2/2^i) finite entries, all from [O(2^i)]. So, the ˜O(n^2/2^i) output entries we want can be computed in time ˜O(M⋆(n^2/2^i, n, n | n^2/2^i, n^2/2^i, n^2/2^i | n, 2^i)).

By the end of Phase 2, we have computed D_2^{(ℓ)} for all ℓ. Standard techniques for generating witnesses for matrix products can be applied to recover the paths corresponding to the lexicographic shortest-path distances [12].

Total time. The cost of Phase 2 dominates. By Lemma B.2 (with the two matrices reversed), the total cost is bounded by

˜O( max_ℓ M⋆(n^2/ℓ, n, n | n^2/ℓ, n^2/ℓ, n^2/ℓ | n, ℓ) ) ≤ ˜O( max_ℓ min{ n^3/ℓ, min_t ( ℓ·M(n^2/(ℓt), n, n) + t·n^2/ℓ ) } ).

We choose t = ℓn/L for some parameter L to be determined. The cost is at most

˜O( max_{ℓ≤L} ℓ·M(Ln/ℓ^2, n, n) + n^3/L ).
The maximum occurs when ℓ = 1, and so we should choose L to minimize Õ(M(Ln, n, n) + n^3/L). With the current bounds on rectangular matrix multiplication, we choose L = n^0.34 and get O(n^2.66) running time.

D.2 u-#mod U APSP
Zwick's APSP algorithm does not seem to generalize to #mod U APSP: when there are exponentially many shortest paths, the existence of a small hitting set is unclear.

We solve #mod U APSP for directed unweighted graphs, interestingly, by adapting the APLSP algorithm in Appendix D.1. The main change is to replace the Min-Plus products on the secondary distances with standard matrix products on the counts (over Z_U). In the unweighted case, we can simplify by setting ∆ = 0. In the analysis, let M_U(n_1, n_2, n_3 | m_1, m_2, m_3) be the time to compute m_3 given entries of the standard matrix product of an n_1 × n_2 matrix with an n_2 × n_3 matrix, where all entries are integers in [U]. By an analogue of Lemma B.2,

M_U(n_1, n_2, n_3 | m_1, m_2, m_3) = Õ( min_t ( M(n_1, n_2, m_2/t) + t · m_3 ) ),

where the Õ notation may hide log^{O(1)}(n_1 n_2 n_3 U) factors. The cost of the entire algorithm is no worse than in Appendix D.1, ignoring log^{O(1)} U factors. We thus obtain:

Theorem D.2. u-#mod U APSP can be solved in O(n^2.66 log^{O(1)} U) time.

D.3 u-#approx-U APSP
We can also solve u-#approx-U APSP for directed unweighted graphs by adapting the APLSP algorithm in Appendix D.1. As before, the main change is to replace the Min-Plus products on the secondary distances with standard matrix products on the counts, but this time with approximation.

Let M∗∗(n_1, n_2, n_3 | ℓ) be the time to compute the standard product of an n_1 × n_2 matrix with an n_2 × n_3 matrix where all finite entries are from [2^ℓ]. It is known that M∗∗(n_1, n_2, n_3 | ℓ) ≤ Õ(ℓ · M(n_1, n_2, n_3)).

Let M∗∗_U(n_1, n_2, n_3 | m_1, m_2, m_3 | ℓ_1, ℓ_2) be the time to compute m_3 given entries of the standard product of an n_1 × n_2 matrix A with an n_2 × n_3 matrix B, where A has at most m_1 finite entries, all from [2^{ℓ_1}], and B has at most m_2 finite entries, all from [2^{ℓ_2}], allowing approximation factor 1 + 1/U.

Lemma D.3. M∗∗_U(n_1, n_2, n_3 | m_1, m_2, m_3 | ℓ_1, ℓ_2) = Õ( min_t ( M∗∗(n_1, n_2, m_2/t | ℓ_1) + t · m_3 ) ), where the Õ notation may hide log^{O(1)}(n_1 n_2 n_3 U) factors.

Proof. The proof is similar to that of Lemma B.2.

For each i ∈ [n_1] and j′ ∈ [m_2/t], let C[i, j′] be true iff there exists k ∈ [n_2] such that A[i, k] > 0 and group j′ contains an element with row index k. Computing C reduces to taking a Boolean matrix product and has cost O(M(n_1, n_2, m_2/t)).

For each i ∈ [n_1] and j′ ∈ [m_2/t], suppose that group j′ is part of column j and the minimum element in group j′ is x; let Ĉ[i, j′] = Σ_{k : B[k,j] ∈ [x/(2^{ℓ_1} n_2 U), x]} A[i, k] · B[k, j]. Computing Ĉ reduces to multiplying two matrices with real entries in [1, 2^{ℓ_1} n_2 U] after rescaling, or integers in [2^{ℓ_1} n_2 U^2] after rounding (since we allow approximation factor 1 + 1/U), and has cost Õ(M∗∗(n_1, n_2, m_2/t | ℓ_1)).

To compute the output entry at position (i, j) ∈ [n_1] × [n_3], we find the group j′ in column j with the largest rank such that C[i, j′] is true. Let x be the minimum element in group j′.
We compute Σ_k A[i, k] · B[k, j] over every index k that (i) corresponds to an element in group j′, or (ii) corresponds to a leftover element in column j, or (iii) has B[k, j] ∈ [x/(2^{ℓ_1} n_2 U), x]. Note that the sum over k with B[k, j] ∈ [x/(2^{ℓ_1} n_2 U), x] has already been computed as Ĉ[i, j′]; the O(t) terms of types (i) and (ii) can be added directly; and the total contribution of the discarded indices (those with B[k, j] < x/(2^{ℓ_1} n_2 U)) is at most x/U, which is within the allowed approximation factor since the output entry is at least x. The extra cost is O(t) per output entry, i.e., O(t · m_3) in total. □

E u-1-Red-APSP

We showed that u-c-Red-APSP is equivalent to u-APSP for directed graphs when 2 ≤ c = Õ(1), so it requires Ω(n^2.5) time unless there is a breakthrough for u-APSP. In this section, we show that u-1-Red-APSP is actually an easier problem.

Theorem E.1. There is an Õ(n^ω) time algorithm for u-1-Red-APSP in unweighted undirected graphs.

Proof. We adapt Seidel's algorithm. In Seidel's algorithm, given the adjacency matrix A, we compute A′ = A ∨ (A · A), which represents the adjacency matrix of the graph in which two vertices are adjacent iff their distance is at most 2. Then we solve APSP in the graph with adjacency matrix A′ recursively, and use that result to compute APSP in the original graph.

In the u-1-Red-APSP problem, the graph contains more information than an adjacency matrix. Let R be the adjacency matrix for red edges, and B be the adjacency matrix for blue edges. We will define (R′, B′), which basically represents the graph where we combine two adjacent edges into a single edge. Let (R′, B′) = (R ∨ (R · B) ∨ (B · R), B ∨ (B · B)), and let D′[u, v] denote the shortest path distance between u and v using at most 1 red edge in the graph represented by (R′, B′). We can compute D′ recursively. In the base case, the distance between any two vertices is either 1 or ∞. The base case is reached after roughly O(log n) recursive calls, since after each call, the distances of reachable pairs are roughly halved.

For simplicity, we call paths that use at most one red edge "valid".

Given D′, we can compute the real distances D as follows. If D′[u, v] = ∞, then so is D[u, v]. In the following, we handle the case when D′[u, v] < ∞. We first compute some value ¯D[u, v], which is defined as follows: if u has a blue neighbor x (i.e.
the edge connecting u and x is blue) such that D′[u, v] ≡ D′[x, v] + 1 (mod 3), we set ¯D[u, v] = 2D′[u, v] − 1; otherwise, we set ¯D[u, v] = 2D′[u, v]. Clearly ¯D can be computed in O(n^ω) time using Boolean matrix multiplication.

Claim E.2. ¯D[u, v] ≥ D[u, v]. Also, if there exists a shortest valid path from u to v that uses a blue edge as its first edge, then ¯D[u, v] = D[u, v].

Proof. First, notice that D′[u, v] = ⌈D[u, v]/2⌉ for any pair u, v. By the triangle inequality, for any blue neighbor x of u, |D[u, v] − D[x, v]| ≤ 1, which implies that |D′[u, v] − D′[x, v]| ≤ 1. Therefore, we can ignore the mod 3 condition in the construction of ¯D.

The only case in which ¯D[u, v] < D[u, v] could happen is when u has a blue neighbor x such that D′[u, v] = D′[x, v] + 1 and D[u, v] = 2D′[u, v]. However, in this case, D[u, v] ≤ 1 + D[x, v] ≤ 1 + 2D′[x, v] = 1 + 2(D′[u, v] − 1) = D[u, v] − 1, which is a contradiction.

Now suppose there exists a shortest valid path from u to v that goes to a blue neighbor x first. If D[u, v] is odd, then D′[x, v] < D′[u, v], so we will set ¯D[u, v] = 2D′[u, v] − 1, which equals D[u, v]. Similarly, if D[u, v] is even, then there cannot exist x such that D′[u, v] ≡ D′[x, v] + 1 (mod 3), so we do the correct thing by setting ¯D[u, v] = 2D′[u, v]. □

We can compute D from ¯D as follows. If there exists an edge between u and v, then we set D[u, v] = 1; otherwise, we set D[u, v] = min{¯D[u, v], ¯D[v, u]}. This is correct since if the length of the shortest valid path is at least 2, either its first edge is blue or its last edge is blue. □

F Simpler Undirected [c]-APSP

Shoshan and Zwick [24] gave an algorithm for the standard [c]-APSP problem for undirected graphs running in Õ(c n^ω) time. In this section, we describe a simple alternative based on our two-phase approach from Sections 4 and B.1.

Lemma F.1.
Let A be an n_1 × n_2 matrix with integer entries from [ℓ] ∪ {∞}, and let B be an n_2 × n_3 matrix with (possibly large) integer entries, satisfying the following property: for every i, j, k, k′,

B[k, j] ≤ A[i, k] + A[i, k′] + B[k′, j].   (1)

Then the Min-Plus product of A and B can be computed in O(M⋆(n_1, n_2, n_3 | ℓ)) time.

Proof. Define B̂^(t)[i, j] = (B[i, j] + tℓ) mod 6ℓ. For each t = 0, ..., 5, compute the Min-Plus product A ⋆ B̂^(t) in O(M⋆(n_1, n_2, n_3 | ℓ)) time.

Consider an (i, j) ∈ [n_1] × [n_3]. Let k_0 be any index such that A[i, k_0] is finite. Let t ∈ {0, ..., 5} be such that ((B[k_0, j] + tℓ) mod 6ℓ) ∈ [2ℓ, 3ℓ]. For all k such that A[i, k] is finite, (1) implies that |B[k_0, j] − B[k, j]| ≤ 2ℓ, and so B[k, j] − B̂^(t)[k, j] = B[k_0, j] − B̂^(t)[k_0, j]. Thus, the index k minimizing A[i, k] + B[k, j] is the same as the index k minimizing A[i, k] + B̂^(t)[k, j], which we have already found. □

Let λ[u, v] denote the length of a shortest path between u and v, where the length of a path refers to the number of edges in the path.

For every ℓ that is a power of 3/2, as in Section 4, let R_ℓ ⊆ V be a subset of Õ(n/ℓ) vertices that hits all shortest paths of length ℓ/3 [36]. We may assume that R_{(3/2)^i} ⊇ R_{(3/2)^{i+1}} (as before). Set R_1 = V. For S_1, S_2 ⊆ V, let D(S_1, S_2) denote the submatrix of D containing the entries for (u, v) ∈ S_1 × S_2.

Phase 1. We first solve the following subproblem for a given ℓ: compute D[u, v] for all (u, v) ∈ R_ℓ × V with λ[u, v] ≤ ℓ. (We don't know λ[u, v] in advance.
More precisely, if λ[u, v] ≤ ℓ, the computed value should be correct; otherwise, the computed value is only guaranteed to be an upper bound.)

Suppose we have already computed D[u, v] for all (u, v) ∈ R_{2ℓ/3} × V with λ[u, v] ≤ 2ℓ/3, and thus, by symmetry, for all (u, v) ∈ V × R_{2ℓ/3} with λ[u, v] ≤ 2ℓ/3.

We take the Min-Plus product D(R_ℓ, R_{2ℓ/3}) ⋆ D(R_{2ℓ/3}, V). For each (u, v) ∈ R_ℓ × V, if its output entry is smaller than the current value of D[u, v], we reset D[u, v] to the smaller value. We reset all entries greater than cℓ to ∞.

To justify correctness, observe that for any shortest path π of length between 2ℓ/3 and ℓ, the middle (2ℓ/3)/3 = 2ℓ/9 vertices must contain a vertex of R_{2ℓ/3}, which splits π into two subpaths each of length at most ℓ/2 + ℓ/9 ≤ 2ℓ/3.

The computation takes time Õ(M⋆(n/ℓ, n/ℓ, n | cℓ)). We do the above for all ℓ ≤ O(n) that are powers of 3/2 (in increasing order).

Phase 2. Next we solve the following subproblem for a given ℓ: compute D[u, v] for all (u, v) ∈ R_{2ℓ/3} × V (with no restrictions on λ[u, v]).

Suppose we have already computed D[u, v] for all (u, v) ∈ R_ℓ × V, and thus, by symmetry, for all (u, v) ∈ V × R_ℓ.

We take the Min-Plus product D(R_{2ℓ/3}, R_ℓ) ⋆ D(R_ℓ, V), keeping only entries bounded by cℓ in the first matrix. For each (u, v) ∈ R_{2ℓ/3} × V, if its output entry is smaller than the current value of D[u, v], we reset D[u, v] to the smaller value.

To justify correctness, recall that for (u, v) ∈ R_{2ℓ/3} × V, if λ[u, v] ≤ 2ℓ/3, then D[u, v] is already computed in Phase 1. On the other hand, in any shortest path π of length at least 2ℓ/3, the first ℓ/3 vertices of the path must contain a vertex of R_ℓ.

Observe that in the above product, (1) is satisfied due to the triangle inequality, since the graph is undirected and the matrices D(R_{2ℓ/3}, R_ℓ) and D(R_ℓ, V) are true shortest path distances (by the induction hypothesis).
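As a sanity check of the shifting trick behind Lemma F.1, the following sketch (our own illustration, not the paper's code) verifies on random metric instances, where condition (1) is just the triangle inequality, that for the right shift t the argmin under the reduced matrix B̂^(t) is also a true Min-Plus argmin:

```python
# Illustration only: on instances where A = B = a graph metric (so condition
# (1) is the triangle inequality), reducing B modulo 6l after one of the six
# shifts t*l preserves the Min-Plus argmin, as in the proof of Lemma F.1.
import random

def floyd_warshall(w):
    """All-pairs shortest path distances of a weighted graph."""
    n = len(w)
    D = [row[:] for row in w]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
    return D

def check_shifting_trick(trials=30, n=6, seed=0):
    rng = random.Random(seed)
    for _ in range(trials):
        # random undirected metric = APSP distances of a weighted complete graph
        w = [[0] * n for _ in range(n)]
        for i in range(n):
            for j in range(i):
                w[i][j] = w[j][i] = rng.randint(1, 4)
        D = floyd_warshall(w)
        A = B = D
        l = max(max(row) for row in A)  # finite A-entries lie in [0, l]
        for i in range(n):
            for j in range(n):
                true_min = min(A[i][k] + B[k][j] for k in range(n))
                k0 = 0  # any index with A[i][k0] finite
                # the shift t placing (B[k0][j] + t*l) mod 6l inside [2l, 3l)
                t = next(t for t in range(6)
                         if 2 * l <= (B[k0][j] + t * l) % (6 * l) < 3 * l)
                shifted = lambda k: (B[k][j] + t * l) % (6 * l)
                k_star = min(range(n), key=lambda k: A[i][k] + shifted(k))
                # the argmin under the shifted B attains the true minimum
                assert A[i][k_star] + B[k_star][j] == true_min
    return True
```

The window [2l, 3l) leaves slack 2l on both sides, matching the bound |B[k_0, j] − B[k, j]| ≤ 2l from (1), so the reduction modulo 6l never wraps around for the relevant indices.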
Hence, by Lemma F.1, the computation takes time Õ(M⋆(n/ℓ, n/ℓ, n | cℓ)). We do the above for all ℓ that are powers of 3/2 (in decreasing order).

As before, standard techniques for generating witnesses for matrix products can be applied to recover the shortest paths [12, 36].

Total time. In both phases, the total cost is

Õ( max_ℓ M⋆(n/ℓ, n/ℓ, n | cℓ) ) = Õ( max_ℓ cℓ · M(n/ℓ, n/ℓ, n) ) ≤ Õ( max_ℓ cℓ^2 (n/ℓ)^ω ) = Õ(c n^ω).

Remarks. Like Shoshan and Zwick's result [24], we can also upper-bound the running time by Õ(M⋆(n, n, n | c)), which is tight, since it is not difficult to reduce M⋆(n/ℓ, n/ℓ, n | cℓ) to M⋆(n, n, n | c).

The algorithm can be easily modified to solve APSP for a special kind of directed graphs with weights in [c] that are "approximately symmetric", i.e., D[u, v] ≤ s · D[v, u] for every pair u, v. (We change (1) to B[k, j] ≤ O(s)(A[i, k] + A[i, k′]) + B[k′, j].) The running time is Õ(c s n^ω). This rederives and extends a result by Porat et al. [21], who considered such directed graphs in the unweighted case and obtained an Õ(s n^ω) time algorithm.

Min-Plus product: Given an n × m matrix A and an m × p matrix B, compute the matrix C with entries C[i, j] = min_{k=1}^{m} (A[i, k] + B[k, j]).

M⋆(n_1, n_2, n_3 | M): Compute the Min-Plus product of an n_1 × n_2 matrix by an n_2 × n_3 matrix, where both matrices have integer entries in [M].

All-Pairs Shortest Paths (APSP): Compute all-pairs shortest path distances in a graph.

APLP for DAGs: Compute all-pairs longest path distances in a directed acyclic graph.
c-Red-APSP: Given a graph in which some edges may be colored red, compute for every pair of vertices s, t the shortest path distance from s to t among paths that use at most c red edges.

Min Witness Equality Product (MinWitnessEq): Given two n × n integer matrices A and B, compute a matrix C with entries C[i, j] = min{k ∈ [n] : A[i, k] = B[k, j]}.

Additive f(D[u, v])-approximate APSP: Given a graph where the distance between vertices u and v is D[u, v], compute an estimate D′[u, v] for every u, v such that D[u, v] ≤ D′[u, v] ≤ D[u, v] + f(D[u, v]).

All-Pairs Lightest Shortest Paths (APLSP): Given a graph, compute for every pair of vertices s, t the distance from s to t (with respect to the edge weights) and the smallest number of edges over all shortest paths from s to t.

All-Pairs Shortest Lightest Paths (APSLP): Given a graph, compute for every pair of vertices s, t the smallest number of edges over all paths from s to t, and the smallest length (with respect to the edge weights) over all such minimum-edge paths.

Lex_2-APSP: Given a graph where each edge e is given two weights w_1(e), w_2(e), compute for every pair of vertices u, v the lexicographic minimum over all u-v paths π of (Σ_{e∈π} w_1(e), Σ_{e∈π} w_2(e)).

#APSP: Given a graph, count the number of shortest paths for every pair of vertices.

#mod U APSP: Given a graph, count the number of shortest paths modulo U for every pair of vertices.

#≤U APSP: Given a graph, compute the minimum between the number of shortest paths and U for every pair of vertices.
#approx-U APSP: Given a graph, compute a (1 + 1/U)-approximation of the number of shortest paths for every pair of vertices.

Betweenness Centrality (BC): Given a graph and a vertex v, compute BC(v) = Σ_{s,t≠v} C_v[s, t]/C[s, t], where C[s, t] is the number of shortest paths between s and t, and C_v[s, t] is the number of shortest paths between s and t that go through v.

Table 1: The problems we consider. For graph problems, we sometimes add a prefix to the problem to constrain the edge weights of the graph. The prefix "u-" is for unweighted graphs (e.g. u-APSP); the prefix "[c]-" is for graphs with weights in [c] (e.g. [c]-APSP); similarly for "[±c]-".