Improved Algorithms for Computing the Cycle of Minimum Cost-to-Time Ratio in Directed Graphs
Karl Bringmann†  Thomas Dueholm Hansen‡  Sebastian Krinninger§

Abstract
We study the problem of finding the cycle of minimum cost-to-time ratio in a directed graph with n nodes and m edges. This problem has a long history in combinatorial optimization and has recently seen interesting applications in the context of quantitative verification. We focus on strongly polynomial algorithms to cover the use-case where the weights are relatively large compared to the size of the graph. Our main result is an algorithm with running time Õ(m^{3/4} n^{3/2}), which gives the first improvement over Megiddo's Õ(n^3) algorithm [JACM'83] for sparse graphs. We further demonstrate how to obtain both an algorithm with running time n^3/2^{Ω(√log n)} on general graphs and an algorithm with running time Õ(n) on constant treewidth graphs. To obtain our main result, we develop a parallel algorithm for negative cycle detection and single-source shortest paths that might be of independent interest. (We use the notation Õ(⋅) to hide factors that are polylogarithmic in n.)

1 Introduction

We revisit the problem of computing the cycle of minimum cost-to-time ratio (short: minimum ratio cycle) of a directed graph in which every edge has a cost and a transit time. The problem has a long history in combinatorial optimization and has recently become relevant to the computer-aided verification community in the context of quantitative verification and synthesis of reactive systems [Cha+03, CDH10, DKV09, Blo+09, Cer+11, Blo+14, CIP15]. The shift from qualitative to quantitative properties is motivated by the necessity of taking into account the resource consumption of systems (such as embedded systems) and not just their correctness. For algorithmic purposes, these systems are usually modeled as directed graphs where vertices correspond to states of the system and edges correspond to transitions between states. Weights on the edges model the resource consumption of transitions. In our case, we allow two types of resources (called cost and transit time) and are interested in optimizing the ratio between the two quantities. By giving improved algorithms for finding the minimum ratio cycle we contribute to the algorithmic progress that is needed to make the ideas of quantitative verification and synthesis applicable.

∗ Accepted to the 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017).
† Max Planck Institute for Informatics, Saarland Informatics Campus, Germany. Work partially done while visiting Aarhus University.
‡ Aarhus University, Denmark. Supported by the Carlsberg Foundation, grant no. CF14-0617.
§ University of Vienna, Faculty of Computer Science, Austria. Work partially done while at Max Planck Institute for Informatics and while visiting Aarhus University.

From a purely theoretical point of view, the minimum ratio problem is an interesting generalization of the minimum mean cycle problem. A natural question is whether the running time for the more general problem can match the running time of computing the minimum cycle mean (modulo lower order terms). In terms of weakly polynomial algorithms, the answer to this question is yes, since a binary search over all possible values reduces the problem to negative cycle detection.
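To illustrate, this weakly polynomial reduction can be sketched as follows. This is a toy sequential sketch under our own naming: plain Bellman-Ford serves as the negative cycle detector, and the search exploits the monotone structure made precise later in Lemma 2.2, namely that the reweighted graph (with edge weights c(e) − λ·t(e)) has a negative cycle exactly for λ above the optimum ratio.

```python
def has_negative_cycle(n, edges):
    """Detect a negative cycle with Bellman-Ford run from a virtual
    super-source (starting all distances at 0 simulates such a source).
    edges: list of (u, v, weight) with vertices 0..n-1."""
    dist = [0.0] * n
    for _ in range(n):
        updated = False
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                updated = True
        if not updated:
            return False  # converged: no negative cycle
    return True  # an update in the n-th round certifies a negative cycle

def min_ratio_by_binary_search(n, edges, iters=60):
    """Approximate the minimum cost-to-time ratio by binary search over
    lambda. edges: list of (u, v, cost, transit_time), transit_time > 0.
    Weakly polynomial: the iteration count governs the precision."""
    # The optimum lies between the smallest and largest edge ratio.
    lo = min(c / t for _, _, c, t in edges)
    hi = max(c / t for _, _, c, t in edges)
    for _ in range(iters):
        mid = (lo + hi) / 2
        reweighted = [(u, v, c - mid * t) for u, v, c, t in edges]
        if has_negative_cycle(n, reweighted):
            hi = mid  # mid is above the optimum ratio
        else:
            lo = mid  # mid is at or below the optimum ratio
    return lo
```

On a single 2-cycle with costs 2 and 4 and unit transit times, for example, the search converges to the ratio (2 + 4)/(1 + 1) = 3.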
In terms of strongly polynomial algorithms, with running time independent of the encoding size of the edge weights, the fastest algorithm for the minimum ratio cycle problem is due to Megiddo [Meg83] and runs in time Õ(n^3), whereas the minimum mean cycle can be computed in O(mn) time with Karp's algorithm [Kar78]. This has left an undesirable gap in the case of sparse graphs for more than three decades.

Our results.
We improve upon this situation by giving a strongly polynomial time algorithm for computing the minimum ratio cycle in time O(m^{3/4} n^{3/2} log^2 n) (Theorem 4.3 in Section 4). We obtain this result by designing a suitable parallel negative cycle detection algorithm and combining it with Megiddo's parametric search technique [Meg83]. We first present a slightly simpler randomized version of our algorithm with one-sided error and the same running time (Theorem 3.6 in Section 3).

As a side result, we develop a new parallel algorithm for negative cycle detection and single-source shortest paths (SSSP) that we use as a subroutine in the minimum ratio cycle algorithm. This new algorithm has work Õ(mn + n^3 h^{-3}) and depth Õ(h) for any log n ≤ h ≤ n. Our algorithm uses techniques from the parallel transitive closure algorithm of Ullman and Yannakakis [UY91] (in particular as reviewed in [KS97]), and our contribution lies in extending these techniques to directed graphs with positive and negative edge weights. In particular, we partially answer an open question of Shi and Spencer [SS99], who previously gave similar trade-offs for single-source shortest paths in undirected graphs with positive edge weights. We further demonstrate how the parametric search technique can be applied to obtain a minimum ratio cycle algorithm with running time Õ(n) on constant treewidth graphs (Corollary 5.3 in Section 5). Our algorithms do not use fast matrix multiplication. We finally show that if fast matrix multiplication is allowed then slight further improvements are possible; specifically, we present an n^3/2^{Ω(√log n)} time algorithm on general graphs (Corollary 6.2 in Section 6).

Prior Work.
The minimum ratio problem was introduced to combinatorial optimization in the 1960s by Dantzig, Blattner, and Rao [DBR67] and Lawler [Law67]. The existing algorithms can be classified according to their running time bounds as follows: strongly polynomial algorithms, weakly polynomial algorithms, and pseudopolynomial algorithms. In terms of strongly polynomial algorithms for finding the minimum ratio cycle we are aware of the following two results:

• O(n^3 log n + mn log^2 n) time using Megiddo's second algorithm [Meg83] together with Cole's technique to reduce a factor of log n [Col87],
• O(mn^2) time using Burns' primal-dual algorithm [Bur91].

(In the minimum cycle mean problem we assume the transit time of each edge is 1.)

For the class of weakly polynomial algorithms, the best algorithm is to follow Lawler's binary search approach [Law67, Law76], which solves the problem by performing O(log(nW)) calls to a negative cycle detection algorithm. Here W = O(CT) if the costs are given as integers from 1 to C and the transit times are given as integers from 1 to T. Using an idea for efficient search of rationals [Pap79], a somewhat more refined analysis by Chatterjee et al. [CIP15] reveals that it suffices to call the negative cycle detection algorithm O(log(|a · b|)) times when the value of the minimum ratio cycle is a/b. Since the initial publication of Lawler's idea, the state of the art in negative cycle detection algorithms has become more diverse. Each of the following five algorithms gives the best known running time for some range of parameters (and the running times have to be multiplied by the factor log(nW) or log(|a · b|) to obtain an algorithm for the minimum ratio problem):

• O(mn) time using a variant of the Bellman-Ford algorithm [For56, Bel58, Moo59],
• n^3/2^{Ω(√log n)} time using a recent all-pairs shortest paths (APSP) algorithm by Williams [Wil14, CW16],
• Õ(n^ω W) time using fast matrix multiplication [San05, YZ05], where 2 ≤ ω < 2.373 is the matrix multiplication exponent,
• O(m√n log W) time using Goldberg's scaling algorithm [Gol95],
• Õ(m^{10/7} log W) time using the interior point method based algorithm of Cohen et al. [Coh+17].

The fastest pseudopolynomial algorithm for the minimum ratio cycle problem is due to Hartmann and Orlin [HO93] and runs in time O(mnT). (The more fine-grained analysis of Hartmann and Orlin actually gives a running time of O(m · Σ_{u∈V} max_{e=(u,v)} t(e)).) Other algorithmic approaches, without claiming any running time bounds superior to those reported above, were given by Fox [Fox69], v. Golitschek [Gol82], and Dasdan, Irani, and Gupta [DIG99].

Recently, the minimum ratio problem has been studied specifically for the special case of constant treewidth graphs by Chatterjee, Ibsen-Jensen, and Pavlogiannis [CIP15]. The state of the art for negative cycle detection on constant treewidth graphs is an algorithm by Chaudhuri and Zaroliagis with running time O(n) [CZ00], which by Lawler's binary search approach implies an algorithm for the minimum ratio problem with running time O(n log(nW)). Chatterjee et al. [CIP15] report a running time of O(n log(|a · b|)) based on the more refined binary search mentioned above and additionally give an algorithm that uses O(log n) space (and hence polynomial time).

As a subroutine in our minimum ratio cycle algorithm, we use a new parallel algorithm for negative cycle detection and single-source shortest paths. The parallel SSSP problem has received considerable attention in the literature [Spe97, KS97, Coh97, BTZ98, SS99, Coh00, MS03, Mil+15, Ble+16]. For directed graphs with nonnegative edge weights, parallel implementations of Dijkstra's algorithm achieve, for example, O(m log n) work and Õ(n) depth. For weighted, undirected graphs with positive edge weights, Shi and Spencer [SS99] gave (1) an algorithm with O(n^3 t^{-3} log n log(n t^{-1}) + m log n) work and O(t log n) depth and (2) an algorithm with O((n^3 t^{-3} + m n t^{-1}) log n) work and O(t log n) depth, for any log n ≤ t ≤ n.

2 Preliminaries

In the following, we review some of the tools that we use in designing our algorithm.
2.1 Parametric Search

We first explain the parametric search technique as outlined in [AST94]. Assume we are given a property P of real numbers that is monotone in the following sense: if P(λ) holds, then P(λ′) also holds for all λ′ < λ. Our goal is to find λ∗, the maximum λ such that P(λ) holds. In this paper, for example, we will associate with each λ a weighted graph G_λ, and P is the property that G_λ has no negative cycle. Assume further that we are given an algorithm A for deciding, for a given λ, whether P(λ) holds. If λ were known to only assume integer or rational values, we could solve this problem by performing binary search with O(log W) calls to the decision algorithm, where W is the number of possible values for λ. However, this solution has the drawback of not yielding a strongly polynomial algorithm.

In parametric search we run the decision algorithm 'generically' at the maximum λ∗. As the algorithm does not know λ∗, we need to take care of its control flow ourselves, and any time the algorithm performs a comparison we have to 'manually' evaluate the comparison on behalf of the algorithm. If each comparison takes the form of testing the sign of an associated low-degree polynomial p(λ), this can be done as follows. We first determine all roots of p(λ) and check whether P(λ′) holds for each such root λ′ using another instance of the decision algorithm A. This gives us an interval between successive roots containing λ∗ and we can thus resolve the comparison. With every comparison we make, the interval containing λ∗ shrinks, and at the end of this process we can output a single candidate. If the decision algorithm A has a running time of T, then the overall algorithm for computing λ∗ has a running time of O(T^2).

A more sophisticated use of the technique is possible if, in addition to a sequential decision algorithm A_s, we have an efficient parallel decision algorithm A_p. The parallel algorithm performs its computations simultaneously on P_p processors.
The number of parallel computation steps until the last processor is finished is called the depth D_p of the algorithm, and the number of operations performed by all processors in total is called the work W_p of the algorithm. For parametric search, we actually only need parallelism w.r.t. comparisons involving the input values. We denote by the comparison depth of A_p the number of parallel comparisons (involving input values) until the last processor is finished. (To be precise, we use an abstract model of parallel computation as formalized in [FL16] to avoid distraction by details such as read or write collisions typical to PRAM models.)
We proceed similarly to before: We run A_p 'generically' at the maximum λ∗ and (conceptually) distribute the work among P_p processors. Now in each parallel step, we might have to resolve up to P_p comparisons. We first determine all roots of the polynomials associated to these comparisons. We then perform a binary search among these roots to determine the interval of successive roots containing λ∗, and we repeat this process of resolving comparisons at every parallel step to eventually find the value of λ∗. If the sequential decision algorithm A_s has a running time of T_s and the parallel decision algorithm runs on P_p processors in D_p parallel steps, then the overall algorithm for computing λ∗ has a running time of O(P_p D_p + D_p T_s log P_p). Formally, the guarantees of the technique we just described can be summarized as follows.

Theorem 2.1 ([AST94, Meg83]). Let P be a property of real numbers such that if P(λ) holds, then P(λ′) also holds for all λ′ < λ, and let A_p and A_s be algorithms deciding for a given λ whether P(λ) holds such that
• the control flow of A_p is only governed by comparisons that test the sign of an associated polynomial in λ of constant degree,
• A_p is a parallel algorithm with work W_p and comparison depth D_p, and
• A_s is a sequential algorithm with running time T_s.
Then there is a (sequential) algorithm for finding the maximum value λ such that P(λ) holds with running time O(W_p + D_p T_s log W_p).

Note that A_p and A_s need not necessarily be different algorithms. In most cases, however, the fastest sequential algorithm might be the better choice for minimizing running time.

2.2 Minimum Ratio Cycles

We consider a directed graph G = (V, E, c, t), in which every edge e = (u, v) has a cost c(e) and a transit time t(e).
We want to find the cycle C that minimizes the cost-to-time ratio Σ_{e∈C} c(e) / Σ_{e∈C} t(e). For any real λ define the graph G_λ = (V, E, w_λ) as the modification of G with weight w_λ(e) = c(e) − λ·t(e) for every edge e ∈ E. The following structural lemma is the foundation of many algorithmic approaches to the problem.

Lemma 2.2 ([DBR67, Law76]). Let λ∗ be the value of the minimum ratio cycle of G.
• For λ > λ∗, the value of the minimum weight cycle in G_λ is < 0.
• The value of the minimum weight cycle in G_{λ∗} is 0. Each minimum weight cycle in G_{λ∗} is a minimum ratio cycle in G and vice versa.
• For λ < λ∗, the value of the minimum weight cycle in G_λ is > 0.

The obvious algorithmic idea now is to find the right value of λ with a suitable search strategy and reduce the problem to a series of negative cycle detection instances.

2.3 Characterization of Negative Cycles

Definition 2.3. A potential function p : V → R assigns a value to each vertex of a weighted directed graph G = (V, E, w). We call a potential function p valid if for every edge e = (u, v) ∈ E, the condition p(u) + w(e) ≥ p(v) holds.

The following two lemmas outline an approach for negative cycle detection.
Lemma 2.4 ([EK72]). A weighted directed graph contains a negative cycle if and only if it has no valid potential function.
Lemma 2.5 ([Joh77]). Let G = (V, E, w) be a weighted directed graph and let G′ = (V′, E′, w′) be the supergraph of G consisting of the vertices V′ = V ∪ {s′} (i.e., with an additional super-source s′), the edges E′ = E ∪ ({s′} × V), and the weight function w′ given by w′(s′, v) = 0 for every vertex v ∈ V and w′(u, v) = w(u, v) for all pairs of vertices u, v ∈ V. If G does not contain a negative cycle, then the potential function p defined by p(v) = d_{G′}(s′, v) for every vertex v ∈ V is valid for G.

Thus, an obvious strategy for negative cycle detection is to design a single-source shortest paths algorithm that is correct whenever the graph contains no negative cycle. If the graph contains no negative cycle, then the distances computed by the algorithm can be verified to be a valid potential. If the graph does contain a negative cycle, then the distances computed by the algorithm will not be a valid potential (because a valid potential does not exist) and we can verify that the potential is not valid.
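A minimal sketch of this strategy, under our own naming: the `sssp` argument may be any shortest-path routine whose output is only trusted after the potential check, so a faster routine (such as the parallel algorithm of Theorem 3.1) can be plugged in without having to trust its behavior on graphs that do contain a negative cycle.

```python
def bellman_ford_no_check(n, edges, source):
    """Plain Bellman-Ford without cycle detection; its output is only
    guaranteed to be correct if the graph has no negative cycle."""
    INF = float("inf")
    dist = [INF] * n
    dist[source] = 0.0
    for _ in range(n - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    return dist

def contains_negative_cycle(n, edges, sssp=bellman_ford_no_check):
    """Negative cycle detection via Lemmas 2.4 and 2.5: build G' with a
    super-source s', take p(v) = d_{G'}(s', v) as a candidate potential,
    and verify it on every edge of G."""
    s = n  # the super-source s'
    edges_prime = edges + [(s, v, 0.0) for v in range(n)]
    p = sssp(n + 1, edges_prime, s)
    # p is a valid potential iff p(u) + w(e) >= p(v) for every e = (u, v);
    # if G has a negative cycle, NO potential is valid, so the check fails.
    return any(p[u] + w < p[v] for u, v, w in edges)
```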
2.4 Building Blocks for Parallel Shortest Paths

In our algorithm we use two building blocks for computing shortest paths in the presence of negative edge weights in parallel. The first such building block was also used by Megiddo [Meg83].

Observation 2.6. By repeated squaring of the min-plus matrix product, all-pairs shortest paths in a directed graph with real edge weights can be computed using work O(n^3 log n) and depth O(log^2 n).

The second building block is a subroutine for computing the following restricted version of shortest paths.
Definition 2.7. The shortest h-hop path from a vertex s to a vertex t is the path of minimum weight among all paths from s to t with at most h edges.

Note that a shortest h-hop path from s to t does not exist if all paths from s to t use more than h edges. Furthermore, if there is a shortest path from s to t with at most h edges, then the shortest h-hop path from s to t is a shortest path as well. Shortest h-hop paths can be computed by running h iterations of the Bellman-Ford algorithm [For56, Bel58, Moo59]. (The first explicit use of the Bellman-Ford algorithm to compute shortest h-hop paths that we are aware of is in Thorup's dynamic APSP algorithm [Tho05].) Similar to shortest paths, shortest h-hop paths need not be unique. We can enforce uniqueness by putting some arbitrary but fixed order on the vertices of the graph and sorting paths according to the induced lexicographic order on the sequence of vertices of the paths. Note that the Bellman-Ford algorithm can easily be adapted to optimize lexicographically as its second objective.

Observation 2.8. By performing h iterations of the Bellman-Ford algorithm, the lexicographically smallest shortest h-hop path from a designated source vertex s to each other vertex in a directed graph with real edge weights can be computed using work O(mh) and depth O(h).

We denote by π(s, t) the lexicographically smallest shortest path from s to t and by π_h(s, t) the lexicographically smallest shortest h-hop path from s to t. We denote by V(π_h(s, t)) and E(π_h(s, t)) the set of nodes and edges of π_h(s, t), respectively.

2.5 Hitting Sets

Definition 2.9. Given a collection S of sets over a universe U, a hitting set is a set T ⊆ U that has non-empty intersection with every set of S (i.e., S ∩ T ≠ ∅ for every S ∈ S).

Computing a hitting set of minimum size is an NP-hard problem. For our purpose, however, rough approximations are good enough. The first method to get a sufficiently small hitting set uses a simple randomized sampling idea and was introduced to the design of graph algorithms by Ullman and Yannakakis [UY91]. We use the following formulation.
Lemma 2.10. Let c ≥ 1, let U be a set of size s, and let S = {S_1, S_2, . . . , S_k} be a collection of sets over the universe U, each of size at least q. Let T be a subset of U that was obtained by choosing each element of U independently with probability p = min(x/q, 1), where x = c ln(ks) + 1. Then, with high probability (whp), i.e., with probability at least 1 − 1/s^c, the following two properties hold:
1. For every 1 ≤ i ≤ k, the set S_i contains an element of T, i.e., S_i ∩ T ≠ ∅.
2. |T| ≤ xs/q = O(c · s log(ks)/q).

The second method is to use a heuristic to compute an approximately minimum hitting set. In the sequential model, a simple greedy algorithm computes an O(log n)-approximation [Joh74, ADP80]. We use the following formulation.

Lemma 2.11.
Let U be a set of size s and let S = {S_1, S_2, . . . , S_k} be a collection of sets over the universe U, each of size at least q. Consider the simple greedy algorithm that picks an element u in U that is contained in the largest number of sets in S and then removes all sets containing u from S, repeating this step until S = ∅. Then the set T of elements picked by this algorithm satisfies:
1. For every 1 ≤ i ≤ k, the set S_i contains an element of T, i.e., S_i ∩ T ≠ ∅.
2. |T| ≤ O(s log(k)/q).

Proof. We follow the standard proof of the approximation ratio O(log n) for the greedy set cover heuristic. The first statement is immediate, since we only remove sets when they are hit by the picked element. Since each of the k sets contains at least q elements, on average each element in U is contained in at least kq/s sets. Thus, the element u picked by the greedy algorithm is contained in at least kq/s sets. The remaining number of sets is thus at most k − kq/s = k(1 − q/s). Note that the remaining sets still have size at least q, since they do not contain the picked element u. Inductively, we thus obtain that after i iterations the number of remaining sets is at most k(1 − q/s)^i, so after O(log(k) · s/q) iterations the number of remaining sets is less than 1 and the process stops. □

Theorem 2.12 ([BRS94]). Let S = {S_1, S_2, . . . , S_k} be a collection of sets over the universe U, let n = |U| and m = Σ_{1≤i≤k} |S_i|. For 0 < ε < 1, there is an algorithm with work O((m + n) ε^{-1} log n log m log(nm)) and depth O(ε^{-1} log n log m log(nm)) that produces a hitting set of S of size at most (1 + ε)(1 + ln ∆) · OPT, where ∆ is the maximum number of occurrences of any element of U in S and OPT is the size of a minimum hitting set.

3 Randomized Algorithm

3.1 Parallel SSSP

In the following we design a parallel SSSP algorithm that can be used to check for negative cycles. Formally, we will in this subsection prove the following statement.
Theorem 3.1. There is an algorithm that, given a weighted directed graph G = (V, E, w) containing no negative cycles, computes the shortest paths from a designated source vertex s to all other vertices spending O(mn log n + n^3 h^{-3} log^4 n) work with O(h + log^2 n) depth for any 1 ≤ h ≤ n. The algorithm is correct with high probability and all its comparisons are performed on sums of edge weights on both sides.

The algorithm proceeds in the following steps:
1. Let C ⊆ V be a set containing each vertex v independently with probability p = min(c h^{-1} ln n, 1) for a sufficiently large constant c.
2. If |C| > c n h^{-1} ln n, then terminate.
3. For every vertex x ∈ C ∪ {s} and every vertex v ∈ V, compute the shortest h-hop path from x to v in G and its weight d^h_G(x, v).
4. Construct the graph H = (C ∪ {s}, (C ∪ {s})^2, w_H) whose set of vertices is C ∪ {s}, whose set of edges is (C ∪ {s})^2, and for every pair of vertices x, y ∈ C ∪ {s} the weight of the edge (x, y) is w_H(x, y) = d^h_G(x, y).
5. For every vertex x ∈ C, compute the shortest path from s to x in H and its weight d_H(s, x).
6. For every vertex t ∈ V, set δ(t) = min_{x∈C∪{s}} (d_H(s, x) + d^h_G(x, t)).

(Footnote on [BRS94]: Berger et al. actually give an approximation algorithm for the following slightly more general problem: Given a hypergraph H = (V, E) and a cost function c : V → R on the vertices, find a minimum cost subset R ⊆ V that covers H, i.e., an R that minimizes c(R) = Σ_{v∈R} c(v) subject to the constraint e ∩ R ≠ ∅ for all e ∈ E.)

3.1.1 Correctness

In order to prove the correctness of the algorithm, we first observe that, as a direct consequence of Lemma 2.10, the randomly selected vertices in C with high probability hit all lexicographically smallest shortest ⌊h/2⌋-hop paths of the graph.

Observation 3.2.
Consider the collection of sets

S = { V(π_{⌊h/2⌋}(u, v)) | u, v ∈ V with d^{⌊h/2⌋}_G(u, v) < ∞ and |E(π_{⌊h/2⌋}(u, v))| = ⌊h/2⌋ }

containing the vertices of the lexicographically smallest shortest ⌊h/2⌋-hop paths with exactly ⌊h/2⌋ edges between all pairs of vertices. Then, with high probability, C is a hitting set of S of size at most c n h^{-1} ln n.

Lemma 3.3. If G contains no negative cycle, then δ(t) = d_G(s, t) for every vertex t ∈ V with high probability.

Proof. First note that the algorithm incorrectly terminates in Step 2 only with small probability. We now need to show that, for every vertex t ∈ V, δ(t) := min_{x∈C∪{s}} (d_H(s, x) + d^h_G(x, t)) = d_G(s, t). First observe that every edge in H corresponds to a path in G (of the same weight). Thus, the value δ(t) corresponds to some path in G from s to t (of the same weight), which implies that d_G(s, t) ≤ δ(t) (as no path can have weight less than the distance).

Now let π(s, t) be the lexicographically smallest shortest path from s to t in G. Subdivide π into consecutive subpaths π_1, . . . , π_k such that π_i has exactly ⌊h/2⌋ edges for 1 ≤ i ≤ k − 1, and π_k has at most ⌊h/2⌋ edges. Note that if π itself has at most ⌊h/2⌋ edges, then k = 1. Since every subpath of a lexicographically smallest shortest path is also a lexicographically smallest shortest path, the paths π_1, . . . , π_k are lexicographically smallest shortest paths as well. As the subpaths π_1, . . . , π_{k−1} consist of exactly ⌊h/2⌋ edges, each of them is contained in the collection of sets S of Observation 3.2. Therefore, each subpath π_i, for 1 ≤ i ≤ k − 1, contains a vertex x_i ∈ C with high probability.

Set x_0 = s and x_k = t, and observe that for every 0 ≤ i ≤ k − 1, the subpath of π(s, t) from x_i to x_{i+1} is a shortest path from x_i to x_{i+1} with at most h edges and thus d^h_G(x_i, x_{i+1}) = d_G(x_i, x_{i+1}). We now get the following chain of inequalities:

d_G(s, t) = Σ_{0≤i≤k−1} d_G(x_i, x_{i+1})
         = Σ_{0≤i≤k−1} d^h_G(x_i, x_{i+1})
         = (Σ_{0≤i≤k−2} w_H(x_i, x_{i+1})) + d^h_G(x_{k−1}, t)
         ≥ d_H(x_0, x_{k−1}) + d^h_G(x_{k−1}, t)
         = d_H(s, x_{k−1}) + d^h_G(x_{k−1}, t)
         ≥ min_{x∈C∪{s}} (d_H(s, x) + d^h_G(x, t)) = δ(t). □

Note that we have formally argued only that the algorithm correctly computes the distances from s. It can easily be checked that the shortest paths can be obtained by replacing the edges of H with their corresponding paths in G.

3.1.2 Running Time

Lemma 3.4. The algorithm above can be implemented with O(mn log n + n^3 h^{-3} log^4 n) work and O(h + log^2 n) depth such that all its comparisons are performed on sums of edge weights on both sides.

Proof. Clearly, in Steps 1–2, the algorithm spends O(m + n) work with O(1) depth. Step 3 can be carried out by running h iterations of Bellman-Ford for every vertex x ∈ C ∪ {s} in parallel (see Observation 2.8), thus spending O(|C| · mh) work with O(h) depth. Step 4 can be carried out by spending O(|C|^2) work with O(1) depth. Step 5 can be carried out by running the min-plus matrix multiplication based APSP algorithm (see Observation 2.6), thus spending O(|C|^3 log n) work with O(log^2 n) depth. The naive implementation of Step 6 spends O(n|C|) work with O(|C|) depth. Using a bottom-up 'tournament' approach, where in each round we pair up all values and let the minimum value of each pair proceed to the next round, this can be improved to work O(n|C|) and depth O(log n).

It follows that by carrying out the steps of the algorithm sequentially as explained above, the overall work is O(|C| · mh + |C|^3 log n) and the depth is O(h + log^2 n). As the algorithm ensures that |C| ≤ c n h^{-1} ln n for some constant c, the work is O(mn log n + n^3 h^{-3} log^4 n) and the depth is O(h + log^2 n). □

To check whether a weighted graph G = (V, E, w) contains a negative cycle, we first construct the graph G′ (with an additional super-source s′) as defined in Lemma 2.5. We then run the SSSP algorithm of Theorem 3.1 from s′ in G′ and set p(t) = d_{G′}(s′, t) for every vertex t ∈ V. We then check whether the function p defined in this way is a valid potential function for G by testing for every edge e = (u, v) (in parallel) whether p(u) + w(u, v) ≥ p(v). If this is the case, then we output that G contains no negative cycle; otherwise we output that G contains a negative cycle.

Corollary 3.5.
There is a randomized algorithm that checks whether a given weighted directed graph contains a negative cycle with O(mn log n + n^3 h^{-3} log^4 n) work and O(h + log^2 n) depth for any 1 ≤ h ≤ n. The algorithm is correct with high probability and all its comparisons are performed on sums of edge weights on both sides.

Proof. Constructing the graph G′ and checking whether p is a valid potential can both be carried out with O(m + n) work and O(1) depth. Thus, the overall work and depth bounds are asymptotically equal to those of the SSSP algorithm of Theorem 3.1.

If G contains no negative cycle, then the SSSP algorithm correctly computes the distances from s′ in G′. Thus, the potential p is valid by Lemma 2.5 and our algorithm correctly outputs that there is no negative cycle. If G contains a negative cycle, then it does not have any valid potential by Lemma 2.4. Thus, the potential p defined by the algorithm cannot be valid and the algorithm correctly outputs that G contains a negative cycle. □

Using the negative cycle detection algorithm as a subroutine, we obtain an algorithm for computing a minimum ratio cycle in time Õ(n^{3/2} m^{3/4}).

Theorem 3.6. There is a randomized one-sided-error Monte Carlo algorithm for computing a minimum ratio cycle with running time O(n^{3/2} m^{3/4} log^2 n).

Proof. By Lemma 2.2 we can compute the value of the minimum ratio cycle by finding the largest value of λ such that G_λ contains no negative-weight cycle. We want to apply Theorem 2.1 to find this maximum λ∗ by parametric search. As the sequential negative cycle detection algorithm A_s we use Orlin's minimum weight cycle algorithm [Orl17] with running time T(n, m) = O(mn). The parallel negative cycle detection algorithm A_p of Corollary 3.5 has work W(n, m) = O(mn log n + n^3 h^{-3} log^4 n) and depth D(n, m) = O(h + log^2 n), for any choice of 1 ≤ h ≤ n. Every comparison the latter algorithm performs compares sums of edge weights of the graph. Since in G_λ the edge weights are linear functions in λ, the control flow only depends on testing the sign of degree-1 polynomials in λ. Thus, Theorem 2.1 is applicable and we arrive at a sequential algorithm for finding the value of the minimum ratio cycle with running time O(mn log n (h + log^2 n) + n^3 h^{-3} log^4 n). Finally, to output the minimum ratio cycle and not just its value, we run Orlin's algorithm for finding the minimum weight cycle in G_{λ∗}, which takes time O(mn). By setting h = n^{1/2} m^{-1/4} log n the overall running time becomes O(n^{3/2} m^{3/4} log^2 n). □

4 Deterministic Algorithm

We now present a deterministic variant of our minimum ratio cycle algorithm, with the same running time as the randomized algorithm up to logarithmic factors.
We can derandomize our SSSP algorithm by combining a preprocessing step with the parallelhitting set approximation algorithm of [BRS94]. Formally, we will prove the following statement.
Theorem 4.1.
There is a deterministic algorithm that, given a weighted directed graph containing no negative cycles, computes the shortest paths from a designated source vertex s to all other vertices spending O(mn log^2 n + n^3 h^{-3} log^7 n + n^2 h log^3 n) work with O(h + log^3 n) depth for any 1 ≤ h ≤ n.

From this, using Lemmas 2.4 and 2.5 analogously to the proof of Corollary 3.5, we get the following corollary for negative cycle detection.
Corollary 4.2.
There is a deterministic algorithm that checks whether a given weighted directed graph contains a negative cycle with O(mn log^2 n + n^3 h^{-3} log^7 n + n^2 h log^3 n) work and O(h + log^3 n) depth for any 1 ≤ h ≤ n.

(Formally, Theorem 2.1 only applies to deterministic algorithms. However, only step 1 of our parallel algorithm is randomized, and this step does not depend on λ. All remaining steps are deterministic. We can thus first perform steps 1 and 2, and invoke Theorem 2.1 only on the remaining algorithm. The output guarantee then holds with high probability.)

Our deterministic SSSP algorithm does the following:
1. For all pairs of vertices u, v ∈ V, compute the shortest ⌊h/2⌋-hop path π_{⌊h/2⌋}(u, v) from u to v in G.
2. Compute an O(log n)-approximate hitting set C of the system of sets S = { V(π_{⌊h/2⌋}(u, v)) | u, v ∈ V with d^{⌊h/2⌋}_G(u, v) < ∞ and |E(π_{⌊h/2⌋}(u, v))| = ⌊h/2⌋ }.
3. Proceed with steps 3 to 6 of the algorithm in Section 3.1.

Correctness is immediate: in the proof of Lemma 3.3 we relied on the fact that C is a hitting set of S. In the above algorithm, this property is guaranteed directly.

Step 1 can be carried out by running h iterations of the Bellman-Ford algorithm for every vertex v ∈ V. By Observation 2.8 this uses O(mnh) work and O(h) depth. We carry out Step 2 by running the algorithm of Theorem 2.12 to compute an O(log n)-approximate hitting set of S with work O(n^2 h log^3 n) and depth O(log^3 n). Lemma 2.10 gives a randomized process that computes a hitting set of S of expected size O(n h^{-1} log n). By the probabilistic method, this implies that there exists a hitting set of size O(n h^{-1} log n). We can therefore use the algorithm of Theorem 2.12 to compute a hitting set C of size O(n h^{-1} log^2 n). Carrying out the remaining steps with a hitting set C of size O(n h^{-1} log^2 n) uses work O(mh|C| + |C|^3 log n) = O(mn log^2 n + n^3 h^{-3} log^7 n) and depth O(h + log^2 n). Thus, our overall SSSP algorithm has work O(mn log^2 n + n^3 h^{-3} log^7 n + n^2 h log^3 n) and depth O(h + log^3 n).

We again obtain a minimum ratio cycle algorithm by applying parametric search (Theorem 2.1). We obtain the same running time bound as for the randomized algorithm.
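To illustrate, the deterministic hub selection (steps 1 and 2 above) might be sketched as follows. This is a simplified sequential sketch under our own naming: it uses the greedy heuristic of Lemma 2.11 rather than the algorithm of Theorem 2.12, and it breaks ties between equally short paths arbitrarily instead of lexicographically, which is fine for the hitting set itself.

```python
def hop_limited_paths(n, edges, source, h):
    """Shortest <=h-hop paths from source via h rounds of Bellman-Ford;
    returns {v: (weight, path_as_vertex_tuple)}."""
    INF = float("inf")
    best = {v: (INF, ()) for v in range(n)}
    best[source] = (0.0, (source,))
    for _ in range(h):
        new = dict(best)
        for u, v, w in edges:
            cand = best[u][0] + w
            if cand < new[v][0]:
                new[v] = (cand, best[u][1] + (v,))
        best = new
    return best

def greedy_hitting_set(sets):
    """Greedy heuristic of Lemma 2.11: repeatedly pick the element that
    is contained in the largest number of not-yet-hit sets."""
    remaining = [set(S) for S in sets if S]
    hitting = set()
    while remaining:
        count = {}
        for S in remaining:
            for u in S:
                count[u] = count.get(u, 0) + 1
        u = max(count, key=count.get)  # most frequent element
        hitting.add(u)
        remaining = [S for S in remaining if u not in S]
    return hitting

def deterministic_hubs(n, edges, h):
    """Steps 1-2: gather the vertex sets of shortest floor(h/2)-hop paths
    that use exactly floor(h/2) edges, then compute a greedy hitting set."""
    half = h // 2
    systems = []
    for u in range(n):
        best = hop_limited_paths(n, edges, u, half)
        for v, (w, path) in best.items():
            if w < float("inf") and len(path) == half + 1:
                systems.append(set(path))  # path has exactly `half` edges
    return greedy_hitting_set(systems)
```

On the directed path 0 → 1 → 2 → 3 with h = 4, for instance, the only shortest paths with exactly ⌊h/2⌋ = 2 edges are 0→1→2 and 1→2→3, and the greedy heuristic hits both with the single vertex 1.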
Theorem 4.3.
There is a deterministic algorithm for computing a minimum ratio cycle with running time O(n^{3/2} m^{3/4} log^2 n).

Proof sketch. The proof is analogous to the proof of Theorem 3.6, with the only exception that we use the deterministic parallel negative cycle detection algorithm of Corollary 4.2. However, we do not necessarily need to run the algorithm of Theorem 2.12 to compute an approximate hitting set. Instead we can also run the greedy set cover heuristic (Lemma 2.11) for this purpose. The reason is that at this stage the greedy heuristic does not need to perform any comparisons involving the edge weights of the input graph, which are the only operations that are costly in the parametric search technique. (In case there are multiple shortest ⌊h/2⌋-hop paths from u to v, any tie-breaking is fine for the algorithm and its analysis.) This means that finding an approximate set cover C of size O(n h^{-1} log n) can be implemented with O(∑_{S ∈ S} |S|) = O(n^2 h) work and O(1) comparison depth. Thus, we use a parallel negative cycle detection algorithm A_p which has work W(n, m) = O(mh|C| + |C|^3 log n + n^2 h) = O(mn log n + n^3 h^{-3} log^4 n + n^2 h) and depth D(n, m) = O(h + log^2 n), for any choice of 1 ≤ h ≤ n. We thus obtain a sequential minimum ratio cycle algorithm with running time O(mn log n + n^3 h^{-3} log^4 n + n^2 h + mn log n (h + log^2 n)), for any choice of 1 ≤ h ≤ n. Note that the summands mn log n and n^2 h are both dominated by the last summand mn log n (h + log^2 n). Setting h = n^{1/2} m^{-1/4} log n to optimize the remaining summands, the running time becomes O(n^{3/2} m^{3/4} log^2 n). □

In the following we demonstrate how to obtain a nearly-linear time algorithm (in the strongly polynomial sense) for graphs of constant treewidth. We can use the following results of Chaudhuri and Zaroliagis [CZ00], who studied the shortest paths problem in graphs of constant treewidth.

Theorem 5.1 ([CZ00]).
There is a deterministic algorithm that, given a weighted directed graph containing no negative cycles, computes a data structure that, after O(n) preprocessing time, can answer distance queries for any pair of vertices in time O(α(n)), where α(⋅) is the inverse Ackermann function. It can also report a corresponding shortest path in time O(ℓ α(n)), where ℓ is the number of edges of the reported path.

Theorem 5.2 ([CZ98]). There is a deterministic negative cycle detection algorithm for weighted directed graphs of constant treewidth with O(n) work and O(log^2 n) depth.

We now apply the reduction of Theorem 2.1 to the algorithm of Theorem 5.2 to find λ*, the value of the minimum ratio cycle, in time O(n log^3 n) (using T_s(n) = W_p(n) = O(n) and D_p(n) = O(log^2 n)). We then use the algorithm of Theorem 5.1 to find a minimum weight cycle in G_{λ*} in time O(n α(n)): Each edge e = (u, v) together with the shortest path from v to u (if it exists) defines a cycle, and we need to find the one of minimum weight by asking the corresponding distance queries. For the edge e = (u, v) defining the minimum weight cycle we then query for the corresponding shortest path from v to u, which takes time O(n α(n)) as a graph of constant treewidth has O(n) edges. We thus arrive at the following guarantees of the overall algorithm.

Corollary 5.3.
There is a deterministic algorithm that computes the minimum ratio cycle in a directed graph of constant treewidth in time O(n log^3 n).

(Note: The first result of Chaudhuri and Zaroliagis [CZ00] has recently been complemented with a space-time trade-off by Chatterjee, Ibsen-Jensen, and Pavlogiannis [CIP16], at the cost of a polynomial preprocessing time that is too large for our purposes.)

None of our previous algorithms makes use of fast matrix multiplication. We now show that, if we allow fast matrix multiplication (despite the hidden constant factors being galactic), the n^3 / 2^{Ω(√log n)} running time of Williams's recent APSP algorithm [Wil14] (with a deterministic version by Chan and Williams [CW16]) can be salvaged for the minimum ratio problem. In particular, we explain why Williams's algorithm for min-plus matrix multiplication parallelizes well enough.

Theorem 6.1.
There is a deterministic algorithm that checks whether a given weighted directed graph contains a negative cycle with n^3 / 2^{Ω(√log n)} work and O(log^2 n) comparison depth.

Proof sketch. First, note that the value of the minimum weight cycle in a directed graph can be found by computing min_{e = (u, v) ∈ E} w(u, v) + d_G(v, u), i.e., the cycle of minimum weight among all cycles consisting of first an edge e = (u, v) and then a shortest path from v back to u is the global minimum weight cycle. If all pairwise distances are already given, then computing the value of the minimum weight cycle (and thus also checking for a negative cycle) can therefore be done with O(n^2) work and O(log n) depth (again by a ‘tournament’ approach).

The APSP problem can in turn be reduced to min-plus matrix multiplication [AHU76]. Let M be the adjacency matrix of the graph where additionally all diagonal entries are set to 0. Recall that the all-pairs distance matrix is then given by M^{n-1}, where matrix multiplication is performed in the min-plus semiring. By repeated squaring, this matrix can be computed with O(log n) min-plus matrix multiplications.
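The reduction can be illustrated with a naive Python sketch (our own illustration: `min_plus` is the trivial cubic product standing in for Williams's fast routine, and the function names are ours). A negative cycle exists exactly when the computed minimum cycle value is negative.

```python
from math import inf

def min_plus(A, B):
    """Naive (min,+) product of two n x n matrices; this is the operation
    that Williams's algorithm accelerates."""
    n = len(A)
    return [[min(A[i][k] + B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def apsp_by_squaring(M):
    """All-pairs distances = M^(n-1) in the (min,+) semiring, computed with
    O(log n) squarings. M must have 0 on the diagonal."""
    n = len(M)
    D, k = M, 1
    while k < n - 1:
        D = min_plus(D, D)
        k *= 2
    return D

def min_cycle_value(edges, D):
    """Value of the minimum weight cycle: min over edges (u, v) of
    w(u, v) + d(v, u). Negative result => the graph has a negative cycle."""
    return min((w + D[v][u] for (u, v, w) in edges), default=inf)
```

For example, on a three-vertex graph with edges (0,1,2), (1,2,3), (2,0,−4) and (1,0,1), the minimum cycle is 0 → 1 → 2 → 0 with value 1, so there is no negative cycle.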
Williams’s principal approach for computing the min-plus product C of two matrices A and B is as follows.

(A1) Split the matrices A and B into rectangular submatrices of dimensions n × d and d × n, respectively, where d = 2^{Θ(√log n)}, as follows: For every 1 ≤ k ≤ ⌈n/d⌉ − 1, A_k contains the k-th group of d consecutive columns of A and B_k contains the k-th group of d consecutive rows of B; for k = ⌈n/d⌉, A_k contains the remaining columns of A and B_k contains the remaining rows of B.

(A2) For each 1 ≤ k ≤ ⌈n/d⌉, compute C_k, the min-plus product of A_k and B_k (using the algorithm described below).

(A3) Determine the min-plus product of A and B by taking the entrywise minimum C := min_{1 ≤ k ≤ ⌈n/d⌉} C_k.

To carry out step (A2), Williams first uses a preprocessing stage applied to each pair of matrices A_k and B_k (for 1 ≤ k ≤ ⌈n/d⌉) individually. It consists of the following three steps:

(B1) Compute matrices A*_k and B*_k of dimensions n × d and d × n, respectively, as follows: Set A*_k[i, p] := A_k[i, p] · (n + 1) + p, for every 1 ≤ i ≤ n and 1 ≤ p ≤ d, and set B*_k[q, j] := B_k[q, j] · (n + 1) + q, for every 1 ≤ q ≤ d and 1 ≤ j ≤ n.

(B2) Compute matrices A′_k and B′_k of dimensions n × d^2 and d^2 × n, respectively, as follows: Set A′_k[i, (p, q)] := A*_k[i, p] − A*_k[i, q], for every 1 ≤ i ≤ n and 1 ≤ p, q ≤ d, and set B′_k[(p, q), j] := B*_k[q, j] − B*_k[p, j], for every 1 ≤ j ≤ n and 1 ≤ p, q ≤ d.

(B3) For every pair p, q (1 ≤ p, q ≤ d), compute and sort the set S^{p,q}_k := { A′_k[i, (p, q)] | 1 ≤ i ≤ n } ∪ { B′_k[(p, q), j] | 1 ≤ j ≤ n }, where ties are broken such that entries of A′_k have precedence over entries of B′_k.
Then compute matrices A′′_k and B′′_k of dimensions n × d^2 and d^2 × n, respectively, as follows: Set A′′_k[i, (p, q)] to the rank of the value A′_k[i, (p, q)] in the sorted order of S^{p,q}_k, for every 1 ≤ i ≤ n and 1 ≤ p, q ≤ d, and set B′′_k[(p, q), j] to the rank of the value B′_k[(p, q), j] in the sorted order of S^{p,q}_k, for every 1 ≤ j ≤ n and 1 ≤ p, q ≤ d.

This type of preprocessing is also known as Fredman’s trick [Fre76]. As Williams shows, the problem of computing C_k now amounts to finding, for every 1 ≤ i ≤ n and 1 ≤ j ≤ n, the unique p* such that A′′_k[i, (p*, q)] ≤ B′′_k[(p*, q), j] for all 1 ≤ q ≤ d. Using tools from circuit complexity and fast rectangular matrix multiplication, this can be done in time Õ(n^2), either with a randomized algorithm [Wil14], or, with slightly worse constants in the choice of d (and thus the exponent of the overall algorithm), with a deterministic algorithm [CW16]. The crucial observation for our application is that after the preprocessing stage no comparisons involving the input values are performed anymore, since all computations are performed with regard to the matrices A′′_k and B′′_k, which only contain the ranks (i.e., integer values from 1 to 2n).

The claimed work bound follows from Williams’s running time analysis. We can bound the comparison depth as follows. First note that apart from steps (A3) and (B3) we only incur O(log n) overhead in the depth. Step (A3) can be implemented with O(log n) depth by using a tournament approach for finding the respective minima. For step (B3) we can use a parallel version of merge sort on 2n items that has work O(n log n) and depth O(log n) [Col88]. □

We now apply the reduction of Theorem 2.1 to the algorithm of Theorem 6.1 to find λ*, the value of the minimum ratio cycle, in time n^3 / 2^{Ω(√log n)} (using T_s(n) = W_p(n) = n^3 / 2^{Ω(√log n)} and D_p(n) = O(log^2 n)).
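The inequality behind Fredman’s trick, A[i][p] + B[p][j] ≤ A[i][q] + B[q][j] iff A′[i, (p, q)] ≤ B′[(p, q), j], can be checked on a toy instance. The sketch below is our own simplification: it skips the (n + 1)-scaling of step (B1) and instead breaks ties by giving entries of A′ precedence, and it stores ranks in dictionaries rather than packed matrices. It recovers argmin_p A[i][p] + B[p][j] from the ranks alone, with no further comparisons of the original values.

```python
def fredman_preprocess(A, B):
    """Build rank tables in the spirit of steps (B2)-(B3): after this,
    deciding A[i][p] + B[p][j] <= A[i][q] + B[q][j] only needs the integer
    ranks, not the original (weight-dependent) values."""
    n, d = len(A), len(A[0])
    Ar, Br = {}, {}
    for p in range(d):
        for q in range(d):
            a_vals = [(A[i][p] - A[i][q], 0, i) for i in range(n)]
            b_vals = [(B[q][j] - B[p][j], 1, j) for j in range(n)]
            # entries of A' precede entries of B' on ties (the 0/1 flag)
            for rank, (_, kind, idx) in enumerate(sorted(a_vals + b_vals)):
                (Ar if kind == 0 else Br)[(idx, p, q)] = rank
    return Ar, Br

def min_plus_argmin(A, B, Ar, Br, i, j):
    """Recover an argmin_p A[i][p] + B[p][j] using only rank comparisons:
    p is optimal iff Ar[i,(p,q)] <= Br[(p,q),j] for all q."""
    d = len(A[0])
    for p in range(d):
        if all(Ar[(i, p, q)] <= Br[(j, p, q)] for q in range(d)):
            return p
    return None
```

With ties broken as above, an optimal p always passes the rank test and every suboptimal p fails it, so the routine returns the smallest-index minimizer.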
We then use Williams’s APSP algorithm to find a minimum weight cycle in G_{λ*} in time n^3 / 2^{Ω(√log n)}. We thus arrive at the following guarantees of the overall algorithm.

Corollary 6.2.
There is a deterministic algorithm for computing a minimum ratio cycle with running time n^3 / 2^{Ω(√log n)}.

We have presented a faster strongly polynomial algorithm for finding a cycle of minimum cost-to-time ratio, a problem which has a long history in combinatorial optimization and recently became relevant in the context of quantitative verification. Our approach combines parametric search with new parallelizable single-source shortest path algorithms and also yields small improvements for graphs of constant treewidth and in the dense regime. The main open problem is to push the running time down to Õ(mn), nearly matching the strongly polynomial upper bound for the less general problem of finding a minimum mean cycle.

References

[ADP80] Giorgio Ausiello, Alessandro D’Atri, and Marco Protasi. “Structure Preserving Reductions among Convex Optimization Problems”. In:
Journal of Computer and System Sciences
The Design and Analysis of Computer Algorithms. Addison-Wesley, 1976 (cit. on p. 14).

[AST94] Pankaj K. Agarwal, Micha Sharir, and Sivan Toledo. “Applications of Parametric Searching in Geometric Optimization”. In:
Journal of Algorithms
Quarterly of Applied Mathematics

[Ble+16] Guy E. Blelloch, Yan Gu, Yihan Sun, and Kanat Tangwongsan. “Parallel Shortest Paths Using Radius Stepping”. In:
Symposium on Parallelism in Algorithms and Architectures (SPAA). 2016, pp. 443–454 (cit. on p. 3).

[Blo+09] Roderick Bloem, Krishnendu Chatterjee, Thomas A. Henzinger, and Barbara Jobstmann. “Better Quality in Synthesis through Quantitative Objectives”. In:
International Conference on Computer-Aided Verification (CAV). 2009, pp. 140–156 (cit. on p. 1).

[Blo+14] Roderick Bloem, Krishnendu Chatterjee, Karin Greimel, Thomas A. Henzinger, Georg Hofferek, Barbara Jobstmann, Bettina Könighofer, and Robert Könighofer. “Synthesizing robust systems”. In:
Acta Informatica
Journal of Computer and System Sciences
Journal of Parallel and Distributed Computing
ACM Transactions on Computational Logic

[Cer+11] Pavol Cerný, Krishnendu Chatterjee, Thomas A. Henzinger, Arjun Radhakrishna, and Rohit Singh. “Quantitative Synthesis for Concurrent Programs”. In:
International Conference on Computer-Aided Verification (CAV). 2011, pp. 243–259 (cit. on p. 1).

[Cha+03] Arindam Chakrabarti, Luca de Alfaro, Thomas A. Henzinger, and Mariëlle Stoelinga. “Resource Interfaces”. In:
International Conference on Embedded Software (EMSOFT). 2003, pp. 117–133 (cit. on p. 1).

[CIP15] Krishnendu Chatterjee, Rasmus Ibsen-Jensen, and Andreas Pavlogiannis. “Faster Algorithms for Quantitative Verification in Constant Treewidth Graphs”. In:
International Conference on Computer-Aided Verification (CAV). 2015, pp. 140–157 (cit. on pp. 1, 3).

[CIP16] Krishnendu Chatterjee, Rasmus Ibsen-Jensen, and Andreas Pavlogiannis. “Optimal Reachability and a Space-Time Tradeoff for Distance Queries in Constant-Treewidth Graphs”. In:
European Symposium on Algorithms (ESA). 2016, 28:1–28:17 (cit. on p. 13).

[Coh+17] Michael B. Cohen, Aleksander Madry, Piotr Sankowski, and Adrian Vladu. “Negative-Weight Shortest Paths and Unit Capacity Minimum Cost Flow in Õ(m^{10/7} log W) Time”. In:
Symposium on Discrete Algorithms (SODA). 2017, pp. 752–771 (cit. on p. 3).

[Coh00] Edith Cohen. “Polylog-time and near-linear work approximation scheme for undirected shortest paths”. In:
Journal of the ACM
Journal of Algorithms
Journal of the ACM
SIAM Journal on Computing
Symposium on Discrete Algorithms (SODA). 2016, pp. 1246–1255 (cit. on pp. 3, 14, 15).

[CZ00] Shiva Chaudhuri and Christos D. Zaroliagis. “Shortest Paths in Digraphs of Small Treewidth. Part I: Sequential Algorithms”. In:
Algorithmica
Theoretical Computer Science
Theory of Graphs. Ed. by P. Rosenstiehl. Dunod, Paris, Gordon, and Breach, New York, 1967, pp. 77–84 (cit. on pp. 2, 5).

[DIG99] Ali Dasdan, Sandy Irani, and Rajesh K. Gupta. “Efficient Algorithms for Optimum Cycle Mean and Optimum Cost to Time Ratio Problems”. In:
Design Automation Conference (DAC). 1999, pp. 37–42 (cit. on p. 3).

[DKV09] Manfred Droste, Werner Kuich, and Heiko Vogler, eds.
Handbook of Weighted Automata. Springer, 2009 (cit. on p. 1).

[EK72] Jack Edmonds and Richard M. Karp. “Theoretical Improvements in Algorithmic Efficiency for Network Flow Problems”. In:
Journal of the ACM
Symposium on Parallelism in Algorithms and Architectures (SPAA). 2016, pp. 455–466 (cit. on p. 4).

[For56] L. R. Ford.
Network Flow Theory. Tech. rep. P-923. The RAND Corporation, 1956 (cit. on pp. 3, 6).

[Fox69] Bennett Fox. “Finding Minimal Cost-Time Ratio Circuits”. In:
Operations Research
SIAM Journal on Computing
International Symposium on Symbolic and Algebraic Computation (ISSAC). 2014, pp. 296–303 (cit. on p. 3).

[GHH92] Sabih H. Gerez, Sonia M. Heemstra de Groot, and Otto E. Herrmann. “A polynomial time algorithm for the computation of the iteration-period bound in recursive data flow graphs”. In:
IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications
Numerische Mathematik
SIAM Journal on Computing
Networks
An Algorithm for the Tramp Steamer Problem Based on Mean-Weight Cycles. Tech. rep. MIT/LCS/TM-457. Massachusetts Institute of Technology, 1991 (cit. on p. 3).

[IP95] Kazuhito Ito and Keshab K. Parhi. “Determining the minimum iteration period of an algorithm”. In:
VLSI Signal Processing
Journal of Computer and System Sciences
Journal of the ACM
Discrete Mathematics
Journal of Algorithms
Theory of Graphs. Ed. by P. Rosenstiehl. Dunod, Paris, Gordon, and Breach, New York, 1967, pp. 209–214 (cit. on pp. 2, 3).

[Law76] Eugene L. Lawler.
Combinatorial Optimization: Networks and Matroids. Holt, Rinehart and Winston, New York, 1976 (cit. on pp. 3, 5).

[Meg83] Nimrod Megiddo. “Applying Parallel Computation Algorithms in the Design of Serial Algorithms”. In:
Journal of the ACM

[Mil+15] Gary L. Miller, Richard Peng, Adrian Vladu, and Shen Chen Xu. “Improved Parallel Algorithms for Spanners and Hopsets”. In:
Symposium on Parallelism in Algorithms and Architectures (SPAA). 2015, pp. 192–201 (cit. on p. 3).

[Moo59] E. F. Moore. “The Shortest Path Through a Maze”. In:
International Symposium on the Theory of Switching. 1959, pp. 285–292 (cit. on pp. 3, 6).

[MS03] Ulrich Meyer and Peter Sanders. “∆-stepping: a parallelizable shortest path algorithm”. In: Journal of Algorithms

“An O(nm) time algorithm for finding the min length directed cycle in a weighted graph”. In: Symposium on Discrete Algorithms (SODA). 2017, pp. 1866–1879 (cit. on p. 11).

[Pap79] Christos H. Papadimitriou. “Efficient Search for Rationals”. In:
Information Processing Letters
European Symposium on Algorithms (ESA). 2005, pp. 770–778 (cit. on p. 3).

[Spe97] Thomas H. Spencer. “Time-work tradeoffs for parallel algorithms”. In:
Journal of the ACM
Journal of Algorithms
Symposium on Theory of Computing (STOC) . 2005, pp. 112–119 (cit. on p. 6).[UY91] Jeffrey D. Ullman and Mihalis Yannakakis. “High-Probability Parallel Transitive-Closure Algorithms”. In:
SIAM Journal on Computing
[Wil14] Ryan Williams. “Faster All-Pairs Shortest Paths via Circuit Complexity”. In: Symposium on Theory of Computing (STOC). 2014, pp. 664–673 (cit. on pp. 3, 14, 15).

[YZ05] Raphael Yuster and Uri Zwick. “Answering distance queries in directed graphs using fast matrix multiplication”. In: