A Deterministic Algorithm for Balanced Cut with Applications to Dynamic Connectivity, Flows, and Beyond
Julia Chuzhoy, Yu Gao, Jason Li, Danupon Nanongkai, Richard Peng, Thatchaphol Saranurak
Julia Chuzhoy (TTIC), Yu Gao (Georgia Tech)∗, Jason Li (CMU)†, Danupon Nanongkai (KTH), Richard Peng (Georgia Tech)∗, Thatchaphol Saranurak (TTIC)

May 5, 2020
Abstract

We consider the classical Minimum Balanced Cut problem: given a graph G, compute a partition of its vertices into two subsets of roughly equal volume, while minimizing the number of edges connecting the subsets. We present the first deterministic, almost-linear time approximation algorithm for this problem. Specifically, our algorithm, given an n-vertex m-edge graph G and any parameter 1 ≤ r ≤ O(log n), computes a (log m)^{r^2}-approximation for Minimum Balanced Cut on G, in time O(m^{1+O(1/r)+o(1)} · (log m)^{O(r^2)}). In particular, we obtain a (log m)^{1/ǫ}-approximation in time m^{1+O(√ǫ)} for any constant ǫ, and a (log m)^{f(m)}-approximation in time m^{1+o(1)}, for any slowly growing function f(m). We obtain deterministic algorithms with similar guarantees for the Sparsest Cut and the Lowest-Conductance Cut problems.

Our algorithm for the Minimum Balanced Cut problem in fact provides a stronger guarantee: it either returns a balanced cut whose value is close to a given target value, or it certifies that such a cut does not exist by exhibiting a large subgraph of G that has high conductance. We use this algorithm to obtain deterministic algorithms for dynamic connectivity and minimum spanning forest, whose worst-case update time on an n-vertex graph is n^{o(1)}, thus resolving a major open problem in the area of dynamic graph algorithms. Our work also implies deterministic algorithms for a host of additional problems, whose time complexities match, up to subpolynomial in n factors, those of known randomized algorithms. The implications include almost-linear time deterministic algorithms for solving Laplacian systems and for approximating maximum flows in undirected graphs.

These results were obtained independently by Chuzhoy, and by the group consisting of Gao, Li, Nanongkai, Peng, and Saranurak. Chronologically, Gao et al. obtained their result in July 2019, while Chuzhoy's result was obtained in September 2019, but there was no communication between the groups until early October 2019.

∗ Part of this work was done while visiting MSR Redmond.
† Supported in part by NSF award CCF-1907820.
Introduction
In the classical Minimum Balanced Cut problem, the input is an n-vertex graph G = (V, E), and the goal is to compute a partition of V into two subsets A and B with Vol_G(A), Vol_G(B) ≥ Vol(G)/3, while minimizing |E_G(A, B)|; here, E_G(A, B) is the set of all edges with one endpoint in A and another in B, and, for a set S of vertices of G, Vol_G(S) denotes its volume: the sum of the degrees of all vertices of S. Lastly, Vol(G) = Vol_G(V) is the total volume of the graph. The Minimum Balanced Cut problem is closely related to the Minimum-Conductance Cut problem, where the goal is to compute a subset S of vertices of minimum conductance, defined as |E_G(S, V \ S)| / min{Vol_G(S), Vol_G(V \ S)}, and to the Sparsest Cut problem, where the goal is to compute a subset S of vertices of minimum sparsity: |E_G(S, V \ S)| / min{|S|, |V \ S|}. While all three problems are known to be NP-hard, approximation algorithms for them are among the most central and widely used tools in algorithm design, especially due to their natural connections to the hierarchical divide-and-conquer paradigm [Räc02, ST04, Tre05, AHK10, RST14, KT19, NSW17]. We note that approximation algorithms for Minimum Balanced Cut often consider a relaxed (or bi-criteria) version, where we only require that the solution (A, B) returned by the algorithm satisfies Vol_G(A), Vol_G(B) ≥ Vol(G)/4, while the solution value is compared to that of the optimal balanced cut.

The first approximation algorithm for
Minimum Balanced Cut, whose running time is near-linear in the graph size, was developed in the seminal work of Spielman and Teng [ST04]. This algorithm was used in [ST04] in order to decompose a given graph into a collection of "near-expanders", which are then exploited in order to construct spectral sparsifiers, eventually leading to an algorithm for solving systems of linear equations in near-linear time. Algorithms for Minimum Balanced Cut also served as crucial building blocks in the more recent breakthrough results that designed near- and almost-linear time approximation algorithms for a large class of flow and regression problems [She13, KLOS14, Pen16, KPSW19], and faster exact algorithms for maximum flow, shortest paths with negative weights, and minimum-cost flow [CMSV17, Mad16]. Spielman and Teng's expander decomposition was later strengthened by Nanongkai, Saranurak and Wulff-Nilsen [NSW17, Wul17, NS17], who used it to obtain algorithms for the dynamic minimum spanning forest problem with improved worst-case update time. The fastest current algorithm for computing expander decompositions is due to Saranurak and Wang [SW19]; a similar decomposition was recently used by Chuzhoy and Khanna [CK19] in their algorithm for the decremental single-source shortest paths problem, which in turn led to a faster algorithm for approximate vertex-capacitated maximum flow.

Unfortunately, all the algorithms mentioned above are randomized. This is mainly because all existing almost- and near-linear time algorithms for Minimum Balanced Cut are randomized [ST04, KRV09]. A fundamental open question in this area is then: is there a deterministic algorithm for Minimum Balanced Cut with similar performance guarantees? Resolving this question seems a key step towards obtaining fast deterministic algorithms for all the aforementioned problems, and towards resolving one of the most prominent open problems in the area of dynamic graph algorithms, namely, whether there is a deterministic algorithm for Dynamic Connectivity whose worst-case update time improves over the long-standing O(√n) bound of Frederickson [Fre85, EGIN97] by a factor that is polynomial in n. (We informally say that an algorithm runs in near-linear time if its running time is O(m · poly log n), where m and n are the number of edges and vertices in the input graph, respectively. We say that the running time is almost-linear if it is bounded by m^{1+o(1)}.)

The best previous published bound on the running time of a deterministic algorithm for Minimum Balanced Cut is O(mn) [ACL07]. A recent manuscript by a subset of the authors, together with Yingchareonthawornchai [GLN+19], obtained running time min{n^{ω+o(1)}, m^{1.5+o(1)}}, where ω < 2.372 is the matrix multiplication exponent, and n and m are the number of nodes and edges of the input graph, respectively. This algorithm is used in [GLN+19] to obtain faster deterministic algorithms for the vertex connectivity problem. However, the running time of the algorithm of [GLN+19] for
Minimum Balanced Cut is somewhat slow, and it falls just short of breaking the O(√n) worst-case update time bound for Dynamic Connectivity.

We present a deterministic (bi-criteria) algorithm for Minimum Balanced Cut that, for any parameter r = O(log n), achieves an approximation factor α(r) = (log m)^{r^2} in time T(r) = O(m^{1+O(1/r)+o(1)} · (log m)^{O(r^2)}), where n and m are the number of vertices and edges in the input graph, respectively. In particular, for any constant ǫ, the algorithm achieves a (log m)^{1/ǫ}-approximation in time O(m^{1+O(√ǫ)}). For any slowly growing function f(m) (for example, f(m) = log log m or f(m) = log* m), it achieves a (log m)^{f(m)}-approximation in time m^{1+o(1)}.

In fact, our algorithm provides somewhat stronger guarantees: it either computes an almost-balanced cut whose value is within an α(r) factor of a given target value η, or it certifies that every balanced cut in G has value Ω(η), by producing a large subgraph of G that has high conductance. This algorithm implies fast deterministic algorithms for all the problems mentioned above, including, in particular, improved worst-case update time guarantees for (undirected) Dynamic Connectivity and Minimum Spanning Forest.

In order to provide more details on our results and techniques, we need to introduce some notation. Throughout, we assume that we are given an m-edge, n-node undirected graph, denoted by G = (V, E). A cut in G is a partition (A, B) of V into two non-empty subsets; abusing the notation, we will also refer to subsets S of vertices with S ≠ ∅, V as cuts, meaning the partition (S, V \ S) of V. The conductance of a cut S in G, already mentioned above, is defined as:

  Φ_G(S) := |E_G(S, V \ S)| / min{Vol_G(S), Vol_G(V \ S)},

and the conductance of a graph G, denoted by Φ(G), is the smallest conductance of any cut S of G:

  Φ(G) := min_{∅ ≠ S ⊊ V} Φ_G(S).

A notion closely related to conductance is that of sparsity. The sparsity of a cut S in G is Ψ_G(S) := |E_G(S, V \ S)| / min{|S|, |V \ S|}, and the expansion of the graph G is the minimum sparsity of any cut S in G:

  Ψ(G) := min_{∅ ≠ S ⊊ V} Ψ_G(S).

We say that a cut S is balanced if Vol_G(S), Vol_G(V \ S) ≥ Vol(G)/3.

The main tool that we use in our approximation algorithm for the Minimum Balanced Cut problem is the BalCutPrune problem, defined next. Informally, the problem seeks to either find a low-conductance balanced cut in a given graph, or to produce a certificate that every balanced cut has high conductance, by exhibiting a large subgraph of G that has high conductance.

Definition 1.1 (BalCutPrune problem). The input to the α-approximate BalCutPrune problem is a graph G = (V, E), a conductance parameter 0 < φ ≤ 1, and an approximation factor α. The goal is to compute a cut (A, B) in G, with |E_G(A, B)| ≤ αφ · Vol(G), such that one of the following holds: either

1. (Cut) Vol_G(A), Vol_G(B) ≥ Vol(G)/3; or
2. (Prune) Vol_G(A) ≥ Vol(G)/2, and graph G[A] has conductance at least φ.

Our main technical result is the following.

Theorem 1.2 (Main Result). There is a deterministic algorithm that, given a graph G with m edges, and parameters φ ∈ (0, 1], 1 ≤ r ≤ O(log n), and α = (log m)^{r^2}, computes a solution to the α-approximate BalCutPrune problem instance (G, φ) in time O(m^{1+O(1/r)+o(1)} · (log m)^{O(r^2)}).

In particular, by letting r be a large constant, we obtain a (log m)^{1/ǫ}-approximation in time m^{1+O(√ǫ)} for any constant ǫ, and by letting f(m) be any slowly growing function (for example, f(m) = log log m or f(m) = log* m), and setting r = √(f(m)), we obtain a (log m)^{f(m)}-approximation in time m^{1+o(1)}.

The algorithm from Theorem 1.2 immediately implies a deterministic bi-criteria factor-(log m)^{r^2} approximation algorithm for Minimum Balanced Cut, with running time O(m^{1+O(1/r)+o(1)} · (log m)^{O(r^2)}), for any value of r ≤ O(log m). Indeed, suppose we are given any conductance parameter 0 < φ < 1 and a graph G = (V, E). We apply the algorithm from Theorem 1.2 to graph G with the parameter φ, obtaining a cut (A, B) in G with |E_G(A, B)| ≤ αφ · Vol(G). If Vol_G(A), Vol_G(B) ≥ Vol(G)/4, then we obtain an (almost) balanced cut (A, B) of conductance at most O(αφ). Otherwise, we are guaranteed that Vol_G(A) ≥ 3 Vol(G)/4, and that graph G[A] has conductance at least φ. We claim that in this case, for any balanced cut (A′, B′) in G, |E_G(A′, B′)| ≥ Ω(φ · Vol(G)) must hold. This is because any such partition (A′, B′) of V defines a partition (X, Y) of A with Vol_G(X), Vol_G(Y) ≥ Ω(Vol(G)), and, since Φ(G[A]) ≥ φ, we get that |E_G(X, Y)| ≥ Ω(φ · Vol(G)). Therefore, we obtain the following corollary.

Corollary 1.3.
There is an algorithm that, given an n-vertex m-edge graph G, a target value η, and a parameter r ≤ O(log n), either returns a partition (A, B) of V(G) with Vol_G(A), Vol_G(B) ≥ Vol(G)/4 and |E_G(A, B)| ≤ α(r) · η, for α(r) = (log m)^{r^2}, or certifies that for any partition (A, B) of V(G) with Vol_G(A), Vol_G(B) ≥ Vol(G)/3, |E_G(A, B)| > η must hold. The running time of the algorithm is O(m^{1+O(1/r)+o(1)} · (log m)^{O(r^2)}).

Algorithms for
Minimum Balanced Cut often differ in the type of certificate that they provide when the value of the Minimum Balanced Cut is greater than the given threshold (corresponding to the (Prune) case in Definition 1.1). The original near-linear time algorithm of Spielman and Teng [ST04] outputs a set S of nodes of small volume, with the guarantee that, for some subset S′ ⊆ S, the graph G − S′ has high conductance. This guarantee, however, is not sufficient for several applications. A version that was found to be more useful in several recent applications, such as, e.g., Dynamic Connectivity [SW19, NSW17, Wul17, NS17], is somewhat similar to the one in the definition of BalCutPrune, but with a somewhat stronger guarantee in the (Prune) case. The approximation factor α of Spielman and Teng's algorithm [ST04] depends on the parameter φ, and its time complexity depends on both φ and α. Several subsequent papers have improved the approximation factor or the time complexity of their algorithm, e.g., [KRV09, ACL07, OV11, OSV12, Mad10b]; we do not discuss these results here, since they are not directly related to this work.

An immediate consequence of our results is deterministic algorithms for the Sparsest Cut and the Lowest-Conductance Cut problems, summarized in the next theorem.
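For concreteness, the two quantities that these problems minimize can be evaluated directly from the definitions above. The following sketch (plain Python; the function and variable names are ours, not from the paper) computes the conductance Φ_G(S) and sparsity Ψ_G(S) of a given cut:

```python
def cut_measures(adj, S):
    """Given an undirected graph as an adjacency dict {v: set of neighbors}
    and a vertex subset S, return (conductance, sparsity) of the cut (S, V \\ S):
      conductance Phi_G(S) = |E(S, V\\S)| / min(Vol(S), Vol(V\\S)),
      sparsity    Psi_G(S) = |E(S, V\\S)| / min(|S|, |V\\S|)."""
    S = set(S)
    T = set(adj) - S                                  # the other side, V \ S
    cross = sum(1 for u in S for v in adj[u] if v in T)  # |E_G(S, V \ S)|
    vol_S = sum(len(adj[u]) for u in S)               # sum of degrees inside S
    vol_T = sum(len(adj[u]) for u in T)
    conductance = cross / min(vol_S, vol_T)
    sparsity = cross / min(len(S), len(T))
    return conductance, sparsity
```

For example, on the 4-cycle 0-1-2-3 with S = {0, 1}, the cut has two crossing edges, both sides have volume 4 and two vertices, so the conductance is 1/2 and the sparsity is 1.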
Theorem 1.4.
There is a deterministic algorithm that, given an n-vertex m-edge graph G and a parameter r ≤ O(log n), computes a (log n)^{r^2}-approximate solution for the Sparsest Cut problem on G, in time O(m^{1+O(1/r)+o(1)} · (log n)^{O(r^2)}). Similarly, there is a deterministic algorithm that achieves similar performance guarantees for the Lowest-Conductance Cut problem.

We note that the best previous deterministic approximation algorithm for both Sparsest Cut and Lowest-Conductance Cut, due to Arora, Rao and Vazirani [ARV09], achieves an O(√log n)-approximation. Unfortunately, the algorithm has a large (but polynomially bounded) running time, since it needs to solve an SDP deterministically. There are faster O(log n)-approximation deterministic algorithms for both the Sparsest Cut and the Lowest-Conductance Cut problems, with running time Õ(m^2), that are based on the Multiplicative Weights Update framework [Fle00, Kar08]. If we allow the approximation ratio to depend on φ, where φ is the value of the optimal solution, then there are several algorithms for both problems that are based on the spectral approach. The algorithm of [Alo86] computes a cut with conductance at most O(φ^{1/2}) in time Õ(n^ω). Using the Personalized PageRank algorithm [ACL07], a cut of conductance at most O(φ^{1/2}) can be found in time Õ(mn). Recently, Gao et al. [GLN+19] provided an algorithm that computes a cut of conductance at most φ^{1/2} · n^{o(1)} in time O(m^{1.5+o(1)}).

Additionally, we obtain faster deterministic algorithms for a number of other cut and flow problems; the performance of our algorithms matches that of the best current randomized algorithms, to within a factor of n^{o(1)}. We summarize these bounds in Table 1 and Table 2; see Section 6 and Section 7.4 for a more detailed discussion. We now turn to discuss the implications of our results for the Dynamic Connectivity problem, which was one of the main motivations of this work.

In the most basic version of the
Dynamic Connectivity problem, we are given a graph G that undergoes edge deletions and insertions, and the goal is to maintain the information of whether G is connected. The Dynamic Connectivity problem and its generalizations, dynamic Spanning Forest (SF) and dynamic Minimum Spanning Forest (MSF), have played a central role in the development of the area of dynamic graph algorithms for over three decades (see, e.g., [NS17, NSW17] for further discussion).

(To be precise, the stronger version of BalCutPrune discussed above requires that |E_G(A, B)| ≤ αφ · Vol_G(B), which is somewhat stronger than our requirement that |E_G(A, B)| ≤ αφ · Vol(G). But for all applications that we consider, our guarantee still suffices, because the two guarantees are essentially the same when the cut (A, B) is balanced. Similarly, the last two spectral algorithms discussed above in fact provide additional guarantees regarding the balance of the returned cut.)

Problem | Best previous running time: deterministic | Best previous running time: randomized | Our results: deterministic
(1+ǫ)-approximate undirected max-flow/min-cut | Õ(m · min{m^{1/2}, n^{2/3}}) [GR98] (exact) | Õ(m ǫ^{-1}) [She17, KLOS14, Mad10b] | Ô(m ǫ^{-1}): Corollary 6.10
n^{o(1)}-approximate sparsest cut | Ô(m^{1.5}) [GLN+19] | Õ(m) [KRV09, She13] | Ô(m): Theorem 1.4
n^{o(1)}-approximate lowest-conductance cut | Ô(m^{1.5}) [GLN+19] | Õ(m) [KRV09, She13] | Ô(m): Theorem 1.4
Expander decomposition (conductance φ) | Ô(m^{1.5}) [GLN+19] | Õ(m/φ) [SW19], Ô(m) [NS17, Wul17] | Ô(m): Corollary 7.7
Congestion approximator | Ω(m^2) | Õ(m) [Mad10b, She13, KLOS14] | Ô(m): Lemma 6.12
Spectral sparsifiers | O(m n^3 ǫ^{-2}) [BSS12, dCSHS16] | Õ(m ǫ^{-2}) [ST11, LS17] | n^{o(1)}-approximation, time Ô(m): Corollary 6.4
Laplacian solvers | Õ(m^{1.31} log(1/ǫ)) [ST03] | Õ(m log(1/ǫ)) [ST14] | Ô(m log(1/ǫ)): Corollary 6.9

Table 1: Applications of our results to static graph problems. As usual, n and m denote the number of nodes and edges of the input graph, respectively. We use the Õ and Ô notation to hide poly log n and n^{o(1)} factors, respectively. For readability, we assume that the weights and the capacities of edges are polynomial in the problem size.

Dynamic Problem | Best previous worst-case update time: deterministic | Best previous worst-case update time: randomized | Our results: deterministic
Connectivity | O(√n) [Fre85, EGIN97], O(√n · (log log n)/√log n) [KKPT16] | O(log^4 n) [KKM13, GKKT15] | n^{o(1)}: Corollary 6.2
Minimum Spanning Forest | O(√n) [Fre85, EGIN97] | n^{o(1)} [NSW17] | n^{o(1)}: Corollary 6.2
Table 2: Applications of our results to dynamic (undirected) graph problems. As before, n and m denote the number of vertices and edges of the input graph, respectively. For readability, we assume that the weights and the capacities of edges/nodes are polynomial in the problem size.

An important measure of the performance of a dynamic algorithm is its update time: the amount of time needed in order to process each update (an insertion or a deletion of an edge). We distinguish between amortized update time, which upper-bounds the average time that the algorithm spends on each update, and worst-case update time, which upper-bounds the largest amount of time that the algorithm ever spends on a single update.

The first non-trivial algorithm for the Dynamic Connectivity problem dates back to Frederickson's work from 1985 [Fre85], which provided a deterministic algorithm with O(√m) worst-case update time. Combining this algorithm with the sparsification technique of Eppstein et al. [EGIN97] yields a deterministic algorithm for Dynamic Connectivity with O(√n) worst-case update time. Improving and refining this bound has been an active research direction in the past three decades, but unfortunately, practically all follow-up results require either randomization or amortization. We now provide a summary of these results.

• (Amortized & Randomized) In their 1995 breakthrough paper, Henzinger and King [HK99] greatly improved the O(√n) worst-case update bound with a randomized Las Vegas algorithm whose expected amortized update time is poly log(n). This result has been subsequently improved, and the current best randomized algorithms have amortized update time that almost matches existing lower bounds, to within O((log log n)^2) factors; see, e.g., [HHKP17, Tho00, HT97, PD06].

• (Amortized & Deterministic) Henzinger and King's 1997 deterministic algorithm [HK97] achieves an amortized update time of O(n^{1/3} log n). This was later substantially improved to O(log^2 n) amortized update time by the deterministic algorithm of Holm, de Lichtenberg, and Thorup [HdLT01]; this update time was in turn later improved to O(log^2 n / log log n) by Wulff-Nilsen [Wul13].

• (Worst-Case & Randomized) The first improvement over the O(√n) worst-case update bound was due to Kapron, King and Mountjoy [KKM13], who provided a randomized Monte Carlo algorithm with worst-case update time O(log^5 n). This bound was later improved to O(log^4 n) by Gibb et al. [GKKT15]. Subsequently, Nanongkai, Saranurak, and Wulff-Nilsen [NSW17, Wul17, NS17] presented a Las Vegas algorithm for the more general dynamic MSF problem with n^{o(1)} worst-case update time.

A major open problem that was raised repeatedly (see, e.g., [KKM13, PT07, KKPT16, Kin16, Kin08, HdLT01, Wul17]) is: can we achieve an O(n^{1/2−ǫ}) worst-case update time with a deterministic algorithm? The only progress so far on this question is the deterministic algorithm of Kejlberg-Rasmussen et al. [KKPT16], which slightly improves the O(√n) worst-case update time bound to O(√(n (log log n)^2 / log n)) using word-parallelism. In this paper, we resolve this question in the affirmative, and provide a somewhat stronger result, which holds for the more general dynamic MSF problem:
Theorem 1.5.
There are deterministic algorithms for Dynamic Connectivity and Dynamic MSF with n^{o(1)} worst-case update time.

In order to obtain this result, we use the algorithm of Nanongkai, Saranurak, and Wulff-Nilsen [NSW17] for dynamic MSF. The only randomized component of their algorithm is the computation of an expander decomposition of a given graph. Since our results provide a fast deterministic algorithm for computing an expander decomposition, we achieve the same n^{o(1)} worst-case update time as in [NSW17] via a deterministic algorithm.

1.3 Techniques

Our algorithm for the proof of Theorem 1.2 is based on the cut-matching game framework that was introduced by Khandekar, Rao and Vazirani [KRV09], and has been used in numerous algorithms for computing sparse cuts [KRV09, NS17, SW19, GLN+19] and beyond (e.g., [CC13, RST14, CC16, CL16]). Intuitively, the cut-matching game consists of two algorithms: one algorithm, called the cut player, needs to compute a balanced cut of small value in a given graph, if such a cut exists. The second algorithm, called the matching player, needs to solve (possibly approximately) a single-commodity maximum flow / minimum cut problem. A combination of these two algorithms is then used in order to compute a sparse cut in the input graph, or to certify that no such cut exists. Unfortunately, all current algorithms for the cut player are randomized. Our main technical contribution is an efficient deterministic algorithm that implements the cut player. The algorithm itself is recursive, and proceeds by recursively running many cut-matching games in parallel, on much smaller graphs. This requires us to adapt the algorithm of the matching player, so that it solves a somewhat harder multi-commodity flow problem. We now provide more details on the cut-matching game and on our implementation of it.
Overview of the Cut-Matching Game.
We start with a high-level overview of a variant of the cut-matching game, due to Khandekar et al. [KKOV07]. We say that a graph W is a ψ-expander if it has no cut of sparsity less than ψ. We will informally say that W is an expander if it is a ψ-expander for some ψ = 1/n^{o(1)}. Given a graph G = (V, E), the goal of the cut-matching game is to either find a balanced and sparse cut in G, or to embed an expander W = (V, E′) (called a witness) into G; note that W and G are defined over the same vertex set. The embedding of W into G needs to map every edge e of W to a path P_e in G connecting the endpoints of e. The congestion of this embedding is the maximum number of paths in {P_e | e ∈ E(W)} that share a single edge of G. We require that the congestion of the resulting embedding is low. Such an embedding serves as a certificate that there is no sparse balanced cut in G. This follows from the fact that, if W is a ψ-expander, and it has a low-congestion embedding into another graph G, then G itself is a ψ′-expander, where ψ′ depends on ψ and on the congestion of the embedding. The algorithm proceeds via an interaction between two algorithms, the cut player and the matching player, and consists of O(log n) rounds.

At the beginning of every round, we are given a graph W whose vertex set is V, together with its embedding into G; at the beginning of the first round, W contains the set V of vertices and no edges. In every round, the cut player either:

(C1) "cuts W", by finding a balanced sparse cut S in W; or
(C2) "certifies W", by announcing that W is an expander.

If W is certified (Item (C2)), then we have constructed the desired embedding of an expander into G, so we can terminate the algorithm and certify that G has no balanced sparse cut. If a cut S is found in W (Item (C1)), then we invoke the matching player, who either:

(M1) "matches W", by adding to W a large matching M ⊆ S × (V \ S) that can be embedded into G with low congestion; or
(M2) "cuts G", by finding a balanced sparse cut T in G (the cut T is, intuitively, what prevents the matching player from embedding a large matching M ⊆ S × (V \ S) into G).

If a sparse balanced cut T is found in graph G (Item (M2)), then we return this cut and terminate the algorithm. Otherwise, the game continues to the next round. It was shown in [KKOV07] that the algorithm must terminate after Θ(log n) rounds.

In the original cut-matching game of Khandekar, Rao and Vazirani [KRV09], the matching player was implemented by an algorithm that computes a single-commodity maximum flow / minimum cut. The algorithm for the cut player was defined somewhat differently, in that in the case of Item (C1), the cut that it produced was not necessarily sparse, but it still had some useful properties, which guaranteed that the algorithm terminates after O(log n) iterations. In order to implement the cut player, the algorithm of [KRV09] (implicitly) considers n vectors of dimension n each, which represent the probability distributions of random walks on the witness graph, starting from different vertices of G, and then uses a random projection of these vectors in order to construct the balanced cut. The algorithm exploits the properties of the witness graph in order to compute these projections efficiently, without explicitly constructing the vectors, which would be too time-consuming. Previous work (see, e.g., [SW19, CK19]) implies that one can use algorithms for computing maximal flows instead of maximum flows in order to implement the matching player in near-linear time deterministically, when the target parameters satisfy 1/φ, α ≤ n^{o(1)}. This still left open the question: can the cut player be implemented via a deterministic and efficient algorithm?
A natural strategy for derandomizing the algorithm of [KRV09] for the cut player is to avoid the random projection of the vectors. In a previous work of a subset of the authors with Yingchareonthawornchai [GLN+19], this approach led to a running time of Ω(n^2): if we cannot use random projections, then we need to deal with n vectors of dimension n each when implementing the cut player, and so a running time of Ω(n^2) seems inevitable. In this paper, we implement the cut player in a completely different way from the previously used approaches, by solving the balanced sparse cut problem recursively.

We start by observing that, in order to implement the cut player via the approach of [KKOV07], it is sufficient to provide an algorithm for computing a balanced sparse cut on the witness graph W; in fact, it is not hard to see that it is sufficient to solve this problem approximately. However, this leads us to a chicken-and-egg situation, where, in order to solve the Minimum Balanced Cut problem on the input graph G, we need to solve the Minimum Balanced Cut problem on the witness graph W. While graph W is guaranteed to be quite sparse (with maximum vertex degree O(log n)), it is not clear that solving the Minimum Balanced Cut problem on this graph is much easier.

This motivates our recursive approach, in which, in order to solve the Minimum Balanced Cut problem on the witness graph W, we run a large number of cut-matching games in it simultaneously, each of which has a separate witness graph, containing significantly fewer vertices. It is then sufficient to solve the Minimum Balanced Cut problem on each of the resulting, much smaller, witness graphs. We prove the following theorem, which provides a deterministic algorithm for the cut player via this recursive approach.
Theorem 1.6.
There is a universal constant N_0, and a deterministic algorithm, that we call CutOrCertify, that, given an n-vertex graph G = (V, E) with maximum vertex degree O(log n), and a parameter r ≥ 1 with n^{1/r} ≥ N_0, returns one of the following:

• either a cut (A, B) in G with |A|, |B| ≥ n/4 and |E_G(A, B)| ≤ n/100; or
• a subset S ⊆ V of at least n/2 vertices, such that Ψ(G[S]) ≥ 1/(log n)^{O(r)}.

The running time of the algorithm is O(n^{1+O(1/r)} · (log n)^{O(r)}).

We note that a somewhat similar recursive approach was used before, e.g., in Madry's construction of j-trees [Mad10a], and in the recursive construction of short cycle decompositions [CGP+18]. In particular, Gao et al. [GLN+19] use Madry's j-trees to solve Minimum Balanced Cut by running cut-matching games on graphs containing fewer and fewer nodes, obtaining an O(m^{1.5+o(1)})-time algorithm. Unfortunately, improving this bound further does not seem viable via this approach, since the total number of edges contained in the graphs that belong to deeper recursive levels is very large. Specifically, assume that we are given an n-node graph G with m edges, together with a parameter k ≥ 1. We can then use the j-trees in order to reduce the problem of computing Minimum Balanced Cut on G to the problem of computing Minimum Balanced Cut on k graphs, each of which contains roughly n/k nodes. Unfortunately, each of these graphs may have Ω(m) edges. Therefore, the total number of edges in all the resulting graphs may be as large as Ω(mk), which is one of the major obstacles to obtaining faster algorithms for Minimum Balanced Cut using j-trees.

We now provide a more detailed description of the new recursive strategy that we use in order to prove Theorem 1.6.

New Recursive Strategy.
We partition the vertices of the input n-vertex graph G into k subsets V_1, V_2, . . . , V_k of roughly equal cardinality, for a large enough parameter k (for example, k = n^{o(1)}). The algorithm consists of two stages. In the first stage, we attempt to construct k expander graphs W_1, . . . , W_k, where V(W_i) = V_i for all 1 ≤ i ≤ k, and to embed them into the graph G simultaneously. If we fail to do so, then we will compute a sparse balanced cut in G. In order to do so, we run k cut-matching games in parallel. Specifically, we start with every graph W_i containing the set V_i of vertices and no edges, and then perform O(log n) iterations. In every iteration, we run the CutOrCertify algorithm on each of the graphs W_1, . . . , W_k in parallel. Assume that for all 1 ≤ i ≤ k, the algorithm returns a sparse balanced cut (A_i, B_i) in W_i. We then use an algorithm of the matching player, that either computes, for each 1 ≤ i ≤ k, a matching M_i between vertices of A_i and B_i, together with a low-congestion embedding of all matchings M_1, . . . , M_k into the graph G simultaneously, or returns a sparse balanced cut in G. In the former case, we augment each graph W_i by adding the set M_i of edges to it. In the latter case, we terminate the algorithm and return the sparse balanced cut in graph G as the algorithm's output. If the algorithm never terminates with a sparse balanced cut, then we are guaranteed that, after O(log n) iterations, the graphs W_1, . . . , W_k are all expanders (more precisely, each of these graphs contains a large enough expander, but we ignore this technicality in this informal overview), and moreover, we obtain a low-congestion embedding of the disjoint union of these graphs into G. Note that, in order to execute this stage, we recursively apply algorithm CutOrCertify to k graphs, whose sizes are significantly smaller than the size of the graph G.

In the second stage, we attempt to construct a single expander graph W∗ on the set {v_1, . . . , v_k} of vertices, where for each 1 ≤ i ≤ k, we view vertex v_i as representing the set V_i of vertices of G. We also attempt to embed the graph W∗ into G, where every edge e = (v_i, v_j) is embedded into Ω(n/k) paths connecting vertices of V_i to vertices of V_j. In order to do so, we start with the graph W∗ containing the set {v_1, . . . , v_k} of vertices and no edges, and then iterate. In every iteration, we run algorithm CutOrCertify on the current graph W∗, obtaining a partition (A, B) of its vertices. We then use an algorithm of the matching player in order to compute a matching M between vertices of A and vertices of B, and to embed every edge (v_i, v_j) ∈ M of the matching into Ω(n/k) paths connecting vertices of V_i to vertices of V_j in graph G, with low congestion. If we do not succeed in computing the matching and the embedding, then the algorithm of the matching player returns a sparse balanced cut in graph G. We then terminate the algorithm and return this cut as the algorithm's output. Otherwise, we add the edges of M to graph W∗ and continue to the next iteration. The algorithm terminates once graph W∗ is an expander, which must happen after O(log n) iterations.

Lastly, we compose the expanders W_1, . . . , W_k and W∗ in order to obtain an expander graph Ŵ that embeds into G with low congestion; the embedding is obtained by combining the embeddings of the graphs W_1, . . . , W_k with the embedding of graph W∗. This serves as a certificate that G is an expander graph.

Note that the algorithm for the matching player that we need to use differs from the standard one in that it needs to compute k different matchings between k different pre-specified pairs of vertex subsets. Specifically, the algorithm for the matching player is given k pairs (A_1, B_1), . . . , (A_k, B_k) of subsets of vertices of G of equal cardinality.
Ideally, we would like the algorithm to either (i) compute, for all 1 ≤ i ≤ k, a perfect matching M_i between vertices of A_i and vertices of B_i, and embed all edges of M_1 ∪ · · · ∪ M_k into G simultaneously with low congestion; or (ii) compute a sparse balanced cut in G. In fact, our algorithm for the matching player achieves a somewhat weaker objective: namely, the matchings M_i are not necessarily perfect matchings, but they are sufficiently large. In order to overcome this difficulty, we introduce "fake" edges that augment each matching M_i to a perfect matching. As a result, if the algorithm fails to compute a sparse balanced cut in G, then we are only guaranteed that G ∪ F is an expander, where F is a (relatively small) set of fake edges. We then use a known "expander trimming" algorithm of [SW19] in order to find a large subset S ⊆ V(G) of vertices, such that G[S] is an expander, and the cut S is sufficiently sparse. We note that the notion of fake edges was used before in the context of the cut-matching game, e.g. in [KRV09].

The algorithm of the matching player builds on the idea of Chuzhoy and Khanna [CK19] of computing maximal sets of short edge-disjoint paths, which can be implemented efficiently via Even-Shiloach's algorithm for decremental single-source shortest paths [ES81]. Unfortunately, this approach requires a slightly slower running time of O(m^{1+O(1/r)} · (log m)^{O(r)}/φ²), introducing a quadratic dependence on 1/φ, where φ is the conductance parameter. The expander trimming algorithm of [SW19] that is exploited by the cut player also unfortunately introduces a linear dependence on 1/φ. As a result, we obtain an algorithm for the BalCutPrune problem that is sufficiently fast in the high-conductance regime, that is, where φ = 1/poly log n, but is too slow for the setting where the parameter φ is low.
Luckily, the high-conductance regime is sufficient for many of our applications, and in particular it allows us to obtain efficient approximation algorithms for maximum flow. This algorithm can then in turn be used in order to implement the matching player, even in the low-conductance regime, removing the dependence of the algorithm's running time on φ. An additional difficulty in the low-conductance regime is that we can no longer afford to use the expander trimming algorithm of [SW19]. Instead, we provide an efficient deterministic bi-criteria approximation algorithm for the most-balanced sparsest cut problem, and use this algorithm in order to solve the BalCutPrune problem in the low-conductance regime. This part closely follows ideas of [NS17, Wul17, CK19, CS19].
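Before moving on, the cut–matching interaction that drives the construction above can be simulated at a toy scale. The sketch below is purely illustrative and is not the paper's algorithm: the cut player is a brute-force search for a sparse balanced cut, standing in for the recursive CutOrCertify procedure, and the matching player simply adds an arbitrary perfect matching across the cut. The function names and the thresholds (|A|, |B| ≥ n/4 and at most n/100 crossing edges) are our own illustrative choices.

```python
import itertools

def sparse_balanced_cut(adj, n):
    """Brute-force 'cut player': find a partition (A, B) of {0,...,n-1}
    with |A|, |B| >= n/4 and at most n/100 crossing edges, if one exists.
    (Only usable at toy scale; the paper replaces this with CutOrCertify.)"""
    vertices = list(range(n))
    for r in range(n // 4, n // 2 + 1):
        for A in itertools.combinations(vertices, r):
            A = set(A)
            crossing = sum(1 for u in A for v in adj[u] if v not in A)
            if crossing <= n // 100:
                return A, set(vertices) - A
    return None  # no sparse balanced cut remains

def cut_matching_game(n):
    """Play the game on an initially empty n-vertex graph H: while a sparse
    balanced cut (A, B) exists, the matching player extends A to a set A' of
    exactly n/2 vertices and adds an arbitrary perfect matching between A'
    and its complement.  Returns H and the number of rounds played."""
    adj = {v: [] for v in range(n)}   # H as a multigraph adjacency list
    rounds = 0
    while True:
        cut = sparse_balanced_cut(adj, n)
        if cut is None:
            return adj, rounds
        A, B = cut
        A_ext = list(A) + sorted(B)[: n // 2 - len(A)]
        B_ext = [v for v in range(n) if v not in set(A_ext)]
        for u, v in zip(A_ext, B_ext):   # arbitrary perfect matching
            adj[u].append(v)
            adj[v].append(u)
        rounds += 1
```

On small inputs the game ends after a handful of rounds: each added matching crosses a cut with no crossing edges, so it merges at least one pair of connected components of H per round.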
We start with preliminaries in Section 2. In Section 3, we define the problem to be solved by the new matching player, and provide an algorithm for solving it. We also provide a faster algorithm for the case where k = 1 (that is, the problem of the standard matching player), which we exploit later. We prove our main technical result, Theorem 1.6, in Section 4, obtaining the algorithm for the cut player. In Section 5, we obtain a proof of Theorem 1.2 with slightly weaker guarantees, where the running time depends linearly on 1/φ. In Section 6, we use our result from Section 5 to obtain algorithms for most of our applications. Finally, in Section 7 we complete the proof of Theorem 1.2, and provide some additional applications of our results for the low-conductance regime, including the proof of Theorem 1.4. We conclude with open problems in Section 8.

2 Preliminaries

All graphs considered in this paper are unweighted and undirected, and they may have parallel edges but no self-loops. Given a graph G = (V, E), for every vertex v ∈ V, we denote by deg_G(v) the degree of v in G. For any set S ⊆ V of vertices of G, the volume of S is the sum of the degrees of all nodes in S: Vol_G(S) = Σ_{v∈S} deg_G(v). We denote the total volume of the graph G by Vol(G) = Vol_G(V). Notice that Vol(G) = 2|E|.

We use standard graph-theoretic notation: for two subsets A, B ⊆ V of vertices of G, we denote by E_G(A, B) the set of all edges with one endpoint in A and another in B. Assume now that we are given a subset S of vertices of G. We denote by G[S] the subgraph of G induced by S. We also denote S̄ = V \ S, and G − S = G[S̄].

A cut in G is a partition (A, B) of its vertices, where A, B ≠ ∅. We sometimes also call a subset S of vertices of G with S ≠ ∅, V a cut, referring to the corresponding cut (S, S̄). The size of a cut S is δ_G(S) = |E_G(S, S̄)|.

The two central cut-related notions that we use in this paper are conductance and sparsity. Intuitively, both these notions measure how much a given cut "expands", though they do it somewhat differently. Formally, the conductance of a cut S is:

Φ_G(S) = δ_G(S) / min{Vol_G(S), Vol_G(S̄)}.

Intuitively, if Vol_G(S) ≤ Vol(G)/2, then Φ_G(S) is the fraction of the edges incident to vertices of S that have their other endpoint outside S. The conductance of a graph G, that we denote by Φ(G), is the smallest conductance of any cut in G: Φ(G) = min_{∅ ≠ S ⊊ V} Φ_G(S). The sparsity of a cut S is:

Ψ_G(S) = δ_G(S) / min{|S|, |S̄|},

and the expansion of a graph G is Ψ(G) = min_{∅ ≠ S ⊊ V} Ψ_G(S).

The following claim establishes a basic connection between the conductance and the sparsity of a cut.

Claim 2.1.
Let G = (V, E) be a connected graph with maximum vertex degree ∆, and let S ⊊ V, S ≠ ∅, be any cut in G. Then: Ψ_G(S)/∆ ≤ Φ_G(S) ≤ Ψ_G(S).

The proof immediately follows from the fact that, for every set X of vertices of G, |X| ≤ Vol_G(X) ≤ ∆ · |X|. We use the following definition of expanders.
Definition 2.2.
We say that a graph G is a ψ-expander iff Ψ(G) ≥ ψ.

We will sometimes informally say that a graph G is an expander if Ψ(G) ≥ 1/n^{o(1)}. We use the following simple observation multiple times.

Observation 2.3.
Let G = (V, E) be an n-vertex graph that is a ψ-expander, and let G′ be another graph that is obtained from G by adding to it a new set V′ of at most n vertices, and a matching M, connecting every vertex of V′ to a distinct vertex of G. Then G′ is a ψ/3-expander.

We also use the following theorem, that provides a fast algorithm for an explicit construction of an expander, and that is based on the results of Margulis [Mar73] and Gabber and Galil [GG81].
Theorem 2.4.
There is a constant α > 0 and a deterministic algorithm, that we call ConstructExpander, that, given an integer n > 1, in time O(n) constructs a graph H_n with |V(H_n)| = n, such that H_n is an α-expander, and every vertex in H_n has degree at most 9.

Proof. We assume that n ≥ 10, as otherwise the graph H_n with the required properties can be constructed in constant time. We use the expander construction of Margulis [Mar73] and Gabber and Galil [GG81]. For an integer k > 1, let H′_k be the graph whose vertex set is Z_k × Z_k, where Z_k = Z/kZ. Each vertex (x, y) ∈ Z_k × Z_k has exactly eight adjacent edges, connecting it to the vertices (x ± y, y), (x ± (2y + 1), y), (x, y ± x), and (x, y ± (2x + 1)). Gabber and Galil [GG81] showed that Ψ(H′_k) = Ω(1).

Given a parameter n ≥ 10, we let k be the unique integer with (k − 1)² < n ≤ k², and let n′ = n − (k − 1)². Clearly, n′ ≤ k² − (k − 1)² ≤ 2k < (k − 1)². In order to obtain the graph H_n, we start with the graph H′_{k−1}, whose vertex set we denote by V′, and then add a set V″ of n′ isolated vertices to this graph. Lastly, we add an arbitrary matching, connecting every vertex of V″ to a distinct vertex of V′, obtaining the final graph H_n. It is immediate to verify that |V(H_n)| = n, and that every vertex in H_n has degree at most 9. Moreover, from Observation 2.3, graph H_n is an Ω(1)-expander.

2.3 The Cut-Matching Game

The cut-matching game was introduced by Khandekar, Rao, and Vazirani [KRV09] as part of their fast randomized algorithm for the Sparsest Cut and Balanced Cut problems. We use a variation of this game, due to Khandekar et al. [KKOV07], that we slightly modify to fit our framework. The game involves two players: the cut player, who wants to construct an expander fast, and the matching player, who wants to delay the construction of the expander. Initially, the game starts with a graph H that contains an even number n of vertices and no edges. The game is played in iterations, where in every iteration i, some set M_i of edges is added to the current graph H. The i-th iteration is played as follows. The cut player computes a partition (A_i, B_i) of V(H) with |A_i|, |B_i| ≥ n/4, |E_H(A_i, B_i)| ≤ n/100, and |A_i| ≤ |B_i|. The matching player computes any partition (A′_i, B′_i) of V(H) with |A′_i| = |B′_i|, such that A_i ⊆ A′_i, and then computes an arbitrary perfect matching M_i between A′_i and B′_i. The edges of M_i are then added to the graph H. The algorithm terminates when graph H no longer contains a partition (A, B) of V(H) with |A|, |B| ≥ n/4 and |E_H(A, B)| ≤ n/100; intuitively, at this point H contains a large subgraph that is an expander. Alternatively, it is easy to turn H into an expander by adding one last set of O(n) edges to it.
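As an aside, the explicit construction from the proof of Theorem 2.4 is easy to realize in code. The sketch below is our own illustration (the name `gabber_galil_expander` is not from the paper): it builds the 8-regular multigraph H′_k on Z_k × Z_k using the stated neighbor rule. Parallel edges and self-loops are kept, so every vertex lists exactly eight neighbors.

```python
def gabber_galil_expander(k):
    """Explicit 8-regular expander on the vertex set Z_k x Z_k, following
    the Margulis/Gabber-Galil neighbor rule quoted in the proof above.
    Returns a multigraph adjacency map: vertex (x, y) -> list of its
    eight neighbors (with multiplicity; self-loops may occur)."""
    def nbrs(x, y):
        return [((x + y) % k, y), ((x - y) % k, y),
                ((x + 2 * y + 1) % k, y), ((x - 2 * y - 1) % k, y),
                (x, (y + x) % k), (x, (y - x) % k),
                (x, (y + 2 * x + 1) % k), (x, (y - 2 * x - 1) % k)]
    return {(x, y): nbrs(x, y) for x in range(k) for y in range(k)}
```

Padding H′_{k−1} with n′ extra vertices attached by an arbitrary matching, as in the proof, then yields H_n.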
We note that the graph H is a multigraph, that is, it may contain parallel edges. The following theorem follows from the result of [KKOV07] (since we slightly modify their setting, we include the proof in the Appendix for completeness).

Theorem 2.5.
There is a constant c_CMG, such that the algorithm described above terminates after at most c_CMG · log n iterations.

We will use this cut-matching game together with algorithm CutOrCertify from Theorem 1.6, that will be used in order to implement the cut player. The matching player will be implemented by a different algorithm, that we discuss in the following section. Note that, as long as the algorithm from Theorem 1.6 produces a cut (A, B) of H with the required properties, we can use the output of this algorithm as the response of the cut player. Theorem 2.5 guarantees that, after at most O(log n) iterations of the game, the algorithm from Theorem 1.6 will return a subset S ⊆ V(H) of at least n/2 vertices, such that H[S] is an expander. Once this happens, we will terminate the cut-matching game.

We use the following theorem from [SW19].
Theorem 2.6 (Restatement of Theorem 1.3 from [SW19]). There is a deterministic algorithm, that, given a graph G = (V, E) of conductance Φ(G) = φ, for some 0 < φ ≤ 1, and a collection E′ ⊆ E of k ≤ φ|E|/10 edges of G, computes a subgraph G′ ⊆ G \ E′, that has conductance Φ(G′) ≥ φ/6. Moreover, if we denote A = V(G′) and B = V(G) \ A, then |E_G(A, B)| ≤ 4k, and Vol_G(B) ≤ 8k/φ. The total running time of the algorithm is Õ(|E|/φ).

We note that [SW19] provide a significantly stronger result, where the edges of E′ arrive in an online fashion and the graph G′ is maintained after each edge arrival. Additionally, the running time of the algorithm is Õ(k/φ) if the algorithm is given access to the adjacency list of G. However, the weaker statement above is cleaner and it is sufficient for our purposes.

2.5 Embeddings of Graphs and Expansion

Next, we define embeddings of graphs, that will later be used to certify graph expansion.
Definition 2.7.
Let G, H be two graphs with V(G) = V(H). An embedding of H into G is a collection P = {P(e) | e ∈ E(H)} of paths in G, such that for each edge e ∈ E(H), path P(e) connects the endpoints of e in G. We say that the embedding causes congestion η iff every edge e′ ∈ E(G) participates in at most η paths in P.

Next we show that, if G and H are any two graphs with |V(G)| = |V(H)|, and H is a ψ-expander that embeds into G with a small congestion, then G is also an expander, for an appropriately chosen expansion parameter. We note that this observation was used in a number of previous algorithms in order to certify that a given graph is an expander; see, e.g., [LR99, ARV09, KRV09, KKOV07, AHK10, She09].

Lemma 2.8.
Let G, H be two graphs with V(G) = V(H), such that H is a ψ-expander, for some 0 < ψ < 1. Assume that there exists an embedding P = {P(e) | e ∈ E(H)} of H into G with congestion at most η, for some η ≥ 1. Then G is a ψ′-expander, for ψ′ = ψ/η.

Proof. Consider any partition (A, B) of V(G), and assume that |A| ≤ |B|. Consider the corresponding cut (A, B) in H, and let E′ = E_H(A, B). Since H is a ψ-expander, |E′| ≥ ψ|A|. Note that for every edge e ∈ E′, its corresponding path P(e) in G must contain an edge of E_G(A, B). Since the paths in P cause congestion at most η, we get that |E_G(A, B)| ≥ |E_H(A, B)|/η ≥ ψ|A|/η.

In general, when using the cut-matching game, one can usually either embed an expander into a given graph G, or compute a sparse cut S in G. Unfortunately, it is possible that |S| is quite small in the latter case. Since each execution of the cut-matching game algorithm takes at least Ω(|E(G)|) time, we cannot afford to iteratively remove such small sparse cuts from G, if our goal is to either embed a large expander or to compute a balanced sparse cut in G in almost-linear time. In order to overcome this difficulty, we use fake edges (that were also used in [KRV09]), together with the expander pruning algorithm from Theorem 2.6.

Specifically, suppose we are given any graph G = (V, E), and let F be a collection of edges whose endpoints lie in V, but where the edges of F do not necessarily belong to G. We denote by G + F the graph obtained by adding the edges of F to G. If an edge e lies both in E and in F, then we add a new parallel copy of this edge. We note that F is allowed to be a multi-set, in which case multiple parallel copies of an edge may be added to G.

We show that, if H is an expander graph, and we embed it into a graph G + F with a small collection F of fake edges, then we can efficiently compute a large subgraph of G that is an expander.

Lemma 2.9.
Let G be an n-vertex graph, and let H be another graph with V(H) = V(G), with maximum vertex degree ∆_H, such that H is a ψ-expander, for some 0 < ψ < 1. Let F be any set of k fake edges for G, and let ∆_G be the maximum vertex degree in G + F. Assume that there exists an embedding P = {P(e) | e ∈ E(H)} of H into G + F, that causes congestion at most η, for some η ≥ 1. Assume further that k ≤ ψn/(20∆_G · η). Then there is a subgraph G′ ⊆ G that is a ψ′-expander, for ψ′ ≥ ψ/(6∆_G · η), such that, if we denote by A = V(G′) and B = V(G) \ A, then |A| ≥ n − 4kη/ψ and |E_G(A, B)| ≤ 4k. Moreover, there is a deterministic algorithm, that we call ExtractExpander, that, given G, H, P and F, computes such a graph G′ in time Õ(|E(G)| · ∆_G · η/ψ).

Proof. For convenience, we denote Ĝ = G + F. From Lemma 2.8, graph Ĝ is a ψ̂-expander, for ψ̂ = ψ/η. Moreover, from Claim 2.1:

Φ(Ĝ) ≥ Ψ(Ĝ)/∆_G ≥ ψ/(∆_G · η).

In the remainder of the proof, we apply Theorem 2.6 to the graph Ĝ and the set F of edges. Recall that the set F of fake edges has cardinality k ≤ ψn/(20∆_G · η) ≤ n · Φ(Ĝ)/20 ≤ |E(Ĝ)| · Φ(Ĝ)/10, as the connected graph Ĝ has at least n/2 edges. Therefore, we can use Theorem 2.6 to obtain a subgraph G′ ⊆ (Ĝ \ F) ⊆ G, that has conductance at least Φ(Ĝ)/6 ≥ ψ/(6∆_G · η). Denoting A = V(G′) and B = V(Ĝ) \ V(G′) = V(G) \ V(G′), Theorem 2.6 guarantees that |E_G(A, B)| ≤ |E_Ĝ(A, B)| ≤ 4k. From Claim 2.1, Ψ(G′) ≥ Φ(G′), and so graph G′ is a ψ′-expander, for ψ′ = ψ/(6∆_G · η). The running time of the algorithm is Õ(|E(Ĝ)|/Φ(Ĝ)) = Õ(|E(G)| · ∆_G · η/ψ). It remains to show that |A| is sufficiently large.

Recall that Theorem 2.6 guarantees that |E_Ĝ(A, B)| ≤ 4k, while Vol_Ĝ(B) ≤ 8k/Φ(Ĝ) ≤ 8k∆_G · η/ψ. In particular, |B| ≤ 8k∆_G · η/ψ ≤ n/2, since k ≤ ψn/(20∆_G · η). Since graph Ĝ is a ψ̂-expander, and |E_Ĝ(A, B)| ≤ 4k, we conclude that |B| ≤ |E_Ĝ(A, B)|/ψ̂ ≤ 4k/ψ̂ ≤ 4kη/ψ, and so |A| ≥ n − 4kη/ψ.

The goal of this section is to design an algorithm that will be used by the matching player. We use the following definition for routing pairs of vertex subsets.
Definition 3.1.
Assume that we are given a graph G = (V, E), and disjoint subsets A_1, B_1, A_2, B_2, . . . , A_k, B_k of its vertices, that we refer to as terminals. Assume further that for each 1 ≤ i ≤ k, |A_i| ≤ |B_i|; we denote |A_i| = n_i. A partial routing of the sets A_1, B_1, . . . , A_k, B_k consists of:

• A set M = ∪_{i=1}^k M_i ⊆ V × V of pairs of vertices, where for each 1 ≤ i ≤ k, M_i is a matching between vertices of A_i and vertices of B_i (we emphasize that the pairs (u, v) ∈ M_i do not necessarily correspond to edges of G); and

• For every pair (u, v) ∈ M of vertices, a path P(u, v) connecting u to v in G.

We denote the resulting routing by P = {P(u, v) | (u, v) ∈ M} (note that the matching M is implicitly defined by P). We say that the routing P causes congestion η, if every edge in G belongs to at most η paths in P. The value of the routing is Σ_{i=1}^k |M_i|.

We are now ready to state the main result of this section, which is an algorithm that will be used by the matching player. We note that the theorem is a generalization of a similar result that was proved in [CK19], for the special case where k = 1.

Theorem 3.2.
There is a deterministic algorithm, that, given an n-vertex graph G = (V, E) with maximum vertex degree ∆, disjoint subsets A_1, B_1, . . . , A_k, B_k of its vertices, where for all 1 ≤ i ≤ k, |A_i| ≤ |B_i| and |A_i| = n_i, and parameters z ≥ 1 and ℓ ≥ 32∆ log n, computes one of the following:

• either a partial routing of the sets A_1, B_1, . . . , A_k, B_k, of value at least Σ_i n_i − z, that causes congestion at most ℓ²; or

• a cut (X, Y) in G, with |X|, |Y| ≥ z/2, and Ψ_G(X, Y) ≤ 72∆ log n/ℓ.

The running time of the algorithm is Õ(ℓ³k|E(G)| + ℓ²kn).

(We note that the parameter ℓ in the above theorem bounds the lengths of the paths in P, that is, we will ensure that every path in P contains at most ℓ edges; however, since our algorithm does not rely on this fact, this is immaterial.)

Proof.
The proof of the theorem immediately follows from the following lemma.
Lemma 3.3.
There is a deterministic algorithm, that, given an n-vertex graph G = (V, E) with maximum vertex degree ∆, disjoint subsets A′_1, B′_1, . . . , A′_k, B′_k of its vertices, where for all 1 ≤ i ≤ k, |A′_i| ≤ |B′_i|, and |A′_i| = n′_i, and an integer ℓ ≥ 32∆ log n, computes one of the following:

• either a partial routing of the sets A′_1, B′_1, . . . , A′_k, B′_k in G, of value at least (Σ_{i=1}^k n′_i) · log n/ℓ² and congestion 1; or

• a cut (X, Y) in G, with |X|, |Y| ≥ (Σ_{i=1}^k n′_i)/2, and Ψ_G(X, Y) ≤ 72∆ log n/ℓ.

The running time of the algorithm is Õ(kℓ|E(G)| + kn).

Before we prove the lemma, we complete the proof of Theorem 3.2 using it. Throughout the algorithm, we maintain the matchings M_1, . . . , M_k, where M_i is a matching between vertices of A_i and vertices of B_i, and a routing P = {P(u, v) | (u, v) ∈ ∪_i M_i}. Initially, we set M_i = ∅ for all i, and P = ∅. We then iterate. In every iteration, for each 1 ≤ i ≤ k, we let A′_i ⊆ A_i and B′_i ⊆ B_i be the subsets of vertices that do not participate in the matching M_i, and we denote n′_i = |A′_i|; since |A_i| ≤ |B_i|, we are guaranteed that |A′_i| ≤ |B′_i|. We also denote N′ = Σ_i n′_i. If N′ ≤ z, then we terminate the algorithm, and return the current matchings M_1, . . . , M_k, together with their routing P. Otherwise, we apply Lemma 3.3 to graph G and vertex sets A′_1, B′_1, . . . , A′_k, B′_k. If the outcome is a cut (X, Y) in G, with |X|, |Y| ≥ N′/2, and Ψ_G(X, Y) ≤ 72∆ log n/ℓ, then we terminate the algorithm, and return the cut (X, Y). Notice that, since N′ > z holds, we are guaranteed that |X|, |Y| ≥ z/2, as required. Therefore, we assume from now on that, whenever Lemma 3.3 is called, it returns a partial routing ((M′_1, . . . , M′_k), P′) of the vertex sets A′_1, B′_1, . . . , A′_k, B′_k, of value at least N′ log n/ℓ², that causes congestion 1. We then add the paths in P′ to P, and for each 1 ≤ i ≤ k, we add the matching M′_i to M_i, and continue to the next iteration.

The key in the analysis of the algorithm is to bound the number of iterations. For all j ≥ 1, let N′_j denote the value of the parameter N′ at the beginning of iteration j. Then, since Lemma 3.3 returns a routing of value at least N′_j log n/ℓ², we get that N′_{j+1} ≤ N′_j(1 − log n/ℓ²). Therefore, after ℓ² iterations, the parameter N′_j is guaranteed to fall below z, and the algorithm will terminate. Notice that the congestion of the final routing P is bounded by the number of iterations, ℓ². Moreover, since the running time of each iteration is Õ(kℓ|E(G)| + kn), the total running time of the algorithm is Õ(kℓ³|E(G)| + kℓ²n). In order to complete the proof of Theorem 3.2, it is now enough to prove Lemma 3.3.
Our algorithm is very similar to that employed in [CK19], and consists of two phases. In the first phase, we employ a simple greedy algorithm that attempts to compute a partial routing of the sets A′_1, B′_1, . . . , A′_k, B′_k. If the resulting routing contains enough paths, then we terminate the algorithm and return this routing. Otherwise, we proceed to the second phase, where we compute the desired cut.

Phase 1: Route.
We use a simple greedy algorithm. Initially, we set M_i = ∅ for all 1 ≤ i ≤ k, and we set P = ∅. The algorithm then iterates, as long as there is a path P in G of length at most ℓ, that, for some 1 ≤ i ≤ k, connects some vertex v ∈ A′_i to some vertex u ∈ B′_i. The algorithm computes any such path P, adds (u, v) to M_i, and adds the path P to P, denoting P = P(u, v). We then delete every edge of P from G, we delete v from A′_i and u from B′_i, and then continue to the next iteration. The algorithm terminates when, for each 1 ≤ i ≤ k, every path in the remaining graph G connecting a vertex of A′_i to a vertex of B′_i has length greater than ℓ (or A′_i = ∅ holds). It is easy to verify that, for each 1 ≤ i ≤ k, the final set M_i is a matching between vertices of A′_i and vertices of B′_i, and that P is a collection of edge-disjoint paths, of length at most ℓ each, containing, for every pair (u, v) ∈ ∪_i M_i, a path P(u, v) connecting u to v in G. If Σ_i |M_i| ≥ (Σ_{i=1}^k n′_i) · log n/ℓ², then we terminate the algorithm, obtaining the desired partial routing. Otherwise, we continue to the second phase, where a cut (X, Y) will be computed.

We implement the algorithm for the first phase by using Even-Shiloach trees.
Lemma 3.4 ([ES81, Din06]). There is a deterministic data structure, called an ES-tree, that, given an unweighted undirected n-vertex graph G undergoing edge deletions, a root vertex s, and a depth parameter ℓ, maintains, for every vertex v ∈ V(G), a value δ(s, v), such that δ(s, v) = dist_G(s, v) if dist_G(s, v) ≤ ℓ, and δ(s, v) = ∞ otherwise (here, dist_G(s, v) is the distance between s and v in the current graph G). The data structure supports shortest-path queries: given a vertex v, return a shortest path connecting s to v in G, if dist_G(s, v) ≤ ℓ, and return ∞ otherwise. The total update time of the data structure is Õ(|E(G)|ℓ + n), and the time needed to process each query is O(|P|), where P is the path returned in response to the query.

We construct k graphs G_1, . . . , G_k, where graph G_i is obtained from a copy of G, by adding a source vertex s_i, that connects with an edge to every vertex in A′_i, and a destination vertex t_i, that connects with an edge to every vertex in B′_i. For each 1 ≤ i ≤ k, we then maintain an ES-tree in graph G_i, from source s_i, up to depth ℓ + 2. Note that the total update time needed in order to maintain all these ES-trees under edge deletions is Õ(ℓk|E(G)| + kn). Our algorithm processes the graphs G_i one by one. When graph G_i is processed, we perform a number of iterations, as long as dist_{G_i}(s_i, t_i) ≤ ℓ + 2. In each such iteration, we perform a shortest-path query in the corresponding ES-tree for vertex t_i, obtaining a path P, of length at most ℓ + 2, connecting s_i to t_i. By discarding the first and the last edge on this path, we obtain a path P′, of length at most ℓ, connecting some vertex v ∈ A′_i to some vertex u ∈ B′_i. We delete all edges on path P′ from all copies G_1, . . . , G_k of the graph G, and we delete v and u from G_i, updating all corresponding ES-trees. Note that the total time needed to respond to all queries is O(|E(G)|), as, whenever a path P is returned, all its edges are deleted from all graphs G_i. Therefore, the total running time of the algorithm is Õ(kℓ|E(G)| + kn).

Phase 2: Cut.
We use the following standard algorithm that follows the ball-growing paradigm.
Claim 3.5.
There is a deterministic algorithm, that, given an unweighted n′-vertex graph H′ with maximum vertex degree at most ∆, and two sets S, T of its vertices, such that every path connecting a vertex of S to a vertex of T in H′ has length greater than ℓ, for some parameter ℓ > 0, computes, in time O(|E(H′)|), a cut Z in H′, such that:

• |Z| ≤ n′/2;

• either S ⊆ Z or T ⊆ Z holds; and

• |E_{H′}(Z, V(H′) \ Z)| < (8∆ log n′/ℓ) · |Z|.

Proof. Let S_0 = S, and for all j > 0, let S_j contain all vertices of S_{j−1}, and all neighbors of vertices of S_{j−1} in graph H′. We also define T_0 = T, and for all j > 0, we let T_j contain all vertices of T_{j−1}, and all neighbors of vertices of T_{j−1} in graph H′. We need the following standard observation:

Observation 3.6. There is an index 0 ≤ j < ⌈ℓ/2⌉, such that either (i) |S_{j+1}| < n′/2 and |E_{H′}(S_j, V(H′) \ S_j)| < (8∆ log n′/ℓ) · |S_j|; or (ii) |T_{j+1}| < n′/2 and |E_{H′}(T_j, V(H′) \ T_j)| < (8∆ log n′/ℓ) · |T_j|.

Proof. Assume for contradiction that the claim is false. Let j′ be the smallest index, such that |S_{j′}| > n′/2 or |T_{j′}| > n′/2. Assume w.l.o.g. that |T_{j′}| > n′/2, and assume first that j′ < ℓ/2. Then for all 1 ≤ j ≤ ⌈ℓ/2⌉, |S_j| < n′/2 (as otherwise the sets S_j and T_{j′} would intersect, and we would obtain a path connecting a vertex of S to a vertex of T, of length at most ℓ). However, from our assumption, for all 0 ≤ j < ⌈ℓ/2⌉, |E_{H′}(S_j, V(H′) \ S_j)| ≥ (8∆ log n′/ℓ) · |S_j|. Since the maximum vertex degree in H′ is bounded by ∆, we get that |S_{j+1} \ S_j| ≥ (8 log n′/ℓ) · |S_j|, and so |S_{j+1}| ≥ |S_j| · (1 + 8 log n′/ℓ). Overall, we get that |S_{⌈ℓ/2⌉}| ≥ |S_0| · (1 + 8 log n′/ℓ)^{⌈ℓ/2⌉} > n′, a contradiction.

Assume now that j′ ≥ ℓ/2. Then we get that for all 1 ≤ j < ⌈ℓ/2⌉, |T_j| ≤ n′/2. Applying the same argument to the sets T_j, we conclude that |T_{⌈ℓ/2⌉}| > n′, a contradiction.

The algorithm performs two BFS searches in H′ simultaneously, one starting from S and the other starting from T, until an index j with the properties guaranteed by Observation 3.6 is found. If |S_{j+1}| < n′/2 and |E_{H′}(S_j, V(H′) \ S_j)| < (8∆ log n′/ℓ) · |S_j|, then we return Z = S_j; otherwise, we return Z = T_j.

We are now ready to describe the algorithm for Phase 2. For convenience, we denote N = Σ_{i=1}^k n′_i. Recall that Phase 2 is only executed if the routing P computed in Phase 1 contains fewer than N log n/ℓ² paths. Let E′ be the set of all edges lying on the paths in P, so |E′| ≤ N log n/ℓ (as the length of every path in P is at most ℓ), and let H = G \ E′. We also denote, for all 1 ≤ i ≤ k, by A″_i ⊆ A′_i the subset of all vertices of the original set A′_i that do not participate in the matching M_i, and we define B″_i ⊆ B′_i similarly. Notice that, for all 1 ≤ i ≤ k, if A″_i, B″_i ≠ ∅, then the length of the shortest path connecting a vertex of A″_i to a vertex of B″_i in H is greater than ℓ.

Our algorithm is iterative. We maintain a subgraph H′ of H, that is initially set to be H. In every iteration i, we compute a subset U_i ⊆ V(H′) of vertices of H′, such that |U_i| ≤ |V(H′)|/2 and |E_{H′}(U_i, V(H′) \ U_i)| < (8∆ log n/ℓ) · |U_i|. We then delete, from graph H′, all vertices of U_i, and continue to the next iteration. Throughout the algorithm, we may update the sets A″_j and B″_j, by removing some vertices from them.

The algorithm is executed as long as there is some index 1 ≤ j ≤ k with A″_j, B″_j ≠ ∅, and as long as |∪_i U_i| ≤ n/4; if either of these conditions no longer holds, the algorithm is terminated. We now describe the i-th iteration of the algorithm, and we let 1 ≤ j ≤ k be an index for which A″_j, B″_j ≠ ∅. We apply the algorithm from Claim 3.5 to the current graph H′, and the sets S = A″_j, T = B″_j of vertices; recall that every path connecting a vertex of A″_j to a vertex of B″_j in H′ has length greater than ℓ. Let Z be the cut returned by the algorithm. We set U_i = Z, and we denote E_i = E_{H′}(Z, V(H′) \ Z). Recall that we are guaranteed that |E_i| ≤ (8∆ log n/ℓ) · |U_i|. Moreover, either A″_j ⊆ U_i, or B″_j ⊆ U_i. We update the current graph H′, by deleting the vertices of U_i from it. For all 1 ≤ j′ ≤ k, we delete from A″_{j′} and from B″_{j′} all vertices that lie in the set U_i.

Let q be the number of iterations in the algorithm; it is easy to see that q ≤ k. Therefore, the running time of the algorithm in Phase 2 so far is O(k · |E(H)|) = O(k · |E(G)|). Let U = ∪_{i=1}^q U_i, and let Ê = ∪_{i=1}^q E_i.

If the algorithm terminated because |U| ≥ n/4, then we are guaranteed that |U| ≥ N/2, as N ≤ n/2. Otherwise, for all 1 ≤ j ≤ k, either A″_j = ∅ (and so every vertex of A′_j is matched or lies in U), or B″_j = ∅ (and so every vertex of B′_j is matched or lies in U). In this case, we get that:

|U| ≥ Σ_{j=1}^k n′_j − |P| ≥ N − N log n/ℓ² ≥ N/2,

since we have assumed that ℓ ≥ 32∆ log n. Moreover, it is immediate to verify that |Ê| ≤ (8∆ log n/ℓ) · |U|.

Consider now the original graph H. We define a cut (X, Y) in H by setting X = U and Y = V(H) \ U. Since |E(G) \ E(H)| = |E′| ≤ N log n/ℓ ≤ 2|U| log n/ℓ, we get that |E_G(X, Y)| ≤ |Ê| + |E′| ≤ 24∆ log n/ℓ · |X|.

Next, we claim that |X| ≤ 3n/4. Indeed, we are guaranteed that Σ_{i=1}^{q−1}|U_i| ≤ n/4, and so |U_q| ≤ (n − Σ_{i=1}^{q−1}|U_i|)/2 ≤ n/2. We then get that altogether, |X| = Σ_{i=1}^q |U_i| ≤ n/2 + n/4 ≤ 3n/4. In particular, |Y| ≥ n/4, and |Y| ≥ |X|/3. Therefore,

|E_G(X, Y)| ≤ 24∆ log n/ℓ · |X| ≤ 72∆ log n/ℓ · min{|X|, |Y|},

and so Ψ_G(X, Y) ≤ 72∆ log n/ℓ. As observed already, |X| ≥ N/2 ≥ Σ_i n′_i/2, and |Y| ≥ n/4 ≥ Σ_i n′_i/2, as Σ_i n′_i ≤ n/2.

We obtain the following corollary of Theorem 3.2, by applying it with the parameter ℓ = 144∆ log n/ψ.

Corollary 3.7.
There is a deterministic algorithm, that we call RouteOrCut, that, given an n-vertex graph G = (V, E) with maximum vertex degree ∆, disjoint subsets A_1, B_1, …, A_k, B_k of its vertices, where for all 1 ≤ i ≤ k, |A_i| ≤ |B_i| and |A_i| = n_i, an integer z ≥ 1, and a parameter 0 < ψ < 1, computes one of the following:

• either a partial routing of the sets A_1, B_1, …, A_k, B_k, of value at least Σ_i n_i − z, that causes congestion at most O(∆ log n/ψ); or

• a cut (X, Y) in G, with |X|, |Y| ≥ z/2, and Ψ_G(X, Y) ≤ ψ.

The running time of the algorithm is Õ(∆k|E(G)|/ψ + k∆n/ψ).

An Improved Algorithm for k = 1

For the special case where k = 1, we provide a somewhat faster algorithm, summarized in the following theorem. We note that this algorithm is not essential for the proof of our main result (Theorem 1.2), but we can use it to provide a self-contained proof of the theorem with a somewhat slower running time, which we believe is of independent interest.

Theorem 3.8.
There is a deterministic algorithm, that we call RouteOrCut-1Pair, that, given a connected n-vertex m-edge graph G = (V, E) with maximum vertex degree ∆, two disjoint subsets A_1, B_1 of its vertices, where |A_1| ≤ |B_1| and |A_1| = n_1, an integer z ≥ 1, and a parameter 0 < ψ < 1, computes one of the following:

• either a partial routing of the sets A_1, B_1, of value at least n_1 − z, that causes congestion at most 4∆/ψ; or

• a cut (X, Y) in G, with |X|, |Y| ≥ z/∆, and Ψ_G(X, Y) ≤ ψ.

The running time of the algorithm is O(m∆ log m/ψ).

Proof. Theorem 3.8 is an easy application of either the bounded-height variant of the push-relabel-based algorithm of Henzinger, Rao and Wang [HRW17] for max-flow, or the bounded-height variant of the blocking-flow-based algorithm of Orecchia and Zhu [OA14]. Both algorithms are designed to have local running time, that is, they may not read the whole graph. However, we do not need to use this property here.
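The sparsity guarantee Ψ_G(X, Y) appearing in the two statements above is simply the number of edges crossing the cut, divided by the cardinality of the smaller side, and it is cheap to verify directly. A minimal sketch (the adjacency-list representation and the helper name are ours, for illustration only):

```python
# Sketch: checking the sparsity of a cut (X, Y), where
# Psi_G(X, Y) = |E_G(X, Y)| / min(|X|, |Y|), as in the preliminaries.

def cut_sparsity(graph, X):
    """graph: dict mapping each vertex to the set of its neighbours (undirected)."""
    X = set(X)
    Y = set(graph) - X
    # Each undirected crossing edge is seen exactly once, from its X-endpoint.
    crossing = sum(1 for u in X for v in graph[u] if v in Y)
    return crossing / min(len(X), len(Y))

# A 4-cycle 0-1-2-3-0 split into two adjacent pairs: 2 crossing edges, min side 2.
g = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
assert cut_sparsity(g, {0, 1}) == 1.0
assert cut_sparsity(g, {0, 2}) == 2.0   # the "odd/even" cut: every edge crosses
```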
We start by introducing some basic notation. Suppose we are given an unweighted undirected graph G = (V, E). We let S : V → Z_{≥0} denote a source function and T : V → Z_{≥0} denote a sink function. For a vertex v ∈ V, we sometimes call T(v) its sink capacity. Intuitively, initially, for every vertex v ∈ V, we have S(v) units of mass (substance that needs to be routed) placed on vertex v. Additionally, every vertex v ∈ V may absorb up to T(v) units of mass. Our goal is to route the initial mass across the graph (using standard single-commodity flow) so that all mass is absorbed. We use a flow function f : V × V → R, that must satisfy: (i) for all u, v ∈ V, f(u, v) = −f(v, u); and (ii) if (u, v) ∉ E, then f(u, v) = 0. Whenever f(u, v) > 0, we interpret it as f(u, v) units of mass being sent via the edge (u, v) from u to v, while f(u, v) < 0 means that |f(u, v)| units of mass are sent from v to u. We require that Σ_{v∈V} S(v) ≤ Σ_{u∈V} T(u), that is, the total amount of mass that needs to be routed is bounded by the total sink capacity of the vertices. Given a flow f : V × V → R, the congestion of f is max_{e∈E} |f(e)|. We say that f is a preflow if, for every vertex v ∈ V, Σ_{u∈V} f(v, u) ≤ S(v); in other words, the net amount of mass routed away from any node v is bounded by the amount of the source mass S(v). For every vertex v ∈ V, we also denote by f(v) = S(v) + Σ_{u∈V} f(u, v) the amount of mass that remains at v after the routing f. We define the absorbed mass of a node v as ab_f(v) = min{f(v), T(v)}, and the excess of v as ex_f(v) = f(v) − ab_f(v), measuring the amount of flow that remains at v and cannot be absorbed by it. Note that, if ex_f(v) = 0 for every vertex v, then all the mass is successfully routed to the sinks. Let ex_f(V) = Σ_v ex_f(v) denote the total amount of mass that is not absorbed by the sinks.

The following lemma easily follows from Theorem 3.3 in [NSW17] (or Theorem 3.1 in [HRW17]).

Lemma 3.9.
There is a deterministic algorithm, that, given an m-edge graph G = (V, E), a source function S : V → Z_{≥0}, a sink function T : V → Z_{≥0}, and a parameter 0 < φ ≤ 1, such that Σ_{v∈V} S(v) ≤ Σ_{v∈V} T(v), and for every vertex v ∈ V, S(v) ≤ deg_G(v) and T(v) ≤ deg_G(v), computes, in time O(m log m/φ), an integral preflow f of congestion at most 4/φ. Moreover, if the total excess ex_f(V) > 0, then the algorithm also computes a cut (S, S̄) with Φ_G(S) < φ and Vol_G(S), Vol_G(S̄) ≥ ex_f(V).

We are now ready to complete the proof of Theorem 3.8. For convenience, we denote A_1 by A, B_1 by B, and n_1 by N. For the input graph G = (V, E), we define a source function as follows: for all v ∈ A, S(v) = 1, and for all other vertices, S(v) = 0. Similarly, we define the sink function to be T(v) = 1 if v ∈ B, and T(v) = 0 otherwise.

We then apply the algorithm from Lemma 3.9 to graph G, source function S, sink function T, and parameter φ = ψ/∆. Let f be the resulting preflow with congestion at most 4/φ = 4∆/ψ. The running time of the algorithm is O(m log m/φ) = O(m∆ log m/ψ).

We now consider two cases. The first case happens when ex_f(V) ≥ z. In this case, we obtain a cut (X, Y) with Φ_G(X, Y) < φ and Vol_G(X), Vol_G(Y) ≥ ex_f(V) ≥ z. Since the maximum vertex degree in G is bounded by ∆, we get that |X|, |Y| ≥ z/∆. Moreover, from Claim 2.1, Ψ_G(X, Y) ≤ ∆Φ_G(X, Y) ≤ ∆φ ≤ ψ.

Consider now the second case, where ex_f(V) < z. Let B′ be a multi-set of vertices, where for each vertex v ∈ V, we add ex_f(v) copies of v into B′ (since f is integral, so is ex_f(v) for all v ∈ V). Then |B′| ≤ z, and f defines a valid integral flow from A to B ∪ B′, with congestion at most 4∆/ψ, such that all but at most z flow units terminate at distinct vertices of B.
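The source/sink bookkeeping used above can be made concrete. The sketch below is purely illustrative (the names mirror the text, not any actual implementation): given an antisymmetric flow f, it computes the remaining mass f(v), the absorbed mass ab_f(v), and the excess ex_f(v) at every vertex.

```python
# Sketch of the flow bookkeeping: S is the source function, T the sink function,
# and f an antisymmetric flow given as a dict on ordered vertex pairs.

def excess(vertices, S, T, f):
    ex = {}
    for v in vertices:
        mass = S[v] + sum(f.get((u, v), 0) for u in vertices)  # f(v)
        absorbed = min(mass, T[v])                              # ab_f(v)
        ex[v] = mass - absorbed                                 # ex_f(v)
    return ex

# Route one unit from a to c along the path a-b-c; c absorbs it, so all excess is 0.
V = ['a', 'b', 'c']
S = {'a': 1, 'b': 0, 'c': 0}
T = {'a': 0, 'b': 0, 'c': 1}
f = {('a', 'b'): 1, ('b', 'a'): -1, ('b', 'c'): 1, ('c', 'b'): -1}
assert excess(V, S, T, f) == {'a': 0, 'b': 0, 'c': 0}
```

If the sinks cannot absorb everything, the leftover shows up as excess, exactly the quantity ex_f(V) that Lemma 3.9 converts into a low-conductance cut.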
It now remains to compute a decomposition of f into flow-paths, and then discard the flow-paths that terminate at vertices of B′. This can be done by using, for example, the link-cut tree [ST83], or simply a standard Depth-First Search. For the latter, construct a graph G′, obtained from G by creating |f(e)| parallel copies of every edge e ∈ E(G), that are directed along the direction of the flow f on e; recall that |f(e)| ≤ 4∆/ψ. We also add a source s that connects to every vertex of A with a directed edge. We then perform a DFS search of the resulting graph G′, starting from s. If the DFS search leaves some vertex v without reaching any vertex of B ∪ B′, then we delete v from the graph G′. If the search reaches a vertex v ∈ B ∪ B′, then we retrace the current path from s to v, adding it to the path-decomposition that we are constructing, and deleting all edges on this path from G′. We then restart the DFS search. It is easy to verify that every edge is traversed at most twice throughout this procedure, and so the total running time is O(|E(G′)|) = O(|E(G)| · ∆/ψ). Let P be the final collection of paths that we obtain. Then every vertex of A has exactly one path in P originating from it, and all but at most z paths in P terminate at distinct vertices of B. We discard from P all paths that do not terminate at vertices of B, obtaining the desired final collection of paths. The total running time of the algorithm is O(m∆ log m/ψ).

The goal of this section is to prove Theorem 1.6. We do so using the following theorem, that can be thought of as a restatement of Theorem 1.6 in a way that will be more convenient to work with in our inductive proof. Recall that c_CMG is the constant from Theorem 2.5.
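Before proceeding, the DFS-based flow-path decomposition from the proof of Theorem 3.8 above can be sketched as follows. This is a simplified illustration, not the paper's implementation: it assumes the sources and sinks are disjoint and the flow is acyclic (which can always be arranged by cancelling flow on cycles), and it omits the vertex deletions that keep the running time linear.

```python
# Sketch: decompose an integral flow into unit flow-paths by repeatedly walking
# from a source along edges with remaining positive flow until a sink is hit,
# then cancelling one flow unit along the recorded path.

def decompose(flow, sources, sinks):
    """flow: dict (u, v) -> nonnegative integer units routed from u to v."""
    f = dict(flow)
    paths = []
    for s in sources:
        while True:
            path, v = [s], s
            while v not in sinks:
                nxt = next((w for (u, w), c in f.items() if u == v and c > 0), None)
                if nxt is None:
                    break
                path.append(nxt)
                v = nxt
            if v not in sinks:
                break                    # remaining mass from s is excess; discard it
            for u, w in zip(path, path[1:]):
                f[(u, w)] -= 1           # cancel one flow unit along the path
            paths.append(path)
    return paths

# One unit a -> b -> t reaches the sink; the unit from c dead-ends at b (excess).
f = {('a', 'b'): 1, ('b', 't'): 1, ('c', 'b'): 1}
assert decompose(f, ['a', 'c'], {'t'}) == [['a', 'b', 't']]
```

Discarding the walks that fail to reach a sink corresponds to discarding the paths terminating in B′ in the proof.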
Theorem 4.1.
There are universal constants c_0, N_0 and a deterministic algorithm, that, given an n-vertex graph G = (V, E) and parameters N, q, with N > N_0 an integral power of 2, and q ≥ 1 an integer, such that n ≤ N^q, and the maximum vertex degree in G is at most c_CMG log n, computes one of the following:

• either a cut (A, B) in G with |A|, |B| ≥ n/4 and |E_G(A, B)| ≤ n/100; or

• a subset S ⊆ V of at least n/2 vertices, such that Ψ(G[S]) ≥ 1/(q log N)^q.

The running time of the algorithm is O(N^{q+1} · (q log N)^{c_0 q}).

We first show that Theorem 1.6 follows from Theorem 4.1. The parameter N_0 in Theorem 1.6 remains the same as that in Theorem 4.1. Assume that we are given an n-vertex graph and a parameter r, such that n^{1/r} ≥ N_0. We set q = r, and we let N be the smallest integral power of 2 such that N ≥ n^{1/q}; observe that (N/2)^q ≤ n ≤ N^q and N ≥ N_0 hold. Moreover, since q log(N/2) ≤ log n, if N_0 is a large enough constant, then q log N ≤ 2 log n.

We apply the algorithm from Theorem 4.1 to graph G with the parameter q. If the outcome is a cut (A, B) with |A|, |B| ≥ n/4 and |E(A, B)| ≤ n/100, then we return this cut. Otherwise, we obtain a subset S ⊆ V of at least n/2 vertices with Ψ(G[S]) ≥ 1/(q log N)^q ≥ 1/(2 log n)^q ≥ Ω(1/(log n)^{O(r)}), as required. Lastly, the running time of the algorithm is O(N^{q+1} · (q log N)^{c_0 q}) = O(n^{1+O(1/r)} · (log n)^{O(r)}).

The remainder of this section is dedicated to proving Theorem 4.1. The proof is by induction on the parameter q. We start with the base case where q = 1 and then show the step for q > 1.

4.1 Base Case: q = 1

The algorithm uses the following key theorem.
Theorem 4.2.
There is a deterministic algorithm that, given as input a graph G′ = (V′, E′) with |V′| = n′ and maximum vertex degree ∆ = O(log n′), in time Õ((n′)²) returns one of the following:

• either a subset S ⊆ V′ of at least 2n′/3 vertices such that G′[S] is an Ω(1/log n′)-expander; or

• a cut (X, Y) in G′ with |X|, |Y| ≥ Ω(n′/log n′) and Ψ_{G′}(X, Y) ≤ 1/100.

We prove Theorem 4.2 below, after we complete the proof of Theorem 1.6 for the case where q = 1 using it. Our algorithm performs a number of iterations. We maintain a subgraph G′ ⊆ G; at the beginning of the algorithm, G′ = G. In the i-th iteration, we compute a subset S_i ⊆ V(G′) of vertices, and then update the graph G′ by deleting the vertices of S_i from it. The iterations are performed as long as |⋃_{i′} S_{i′}| < n/4.

In the i-th iteration, we consider the current graph G′, denoting |V(G′)| = n′. Note that, since we assume that |⋃_{i′<i} S_{i′}| < n/4, we get that n′ ≥ 3n/4. We then apply Theorem 4.2 to graph G′. If the outcome is a subset S ⊆ V′ of at least 2n′/3 vertices, such that G′[S] is an Ω(1/log n′)-expander, then we terminate the algorithm and return S; in this case we say that the iteration terminated with an expander. Notice that, since n′ ≥ 3n/4, and |S| ≥ 2n′/3, we are guaranteed that |S| ≥ n/2. Moreover, assuming that N_0 is a large enough constant, the expansion of G[S] is at least Ω(1/log n′) ≥ 1/log n ≥ 1/log N, as required. Otherwise, we obtain a cut (X, Y) in G′ with |X|, |Y| ≥ Ω(n′/log n′) and Ψ_{G′}(X, Y) ≤ 1/100; we assume w.l.o.g. that |X| ≤ |Y|. We then set S_i = X, update the graph G′ by removing the vertices of S_i from it, and continue to the next iteration. If the algorithm does not terminate with a 1/log N-expander, then it terminates once |⋃_{i′} S_{i′}| ≥ n/4; we let i denote the number of iterations in this case. Since we are guaranteed that |⋃_{i′<i} S_{i′}| < n/4, while |S_i| ≤ n′/2 ≤ n/2, we get that n/4 ≤ |⋃_{i′=1}^{i} S_{i′}| ≤ 3n/4. Let A = ⋃_{i′=1}^{i} S_{i′}, and let B = V(G) \ A. From the above discussion, we are guaranteed that |A|, |B| ≥ n/4, and moreover, since the cut S_{i′} that we obtain in every iteration i′ has sparsity at most 1/100 in its current graph G′, it is easy to verify that |E_G(A, B)| ≤ |A|/100 ≤ n/100. We then return (A, B) as the algorithm's outcome.

Since for all 1 ≤ i′ ≤ i, |S_{i′}| ≥ Ω(n/log n), the number of iterations is bounded by O(log n), and so the total running time of the algorithm is Õ(n²) = O(N² log^{c_0} N), if c_0 is a large enough constant. In the remainder of this subsection we focus on the proof of Theorem 4.2.

Proof of Theorem 4.2

As our first step, we use Algorithm ConstructExpander from Theorem 2.4 to construct a ψ*-expander H = H_{n′} on n′ vertices, with ψ* = Ψ(H) = Ω(1), such that the maximum vertex degree in H is at most 9. We identify the vertices of H with the vertices of G′, so V(H) = V′. The running time of this step is O(n′). Using a simple greedy algorithm, and the fact that the maximum vertex degree in H is at most 9, we can partition the set E(H) of edges into 17 matchings, M_1, …, M_17. We then perform up to 17 iterations; in each iteration i, we will either embed the edges of M_i into G′, after possibly adding a small number of fake edges to it, or we will compute the desired cut (A, B) in G′.

The i-th iteration is executed as follows. We denote M_i = {e_1, …, e_{k_i}}, where the edges are indexed in an arbitrary order. For each 1 ≤ j ≤ k_i, denote e_j = (u_j, v_j). We define two corresponding sets A_j, B_j of vertices of G′, where A_j = {u_j} and B_j = {v_j}. We then apply Algorithm RouteOrCut from Corollary 3.7 to graph G′, the sets A_1, B_1, …, A_{k_i}, B_{k_i} of its vertices, integer z = ⌈n′/(c log n′)⌉ for some large enough constant c, and parameter ψ = 1/100. The running time of this step is Õ(k_i|E(G′)|∆/ψ + k_i n′∆/ψ) = Õ((n′)²). We now consider two cases. If the algorithm returns a cut (X, Y), with Ψ_{G′}(X, Y) ≤ ψ, then we terminate the algorithm and return this cut; in this case, |X|, |Y| ≥ z/2 ≥ Ω(n′/log n′) must hold. Otherwise, the algorithm computes a partial routing P of the sets A_1, B_1, …, A_{k_i}, B_{k_i}, of value at least k_i − z, that causes congestion at most O(∆ log n′/ψ) = O(log n′). Let M′_i ⊆ M_i be the subset of edges that are routed in P, so for every edge e_j ∈ M′_i there is a path P(e_j) ∈ P connecting its endpoints. Let M″_i ⊆ M_i denote the set of the remaining edges, so |M″_i| ≤ z.
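The greedy partition of E(H) into matchings used at the start of this proof works for any bounded-degree graph: when an edge is coloured, each of its endpoints blocks at most D − 1 colours (where D is the maximum degree), so 2D − 1 colour classes always suffice, and for D = 9 this gives the 17 matchings above. A sketch (the function name is ours, for illustration):

```python
# Sketch: greedily partition the edge set of a graph with maximum degree
# max_degree into at most 2 * max_degree - 1 matchings, by giving each edge
# the smallest colour not yet used at either endpoint.

def greedy_matchings(edges, max_degree):
    colours = 2 * max_degree - 1
    used = {}                                  # vertex -> set of colours used at it
    matchings = [[] for _ in range(colours)]
    for u, v in edges:
        c = next(i for i in range(colours)
                 if i not in used.get(u, set()) and i not in used.get(v, set()))
        matchings[c].append((u, v))
        used.setdefault(u, set()).add(c)
        used.setdefault(v, set()).add(c)
    return [m for m in matchings if m]

# A triangle (max degree 2) needs 3 = 2*2 - 1 matchings, one edge each.
tri = [(0, 1), (1, 2), (2, 0)]
assert len(greedy_matchings(tri, 2)) == 3
```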
We let F_i = M″_i be a set of fake edges in graph G′, that we use in order to route the edges of M″_i. For each edge e_j ∈ M″_i, we let P(e_j) be the path consisting of the new fake copy of e_j in G′. Let P_i = P ∪ {P(e_j) | e_j ∈ M″_i}. We have now obtained an embedding of the edges of M_i into G′ + F_i, with congestion O(log n′).

If the algorithm never terminates with a cut (X, Y) with Ψ_{G′}(X, Y) ≤ ψ, then, after 17 iterations, we obtain an embedding P* = ⋃_{i=1}^{17} P_i of H into G′ + F, where F = ⋃_{i=1}^{17} F_i is a set of at most 17z fake edges; the congestion of the embedding is η = O(log n′). Moreover, if we denote by ∆_G the maximum vertex degree in G′ + F, then ∆_G ≤ 17 + ∆ ≤ O(log n′). We apply Algorithm ExtractExpander from Lemma 2.9 to graphs G′, H, the set F of fake edges, and the embedding P* of H into G′. Since ψ*n′/(∆_G η) ≥ ψ*n′/O(∆ log n′) ≥ n′/O(log² n′), by letting the constant c used in the definition of z be large enough, we ensure that |F| ≤ 17z ≤ ψ*n′/(∆_G η), as required. The algorithm from Lemma 2.9 then computes a subgraph G″ ⊆ G′ that is a ψ′-expander, where ψ′ ≥ ψ*/(∆_G η) = Ω(1/log² n′), with:

|V(G″)| ≥ n′ − O(zη/ψ*) = n′ − O(z log n′).

By letting c be a large enough constant, we can ensure that |V(G″)| ≥ 2n′/3. The running time of Algorithm ExtractExpander from Lemma 2.9 is Õ(|E(G′)|∆_G η/ψ*) = Õ(n′), and so the total running time of the algorithm is Õ((n′)²).

4.2 Step: q > 1

Suppose we are given an integer q > 1. We assume that Theorem 4.1 holds for q − 1: that is, there is a deterministic algorithm, that we denote by A(q − 1), that, given an n-vertex graph G with maximum vertex degree at most c_CMG log n and n ≤ N^{q−1}, for some N > N_0, either returns a cut (A, B) in G with |A|, |B| ≥ n/4 and |E(A, B)| ≤ n/100, or a subset S ⊆ V(G) of at least n/2 vertices with Ψ(G[S]) ≥ ψ_{q−1}, where ψ_{q−1} = 1/((q − 1) log N)^{q−1}. We denote the running time of this algorithm by T(q − 1) = O(N^q · ((q − 1) log N)^{c_0(q−1)}). Throughout the proof, we also denote ψ_q = 1/(q log N)^q.

We now prove that the theorem holds for the given value of q, by invoking Algorithm A(q − 1).

Theorem 4.3.
There is a deterministic algorithm that, given as input an n′-vertex graph G′ = (V′, E′) and integers N, q, with N > N_0 an integral power of 2 and q > 1, such that N^{q−1}/2 ≤ n′ ≤ N^q, and maximum vertex degree of G′ is ∆ = O(log n′), returns one of the following:

• either a subset S ⊆ V′ of at least 2n′/3 vertices such that G′[S] is a ψ_q-expander; or

• a cut (X, Y) in G′ with |X|, |Y| ≥ Ω(ψ_{q−1} · n′/log n′) and Ψ_{G′}(X, Y) ≤ 1/100.

The running time of the algorithm is O(N^{q+1} · (q log N)^{q+O(1)}) + O(N · log n′) · T(q − 1).

We prove Theorem 4.3 below, after we complete the proof of Theorem 1.6 for the current value of q using it. Note that we can assume that n > N^{q−1}, since otherwise we can use algorithm A(q − 1) directly, to obtain either a cut (A, B) in G with |A|, |B| ≥ n/4 and |E(A, B)| ≤ n/100, or a subset S ⊆ V(G) of at least n/2 vertices with Ψ(G[S]) ≥ ψ_{q−1} ≥ ψ_q, in time T(q − 1) = O(N^q · ((q − 1) log N)^{c_0(q−1)}).

Our algorithm performs a number of iterations. We maintain a subgraph G′ ⊆ G; at the beginning of the algorithm, G′ = G. In the i-th iteration, we compute a subset S_i ⊆ V(G′) of vertices, and then update the graph G′ by deleting the vertices of S_i from it. The iterations are performed as long as |⋃_{i′} S_{i′}| < n/4.

In the i-th iteration, we consider the current graph G′, denoting |V(G′)| = n′. Note that, since we assume that |⋃_{i′<i} S_{i′}| < n/4, we get that n′ ≥ 3n/4, and in particular N^{q−1}/2 ≤ n′ ≤ N^q. We then apply Theorem 4.3 to graph G′. If the outcome is a subset S ⊆ V′ of at least 2n′/3 vertices, such that G′[S] is a ψ_q-expander, then we terminate the algorithm and return S. Notice that, since n′ ≥ 3n/4, and |S| ≥ 2n′/3, we are guaranteed that |S| ≥ n/2. Otherwise, we obtain a cut (X, Y) in G′ with |X|, |Y| ≥ Ω(ψ_{q−1} · n′/log n′) and Ψ_{G′}(X, Y) ≤ 1/100; we assume w.l.o.g. that |X| ≤ |Y|. We then set S_i = X, update the graph G′ by removing the vertices of S_i from it, and continue to the next iteration. If the algorithm does not terminate with a ψ_q-expander, then it terminates once |⋃_{i′} S_{i′}| ≥ n/4; we let i denote the number of iterations in this case. Since we are guaranteed that |⋃_{i′<i} S_{i′}| < n/4, while |S_i| ≤ n′/2 ≤ n/2, we get that n/4 ≤ |⋃_{i′=1}^{i} S_{i′}| ≤ 3n/4. Let A = ⋃_{i′=1}^{i} S_{i′}, and let B = V(G) \ A. From the above discussion, we are guaranteed that |A|, |B| ≥ n/4, and moreover, since the cut S_{i′} that we obtain in every iteration i′ has sparsity at most 1/100 in its current graph G′, it is easy to verify that |E_G(A, B)| ≤ |A|/100 ≤ n/100. We then return (A, B) as the algorithm's outcome.

Notice that the number of iterations in the algorithm is bounded by:

O(log n/ψ_{q−1}) = O((q log N)^{q−1} · log n) ≤ O((q log N)^{q+1}),

since n ≤ N^q. Therefore, the total running time of the algorithm is at most:

O(N^{q+1} · (q log N)^{q+O(1)}) + O(N(q log N)^{q+2}) · T(q − 1).

From the induction hypothesis, T(q − 1) = O(N^q · ((q − 1) log N)^{c_0(q−1)}). Assuming that q ≥ 1, and that c_0 is a large enough constant, we get that the running time is T(q) = O(N^{q+1} · (q log N)^{c_0 q}), as required.

In the remainder of this subsection we focus on the proof of Theorem 4.3.

Proof of Theorem 4.3
One of the main technical tools in the proof of the theorem is a composition of expanders, which we discuss next.
Suppose we are given a collection {G_1, …, G_h} of disjoint graphs, where for all 1 ≤ i ≤ h, the set V(G_i) of vertices, that is denoted by V_i, has cardinality at least N. Let H be another graph, whose vertex set is {v_1, …, v_h}. An N-composition of H with G_1, …, G_h is another graph G, whose vertex set is ⋃_{i=1}^{h} V_i, and whose edge set consists of two subsets: set E_1 = ⋃_{i=1}^{h} E(G_i), and another set E_2 of edges, defined as follows: for each edge e = (v_i, v_j) ∈ E(H), let M(e) be an arbitrary matching of cardinality N between vertices of V_i and vertices of V_j. Then E_2 = ⋃_{e∈E(H)} M(e). The following theorem shows that, if each of the graphs G_1, …, G_h is a ψ-expander, and graph H is a ψ′-expander, then the resulting graph G is also an expander, for an appropriately chosen expansion parameter.

Theorem 4.4.
Let G_1, …, G_h be a collection of h > 1 graphs, such that for each 1 ≤ i ≤ h, N ≤ |V(G_i)| ≤ γN, and G_i is a ψ-expander, for some N ≥ 1, γ ≥ 1, and 0 < ψ ≤ 1. Let H be another graph with vertex set {v_1, …, v_h}, such that H is a ψ′-expander, and let ∆ be the maximum vertex degree in H. Lastly, let G be a graph that is an N-composition of H with G_1, …, G_h. Then graph G is a ψ″-expander, for ψ″ = ψψ′/(16∆γ²).

Proof. For convenience, for all 1 ≤ i ≤ h, we denote V(G_i) by V_i. Let (A, B) be any partition of V(G). It is sufficient to prove that |E_G(A, B)| ≥ ψ″ · min{|A|, |B|}.

Consider any graph G_i, for 1 ≤ i ≤ h. We say that G_i is of type 1 if |V_i ∩ A| > (1 − 1/(2γ))|V_i|, and we say that it is of type 2 if |V_i ∩ B| > (1 − 1/(2γ))|V_i|. Notice that a graph G_i cannot belong to both types simultaneously, and it is possible that it does not belong to either type. Let N_1 be the number of type-1 graphs G_i, and let N_2 be the number of type-2 graphs. Assume w.l.o.g. that N_1 ≤ N_2. Let S ⊆ V(H) contain all vertices v_i, such that G_i is a type-1 graph, so |S| = N_1. Since graph H is a ψ′-expander, |E_H(S, V(H) \ S)| ≥ ψ′|S|.

We partition the set A of vertices into two subsets: set A′ contains all vertices that lie in type-1 graphs G_i, and set A″ contains all remaining vertices. Recall that graph G contains, for every edge e = (v_i, v_j) ∈ E_H(S, V(H) \ S), a collection M(e) of N edges, connecting vertices of V_i to vertices of V_j. Consider any such edge e = (v_i, v_j), with v_i ∈ S. Since |V_i ∩ A| > (1 − 1/(2γ))|V_i|, and |V_i| ≤ γN, we get that |V_i ∩ B| < |V_i|/(2γ) ≤ N/2. Therefore, at least N/2 of the edges of M(e) have one endpoint in A′; the other endpoint of each such edge must lie in A″ ∪ B. We conclude that |E_G(A′, A″ ∪ B)| ≥ (N/2) · |E_H(S, V(H) \ S)| ≥ ψ′N|S|/2. Since every graph G_i contains between N and γN vertices, we get that |A′| ≤ γN|S|, and so |E_G(A′, A″ ∪ B)| ≥ ψ′|A′|/(2γ). Since the maximum vertex degree in H is ∆, every vertex in A″ may be an endpoint of at most ∆ such edges.

We now consider two cases. First, if |A″| ≤ ψ′|A′|/(4∆γ), then |E_G(A′, A″)| ≤ ∆|A″| ≤ ψ′|A′|/(4γ). Therefore, |E_G(A, B)| ≥ |E_G(A′, B)| ≥ ψ′|A′|/(4γ) ≥ ψ′|A|/(8γ) ≥ ψ″|A|.

Lastly, assume that |A″| > ψ′|A′|/(4∆γ), so |A″| ≥ ψ′|A|/(8∆γ). Consider any graph G_i that is not a type-1 graph, so |V_i ∩ A| ≤ (1 − 1/(2γ))|V_i|. If |V_i ∩ A| ≤ |V_i|/2, then there are at least ψ|V_i ∩ A| edges of G_i in E_G(A, B). Otherwise, there are at least ψ|V_i ∩ B| edges of G_i in E_G(A, B). Since |V_i ∩ B| ≥ |V_i|/(2γ) ≥ |V_i ∩ A|/(2γ), the number of edges that G_i contributes to E_G(A, B) is at least ψ|V_i ∩ B| ≥ ψ|V_i ∩ A|/(2γ). We conclude that |E_G(A, B)| ≥ ψ|A″|/(2γ) ≥ ψψ′|A|/(16∆γ²) ≥ ψ″|A|.

We now provide an overview of the proof of Theorem 4.3, and set up some notation. In order to simplify the notation, we denote the input graph by G = (V, E), and we denote |V| = n and |E| = m; recall that |E| = O(n log n). Let Ñ′ = N^{q−1}/2, and let Ñ = ⌊n/Ñ′⌋, so Ñ ≤ N. Since N is an integral power of 2, Ñ′ is an even integer. Moreover, from our assumption that n ≥ N^{q−1}/2, we get that Ñ ≥ 1. We partition the set V of vertices into Ñ + 1 subsets V_1, …, V_Ñ, V_{Ñ+1}, where sets V_1, …, V_Ñ have cardinality exactly Ñ′ each, and the last set, that we denote by Z = V_{Ñ+1}, has cardinality less than Ñ′. We call the vertices in Z the extra vertices.

The algorithm consists of three steps. In the first step, we construct expanders H_1, …, H_Ñ, where for all 1 ≤ i ≤ Ñ, V(H_i) = V_i, that we attempt to embed into G. We will either succeed in embedding these expanders with a small congestion and a relatively small number of fake edges, or we will compute the desired cut (X, Y) in G. In the second step, we construct an expander H′ whose vertex set is {v_1, …, v_Ñ}, where we think of vertex v_i as representing the set V_i of vertices of G. We will attempt to embed graph H′ into G, with a small number of fake edges and low congestion, where every edge e = (v_i, v_j) of H′ is embedded into Ñ′ paths connecting vertices of V_i to vertices of V_j in G. If our algorithm fails to find such an embedding, then we will again produce the desired cut (X, Y) in G. If, over the course of the first two steps, the algorithm does not terminate with a cut (X, Y) in G, then we consider an expander H*, obtained by computing an Ñ′-composition of H_1, …, H_Ñ and of H′, and then adding the vertices of Z, together with a matching connecting every vertex of Z to some vertex of V_1 ∪ ⋯ ∪ V_Ñ, to the resulting graph. The algorithm from the first two steps has then computed an embedding of H* into G, with a relatively small number of fake edges. In our last step, we compute a large subset S of vertices of G such that G[S] is a ψ_q-expander, using Algorithm ExtractExpander from Lemma 2.9. We now proceed to describe each of the three steps in turn. Throughout the algorithm, we use a parameter z = ψ_{q−1}·n/(c log n), where c is a large enough constant, whose value will be set later.
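The N-composition defined before Theorem 4.4 can be sketched directly. The helper below is illustrative only; it matches the first N vertices of each side, which is one valid choice of the arbitrary matchings M(e).

```python
# Sketch of an N-composition: take disjoint graphs G_1..G_h, an "outer" graph H
# on vertices 0..h-1, and for each edge (i, j) of H add a matching of N edges
# between V_i and V_j. Assumes every group has at least N vertices.

def n_composition(groups, H_edges, N):
    """groups: list of (vertex list, edge list); returns (vertices, edges)."""
    vertices = [v for vs, _ in groups for v in vs]
    edges = [e for _, es in groups for e in es]       # the set E_1 = union of E(G_i)
    for i, j in H_edges:                              # the matching edges E_2
        edges += list(zip(groups[i][0][:N], groups[j][0][:N]))
    return vertices, edges

# Two 2-vertex graphs joined by one H-edge with N = 2: a full matching is added.
g1 = (['a1', 'a2'], [('a1', 'a2')])
g2 = (['b1', 'b2'], [('b1', 'b2')])
V, E = n_composition([g1, g2], [(0, 1)], N=2)
assert ('a1', 'b1') in E and ('a2', 'b2') in E
```

Theorem 4.4 then says that if the inner graphs and the outer graph are expanders, the composed graph is one as well, with expansion degraded only by the factor 16∆γ².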
The goal of this step is to construct a collection H = {H_1, …, H_Ñ} of expanders, where for 1 ≤ i ≤ Ñ, V(H_i) = V_i, and to compute a low-congestion embedding of all these expanders into G + F, where F is a small set of fake edges for G. In other words, if we let H be the graph obtained by taking a disjoint union of the graphs H_1, …, H_Ñ, and the set Z of isolated vertices, then we will attempt to compute an embedding of H into G. We will either find such an embedding, that uses relatively few fake edges, or we will return a cut (X, Y) of G with the required properties. We summarize this step in the following lemma.

Lemma 4.5.
There is a deterministic algorithm that either computes a cut (X, Y) in G with |X|, |Y| ≥ Ω(ψ_{q−1} · n/log n) and Ψ_G(X, Y) ≤ 1/100; or it constructs a collection H = {H_1, …, H_Ñ} of ψ̂-expanders, where for 1 ≤ i ≤ Ñ, V(H_i) = V_i, and ψ̂ = ψ_{q−1}/2, together with a set F of at most O(z log n) fake edges, and an embedding P of the graph H = (⋃_i H_i) ∪ Z into G + F, with congestion O(log² n), such that every vertex of G is incident to at most O(log n) edges of F. The running time of the algorithm is O(N^{q+1} · poly log n) + O(N · log n) · T(q − 1).

Proof. The construction of the graphs H_1, …, H_Ñ, and of their embedding into G, is done gradually, by running Ñ instances of the cut-matching game in parallel. Initially, for each 1 ≤ i ≤ Ñ, we let the graph H_i contain the set V_i of vertices and no edges. Throughout the algorithm, we denote by H = {H_1, …, H_Ñ} the current collection of the expanders we are constructing. We partition H into two subsets: set H′ of active graphs, and set H″ of inactive graphs. Initially, every graph H_i is active, so H′ = H and H″ = ∅. Throughout the algorithm, for every inactive graph H_i, we will maintain a subset S_i ⊆ V_i of at least Ñ′/2 vertices, such that H_i[S_i] is a ψ_{q−1}-expander. Throughout the algorithm, we also let H denote the graph obtained by taking the disjoint union of all graphs in H with the set Z of isolated vertices. We will maintain an embedding P of H into G throughout the algorithm. We will ensure that, throughout the algorithm, for all 1 ≤ i ≤ Ñ, the maximum vertex degree in each graph H_i is at most c_CMG log Ñ′.

At the beginning of the algorithm, for each 1 ≤ i ≤ Ñ, graph H_i contains the set V_i of vertices and no edges, so graph H consists of the set V of vertices and no edges.
The initial embedding is P = ∅, and every graph H_i is active.

As long as H′ ≠ ∅, we perform iterations, where the j-th iteration is executed as follows. We apply algorithm A(q − 1) to every graph H_i ∈ H′ separately. Observe that each such graph contains Ñ′ ≤ N^{q−1} vertices, and has maximum vertex degree at most c_CMG log Ñ′. For each such graph H_i, if the outcome is a subset S_i ⊆ V_i of vertices, such that |S_i| ≥ Ñ′/2 and H_i[S_i] is a ψ_{q−1}-expander, then we add H_i to the set H″ of inactive graphs, and store the set S_i of vertices with it. Let Ĥ ⊆ H′ be the collection of all remaining active graphs, so for each graph H_i ∈ Ĥ, the algorithm has computed a cut (A_i, B_i) with |A_i|, |B_i| ≥ Ñ′/4, and |E_{H_i}(A_i, B_i)| ≤ Ñ′/100. We assume w.l.o.g. that |A_i| ≤ |B_i|. Let (A′_i, B′_i) be any partition of V_i with |A′_i| = |B′_i|, such that A_i ⊆ A′_i. We treat the partition (A′_i, B′_i) as the move of the cut player in the cut-matching game corresponding to the graph H_i.

For convenience, we assume w.l.o.g. that Ĥ = {H_1, …, H_k}. In order to implement the response of the matching player, we apply Algorithm RouteOrCut from Corollary 3.7 to graph G, the sets A′_1, B′_1, …, A′_k, B′_k of vertices, and parameters ψ = 1/100 and z (recall that we have defined z = ψ_{q−1}·n/(c log n) for some large enough constant c). We now consider two cases. If Algorithm RouteOrCut from Corollary 3.7 returns a cut (X, Y) of G with |X|, |Y| ≥ z/2 and Ψ_G(X, Y) ≤ ψ, then we say that the current iteration terminates with a cut. In this case, we terminate the algorithm, and return (X, Y) as its outcome; it is immediate to verify that this cut has the required properties. In the second case, we obtain a partial routing (M′ = ⋃_{i=1}^{k} M′_i, P′) of the sets A′_1, B′_1, …, A′_k, B′_k of vertices, where |M′| ≥ kÑ′/2 − z (recall that for all i, |A′_i| = |V_i|/2 = Ñ′/2), whose congestion is at most O(∆ log n/ψ) = O(log n). We then say that the current iteration has terminated with a routing.

Consider now some index 1 ≤ i ≤ k, and let A″_i ⊆ A′_i and B″_i ⊆ B′_i be the subsets of vertices that do not participate in the matching M′_i. Let M″_i be an arbitrary perfect matching between the vertices of A″_i and the vertices of B″_i, and let F_i be a set of fake edges F_i = {(u, v) | (u, v) ∈ M″_i}. For every pair (u, v) ∈ M″_i, we embed the pair (u, v) into the corresponding fake edge (u, v) ∈ F_i. Let M_i^j = M′_i ∪ M″_i. We add the edges of M_i^j to graph H_i.

Denote M^j = ⋃_{i=1}^{k} M_i^j, and let F^j = ⋃_{i=1}^{k} F_i be the resulting set of fake edges; recall that |F^j| ≤ z. Let P^j be the embedding of all edges in M^j that is obtained from the partial routing P′, by adding the embeddings of all fake edges to it. Observe that we have now obtained an embedding P^j of all edges of M^j into G + F^j, with congestion O(log n). We add the paths of P^j to the embedding P of the current graph H, and continue to the next iteration.

Our algorithm can therefore be viewed as running Ñ parallel copies of the cut-matching game. From Theorem 2.5, the number of iterations is bounded by c_CMG log Ñ′, and so for every graph H_i, its maximum vertex degree is always bounded by c_CMG log Ñ′. The algorithm terminates once all graphs H_i become inactive. Recall that for each such graph H_i, we are given a subset S_i of its vertices, such that |S_i| ≥ |V_i|/2, and H_i[S_i] is a ψ_{q−1}-expander. We perform one last iteration, whose goal is to turn each graph H_i into an expander, by adding a new set of edges to it, while simultaneously embedding these edges into the graph G together with a small number of fake edges, or to find a cut (X, Y) as required. Let r − 1 denote the number of iterations that were performed until every graph H_i became inactive.

Last Iteration.
For each 1 ≤ i ≤ Ñ, we let B_i = S_i and A_i = V_i \ S_i, so that |A_i| ≤ |B_i| holds. We apply Algorithm RouteOrCut from Corollary 3.7 to graph G, the sets A_1, B_1, …, A_Ñ, B_Ñ of vertices, and parameters ψ = 1/100 and z. The remainder of the iteration is executed exactly as before. If Algorithm RouteOrCut from Corollary 3.7 returns a cut (X, Y) of G with |X|, |Y| ≥ z/4 and Ψ_G(X, Y) ≤ ψ, then we terminate the algorithm and return this cut. Otherwise, we obtain a partial routing (M′ = ⋃_{i=1}^{Ñ} M′_i, P′) of the sets A_1, B_1, …, A_Ñ, B_Ñ of vertices, where |M′| ≥ Σ_{i=1}^{Ñ} |A_i| − z, whose congestion is at most O(log n) as before.

Consider some index 1 ≤ i ≤ Ñ, and let A′_i ⊆ A_i and B′_i ⊆ B_i be the subsets of vertices that do not participate in the matching M′_i. Let M′′_i be an arbitrary matching, in which every vertex of A′_i is matched to some vertex of B′_i, and let F_i be a set of fake edges corresponding to this matching M′′_i, defined as before. For every pair e = (u, v) ∈ M′′_i, we embed the fake edge e into the path P(e) = (e). Let M^r_i = M′_i ∪ M′′_i. We add the edges of M^r_i to graph H_i.

Denote M^r = ⋃_{i=1}^{Ñ} M^r_i, and let F^r = ⋃_{i=1}^{Ñ} F_i be the resulting set of fake edges; as before, |F^r| ≤ z. Let P^r be the embedding of all edges in M^r that is obtained from the partial routing P′, by adding the embeddings of all fake edges to it. As before, we have obtained an embedding P^r of all edges of M^r into G + F^r, with congestion O(log n). We add the paths of P^r to the embedding P of the current graph H. Note that, from Observation 2.3, we are now guaranteed that every graph H_i ∈ H is a ψ_{q−1}/2-expander.

Recall that the congestion caused by each set P^j of paths is O(log n), and, since the number of iterations is O(log n), the embedding P causes congestion O(log^2 n). The total number of fake edges in F = ⋃_{j=1}^{r} F^j is O(z log n). Since each set F^j of fake edges is a matching, every vertex of G is incident to O(log n) fake edges.

We now analyze the running time of the algorithm. As observed before, the algorithm has O(log n) iterations. In every iteration, we apply algorithm A(q − 1) to Ñ = O(N) graphs. Additionally, we use Algorithm RouteOrCut from Corollary 3.7, whose running time is Õ(k|E(G)|/ψ + kn/ψ) = Õ(kn), where k is the number of vertex subsets. Since k ≤ |H| = Ñ ≤ N, this running time is bounded by Õ(Nn) = Õ(N^{q+1}). Therefore, the total running time of the algorithm is Õ(N^{q+1}) + O(N log n)·T(q − 1).

We use Algorithm
ConstructExpander from Theorem 2.4, in order to construct, in time O(Ñ), a ψ*-expander H′ on Ñ vertices, with ψ* = Ψ(H′) = Ω(1), such that the maximum vertex degree in H′ is at most 9. For convenience, we denote V(H′) = {v_1, …, v_Ñ}. The main part of this step is summarized in the following lemma.

Lemma 4.6. There is a deterministic algorithm, that either computes a cut (X, Y) in G with |X|, |Y| ≥ Ω(ψ_{q−1}·n/log^5 n) and Ψ_G(X, Y) ≤ 1/100; or it computes a collection F′ of at most 17z fake edges in G, and, for every edge e = (v_i, v_j) ∈ E(H′), a set P(e) of Ñ′ paths in G + F′, such that every path in P(e) connects a vertex of V_i to a vertex of V_j, and the endpoints of the paths in P(e) are disjoint. Moreover, every vertex of G is incident to at most 17 fake edges in F′, and every edge of G ∪ F′ participates in at most O(log n) paths in ⋃_{e ∈ E(H′)} P(e). The running time of the algorithm is Õ(N^{q+1}).

Proof. Using a standard greedy algorithm, and the fact that the maximum vertex degree in H′ is at most 9, we can partition the set E(H′) of edges into 17 matchings, M_1, …, M_17. We then perform up to 17 iterations; in each iteration i, we will either compute a small set F_i of fake edges for G, and the sets P(e) of paths for all edges e ∈ M_i, in graph G + F_i, or we will compute the cut (X, Y) in G with the required properties.

In order to execute the ith iteration, we consider the set M_i of edges of H′, and denote, for convenience, M_i = {e_1, …, e_{k_i}}. For each 1 ≤ j ≤ k_i, if e_j = (v_z, v_{z′}), then we define A_j = V_z and B_j = V_{z′}. Observe that |A_j| = |B_j| = Ñ′, and the resulting vertex sets A_1, B_1, …, A_{k_i}, B_{k_i} are all disjoint.

We apply Algorithm RouteOrCut from Corollary 3.7 to graph G, the sets A_1, B_1, …, A_{k_i}, B_{k_i} of vertices, and parameters ψ = 1/100 and z (as defined before, z = ψ_{q−1}·n/(c log^5 n)). If Algorithm RouteOrCut returns a cut (X, Y) of G with |X|, |Y| ≥ z/4 and Ψ_G(X, Y) ≤ ψ, then we terminate the algorithm and return this cut; it is easy to verify that cut (X, Y) has all required properties. In this case we say that the iteration terminates with a cut. Otherwise, we obtain a partial routing (M̂_i = ⋃_{e ∈ M_i} M̂(e), P̂_i) of the sets A_1, B_1, …, A_{k_i}, B_{k_i} of vertices, where |M̂_i| ≥ Σ_{j=1}^{k_i} |A_j| − z, whose congestion is at most O(∆ log n/ψ) = O(log n). In this case we say that the iteration terminates with a routing. Consider now some edge e_j ∈ M_i. Let A′_j ⊆ A_j, B′_j ⊆ B_j be the subsets of vertices that do not participate in the matching M̂(e_j). Let M̂′(e_j) be an arbitrary perfect matching between A′_j and B′_j, and let F_{i,j} be the corresponding set of fake edges for graph G (so for every edge e ∈ M̂′(e_j), we add an edge with the same endpoints to F_{i,j}). Finally, set M̂′′(e_j) = M̂(e_j) ∪ M̂′(e_j). Let F_i = ⋃_{j=1}^{k_i} F_{i,j}; recall that |F_i| ≤ z. Let P′_i be the set of paths routing the edges of F_i, where for each edge e ∈ F_i, the corresponding path P(e) ∈ P′_i consists of the edge e. Lastly, let P̂′′_i = P̂_i ∪ P′_i. Note that P̂′′_i is a routing of all edges in ⋃_{j} M̂′′(e_j) in graph G + F_i, that causes edge-congestion at most O(log n).

If any iteration of the algorithm terminated with a cut, then we terminate the algorithm and return the corresponding cut. We assume from now on that every iteration of the algorithm terminated with a routing. Setting F′ = ⋃_{i=1}^{17} F_i, we obtain the desired routing of the edges of H′ in graph G + F′, with congestion O(log n). Since, for every 1 ≤ i ≤ 17, the edges of F_i form a matching, every vertex of G is incident to at most 17 such edges.

Recall that the running time of Algorithm RouteOrCut from Corollary 3.7 is Õ(k|E(G)|/ψ + kn/ψ) = Õ(kn) = Õ(kN^q), where k is the number of pairs of sets that we need to route. Since k ≤ |V(H′)| ≤ Ñ ≤ O(N), and the number of iterations is at most 17, we get that the running time of the algorithm is Õ(N^{q+1}).

Finally, we need the following claim, in order to connect the set Z of extra vertices to the remaining vertices of G.

Claim 4.7.
There is a deterministic algorithm, that either computes a cut (X, Y) in G with |X|, |Y| ≥ Ω(ψ_{q−1}·n/log^5 n) and Ψ_G(X, Y) ≤ 1/100; or it computes a matching M connecting every vertex of Z to a distinct vertex of V(G) \ Z, a collection F′′ of at most z fake edges in G, and a set P′′ = {P(e) | e ∈ M} of paths in G + F′′, such that, for each edge e = (u, v) ∈ M, path P(e) connects u to v. Moreover, every vertex of G is incident to at most one fake edge in F′′, and every edge of G ∪ F′′ participates in at most O(log n) paths in P′′. The running time of the algorithm is Õ(N^q).

Proof. We apply Algorithm RouteOrCut from Corollary 3.7 to graph G, the sets A = Z, B = V(G) \ Z of vertices, and parameters ψ = 1/100 and z. If the outcome of Algorithm RouteOrCut is a cut (X, Y) of G with |X|, |Y| ≥ z/4 and Ψ_G(X, Y) ≤ ψ, then we return this cut; it is immediate to verify that cut (X, Y) has the required properties. Otherwise, we obtain a partial routing (M′, P′) of the sets A, B, with |M′| ≥ |Z| − z. The congestion of the routing is at most O(∆ log n/ψ) = O(log n). We let Z′ ⊆ Z be the set of all vertices of Z that do not participate in the matching M′, and we let M′′ be an arbitrary matching that matches every vertex of Z′ to a distinct vertex of V(G) \ Z, such that M = M′ ∪ M′′ is a matching; such a set M′′ exists since Z contains at most half the vertices of G. We let F′′ be a set of fake edges for G corresponding to the edges of M′′, so every edge e = (u, v) ∈ M′′ is also added to F′′. We let P(e) be the path that only consists of the edge e, and we treat P(e) as the embedding of e. We have now obtained a set F′′ of at most z fake edges, and every vertex of G is incident to at most one such fake edge. We also obtained an embedding P′′ = P′ ∪ {P(e) | e ∈ F′′} of M into G with congestion O(log n). The running time of Algorithm RouteOrCut is Õ(∆|E(G)|/ψ + n/ψ) = Õ(n) = Õ(N^q).

If the algorithm from Lemma 4.6 or the algorithm from Claim 4.7 produces a cut (X, Y) in G with |X|, |Y| ≥ Ω(ψ_{q−1}·n/log^5 n) and Ψ_G(X, Y) ≤ 1/100, then we terminate the algorithm and return this cut. Otherwise, we construct the graph H*: we start by letting H* be a disjoint union of the graphs H_1, …, H_Ñ constructed in the first step. Additionally, for every edge e = (v_i, v_j) ∈ E(H′), for every path P ∈ P(e), whose endpoints are x ∈ V_i and y ∈ V_j, we add the edge (x, y) to E(H*). It is immediate to verify that graph H* is an Ñ′-composition of H_1, …, H_Ñ and graph H′. Recall that, for all 1 ≤ i ≤ Ñ, graph H_i is a ψ_{q−1}/2-expander, and graph H′ is a ψ*-expander, for some ψ* = Ω(1). The maximum vertex degree in H′ is bounded by 9.
Therefore, from Theorem 4.4, graph H* is a ψ′-expander, for ψ′ = ψ_{q−1}·ψ*/O(log n) = Ω(ψ_{q−1}/log n). Note that the maximum vertex degree in H* is O(log n). Lastly, we add to graph H* the set Z of extra vertices as isolated vertices, and the matching M that was computed in Claim 4.7. Recall that, from Observation 2.3, graph H* is now a ψ′/2-expander, for ψ′ = Ω(ψ_{q−1}/log n); to simplify the notation, we say that H* is a ψ′-expander, adjusting the value of ψ′ accordingly.

Let F* = F ∪ F′ ∪ F′′ be the union of the sets of fake edges computed by the algorithms from Lemma 4.5, Lemma 4.6, and Claim 4.7. Recall that |F*| = O(z log n), where z = ψ_{q−1}·n/(c log^5 n) for some large enough constant c. We denote by ∆_G the maximum vertex degree of G + F*. Since the set F* of fake edges consists of O(log n) matchings, ∆_G = O(log n).

By combining the outcomes of the algorithms from Lemma 4.5, Lemma 4.6, and Claim 4.7, we obtain an embedding of H* into G + F* with congestion at most O(log^2 n). The maximum vertex degree in H*, which we denote by ∆_{H*}, is O(log n). Note that the running time of the algorithm so far is O(N^{q+1}·poly log n) + O(N·log n)·T(q − 1).

In this step, we apply Algorithm ExtractExpander from Lemma 2.9 to graphs G and H*, the set F* of fake edges, and the embedding of H* into G + F* with congestion at most η = O(log^2 n). We first need to verify that |F*| ≤ ψ′·n/(∆_G·η). Recall that ψ′ = Ω(ψ_{q−1}/log n), ∆_G = O(log n), and η = O(log^2 n). Therefore, ψ′·n/(∆_G·η) ≥ Ω(ψ_{q−1}·n/log^4 n), while |F*| ≤ O(log n)·ψ_{q−1}·n/(c log^5 n). Setting the constant c to be large enough, we can ensure that the inequality indeed holds.

Recall that Algorithm ExtractExpander from Lemma 2.9 computes a subgraph G′ ⊆ G, that is a ψ′′-expander, for ψ′′ ≥ ψ′/(∆_G·η) = Ω(ψ_{q−1}/log^4 n), as ψ′ = Ω(ψ_{q−1}/log n). Recall also that ψ_{q−1} = 1/((q − 1) log N)^{q−1}, and n ≤ N^q. Therefore:

ψ′′ ≥ Ω(1/(((q − 1) log N)^{q−1} · (q log N)^4)) ≥ 1/(q log N)^q = ψ_q.

Note that the number of vertices in G′ is at least n − O(|F*|·η/ψ′). Since |F*|·η/ψ′ ≤ O(z log^4 n/ψ_{q−1}), and z = ψ_{q−1}·n/(c log^5 n), letting c be a large enough constant, we can ensure that |V(G′)| ≥ n/3, as required.

The running time of Algorithm ExtractExpander is Õ(|E(G)|·∆_G·η/ψ′) = Õ(n/ψ_{q−1}) = Õ(N^q·(q log N)^q). By combining all three steps together, we obtain total running time: O(N^{q+1}·(q log N)^{q+O(1)}) + O(N·log n)·T(q − 1).

5 BalCutPrune
In this section we prove the following:
Theorem 5.1.
There is a universal constant c, and a deterministic algorithm, that, given an n-vertex m-edge graph G = (V, E), a parameter 0 < φ < 1, and another parameter r ≤ (log m)/c, returns a cut (A, B) in G with |E_G(A, B)| ≤ φ·Vol(G), such that: • either Vol_G(A), Vol_G(B) ≥ Vol(G)/3; or • Vol_G(A) ≥ (2/3)·Vol(G), and the graph G[A] has conductance φ′ ≥ φ/log^{O(r)} m. The running time of the algorithm is O(m^{1+O(1/r)}·(log m)^{O(r)}/φ). From the definition of the
BalCutPrune problem from Definition 1.1, this implies a slower versionof Theorem 1.2 when the conductance parameter φ is low: Corollary 5.2.
There is a deterministic algorithm, that, given a graph G with m edges, and parameters φ ∈ (0, 1), 1 ≤ r ≤ O(log m), and α = (log m)^{O(r)}, computes an α-approximate solution to instance (G, φ) of BalCutPrune in time O(m^{1+O(1/r)}·(log m)^{O(r)}/φ).

While the above algorithm can be significantly slower than the one from Theorem 1.2 when the conductance parameter φ is low, many of our applications only need to solve the BalCutPrune problem for relatively high values of φ, and so the algorithm from Theorem 5.1 is sufficient for them. In particular, we will use this algorithm in order to obtain fast deterministic approximation algorithms for max s-t flow, which will then in turn be used in order to obtain the full proof of Theorem 1.2. The remainder of this section is dedicated to the proof of Theorem 5.1. Two key ingredients in the proof are an extension of Theorem 1.6 to a higher sparsity regime, and a degree reduction procedure, which are discussed in the next two subsections, respectively.

5.1 Extension of Theorem 1.6 to Smaller Sparsity

In this subsection we prove the following lemma.
Lemma 5.3.
There is a deterministic algorithm, that, given an n-vertex graph G = (V, E), with maximum vertex degree ∆, parameters 0 < ψ < 1, z ≥ 1 and r ≥ 1, such that n^{1/r} ≥ N (where N is the constant from Theorem 1.6), returns one of the following: • either a cut (X, Y) in G with |X|, |Y| ≥ z/∆ and Ψ_G(X, Y) ≤ ψ; or • a graph H with V(H) = V(G), that is a ψ_r(n)-expander (for ψ_r(n) = 1/(log n)^{O(r)}), together with a set F of at most O(z log n) fake edges for G, and an embedding of H into G + F with congestion at most O(∆ log n/ψ), such that every vertex of G is incident to at most O(log n) edges of F. The running time of the algorithm is Õ(n^{1+O(1/r)}·(log n)^{O(r)} + n∆/ψ).

Proof. If the number of vertices in graph G is odd, then we add an additional new vertex v, and we connect it to an arbitrary vertex of G with a fake edge. For simplicity, the new number of vertices is still denoted by n.

Our algorithm runs the cut-matching game, as follows. We start with a graph H, whose vertex set is V, and whose edge set is empty, and then perform iterations. Throughout the algorithm, we will ensure that the maximum vertex degree in H is O(log n).

Iteration i is executed as follows. We apply Algorithm CutOrCertify from Theorem 1.6 to graph H. We now consider two cases. In the first case, the outcome is a cut (A_i, B_i) in H, with |A_i|, |B_i| ≥ n/4 and |E_H(A_i, B_i)| ≤ n/100. Let (A′_i, B′_i) be any partition of V with A_i ⊆ A′_i, B_i ⊆ B′_i, and |A′_i| = |B′_i|. We apply Algorithm RouteOrCut-1Pair from Theorem 3.8 to graph G, with the vertex sets A′_i, B′_i, and parameters z and ψ. If the outcome is a cut (X, Y) in G with |X|, |Y| ≥ z/∆ and Ψ_G(X, Y) ≤ ψ, then we terminate the algorithm and return this cut as its outcome. Otherwise, we obtain a partial routing (M_i, P_i) of the sets A′_i, B′_i, of value at least |A′_i| − z, that causes congestion at most 4∆/ψ.
Let A′′_i ⊆ A′_i, B′′_i ⊆ B′_i be the subsets of vertices that do not participate in the matching M_i. Let M′_i be an arbitrary perfect matching between A′′_i and B′′_i, and let F_i be a set of fake edges corresponding to the matching M′_i (so every edge in the matching becomes a fake edge). For every edge e ∈ F_i, we also let P(e) be a path consisting of only the fake edge e. Let M′′_i = M_i ∪ M′_i, and let P′_i = P_i ∪ {P(e) | e ∈ F_i}. Then M′′_i is a perfect matching between A′_i and B′_i, and P′_i is a routing of this matching in G ∪ F_i, with congestion at most 4∆/ψ. We add the edges of M′′_i to H, and continue to the next iteration.

Consider now the second case, where the outcome of Algorithm CutOrCertify from Theorem 1.6 is a subset S ⊆ V of at least n/2 vertices, such that Φ(H[S]) ≥ ψ_r(n). Let i* be the index of the current iteration. We then let B_{i*} = S and A_{i*} = V \ S; note that |A_{i*}| ≤ |B_{i*}| must hold. We again employ Algorithm RouteOrCut-1Pair from Theorem 3.8, with the vertex sets A_{i*}, B_{i*}, and parameters z and ψ. If the outcome is a cut (X, Y) in G with |X|, |Y| ≥ z/∆ and Ψ_G(X, Y) ≤ ψ, then we terminate the algorithm and return this cut as its outcome. Otherwise, we obtain a partial routing (M_{i*}, P_{i*}) of the sets A_{i*}, B_{i*}, of value at least |A_{i*}| − z, that causes congestion at most 4∆/ψ. As before, we let A′_{i*} ⊆ A_{i*}, B′_{i*} ⊆ B_{i*} be the subsets of vertices that do not participate in the matching M_{i*}. Let M′_{i*} be an arbitrary matching, that matches every vertex of A′_{i*} to some vertex of B′_{i*}, and let F_{i*} be a set of fake edges corresponding to the matching M′_{i*}. As before, for every edge e ∈ F_{i*}, we let P(e) be a path consisting of only the fake edge e. Let M′′_{i*} = M_{i*} ∪ M′_{i*}, and let P′_{i*} = P_{i*} ∪ {P(e) | e ∈ F_{i*}}.
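The completion step used in both cases above — pairing up the vertices left unmatched by the routing algorithm and turning each leftover pair into a fake edge — can be sketched as follows. This is an illustrative sketch only: `partial_matching` stands in for the matching M_i returned by RouteOrCut-1Pair, which we treat as a black box.

```python
# Complete a partial matching between two equal-size vertex sets with
# fake edges: the unmatched vertices on each side are paired up
# arbitrarily, and each leftover pair becomes a fake edge, routed by
# the one-edge path P(e) = (e) consisting of the fake edge itself.
def complete_with_fake_edges(a_side, b_side, partial_matching):
    """Return (full_matching, fake_edges) for sides a_side, b_side."""
    matched_a = {a for a, _ in partial_matching}
    matched_b = {b for _, b in partial_matching}
    unmatched_a = [a for a in a_side if a not in matched_a]
    unmatched_b = [b for b in b_side if b not in matched_b]
    fake_edges = list(zip(unmatched_a, unmatched_b))   # the set F_i
    full_matching = list(partial_matching) + fake_edges
    return full_matching, fake_edges
```

Since RouteOrCut-1Pair leaves at most z vertices of each side unmatched, the sketch produces at most z fake edges per iteration, matching the |F_i| ≤ z bound used in the proof.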
Then M′′_{i*} matches every vertex of A_{i*} to a distinct vertex of B_{i*}, and P′_{i*} is a routing of this matching in G ∪ F_{i*}, with congestion at most 4∆/ψ. We add the edges of M′′_{i*} to H, and terminate the algorithm.

Observe that, if the algorithm never terminates with a cut (X, Y) with |X|, |Y| ≥ z/∆ and Ψ_G(X, Y) ≤ ψ, then, from Observation 2.3, the final graph H is a ψ_r(n)/2-expander. If we set F = ⋃_{i=1}^{i*} F_i, together with an additional fake edge incident to v if the initial number of vertices in G was odd, and P = ⋃_{i=1}^{i*} P′_i, then P is an embedding of H into G + F. From Theorem 2.5, the number of iterations in the algorithm is bounded by O(log n). Since, for all i, edge set F_i is a matching, every vertex of G is incident to O(log n) edges of F. Since every set F_i contains at most z edges, |F| = O(z log n). Lastly, since every set P′_i of paths causes congestion O(∆/ψ), the paths in P cause congestion O(∆ log n/ψ). It now remains to bound the running time of the algorithm.

The algorithm performs O(log n) iterations. Each iteration requires running Algorithm CutOrCertify from Theorem 1.6, which takes time O(n^{1+O(1/r)}·(log n)^{O(r)}), and Algorithm RouteOrCut-1Pair from Theorem 3.8, which takes time Õ(n∆/ψ). Therefore, the total running time of the algorithm is Õ(n^{1+O(1/r)}·(log n)^{O(r)} + n∆/ψ).

5.2 Degree Reduction

Assume that we are given a graph G = (V, E) with |V| = n and |E| = m, that we view as an input to the BalCutPrune problem. In this subsection we show a deterministic algorithm, that we call
ReduceDegree, that has running time O(m), and transforms G into a bounded-degree graph Ĝ. We also provide an algorithm that transforms any sparse balanced cut in a subgraph of Ĝ into a "nice" cut, that corresponds to a sparse balanced cut in a subgraph of G.

We first describe Algorithm ReduceDegree for constructing the graph Ĝ. For convenience, we denote V = {v_1, …, v_n}. For every vertex v_i ∈ V, we let deg(v_i) denote the degree of v_i in G, and we let {e_1(v_i), …, e_{deg(v_i)}(v_i)} be the set of edges incident to v_i, indexed in an arbitrary order. For every vertex v_i ∈ V, we use Algorithm ConstructExpander from Theorem 2.4 to construct a graph H_i on a set V_i of deg(v_i) vertices, that is an α-expander, for some constant α, such that the maximum vertex degree in H_i is at most 9. Recall that the running time of the algorithm for constructing H_i is O(deg(v_i)). We denote the vertices of H_i by V_i = {u_1(v_i), …, u_{deg(v_i)}(v_i)}.

In order to obtain the final graph Ĝ, we start with a disjoint union of all graphs in {H_i | v_i ∈ V}. All edges lying in such graphs H_i are called type-1 edges. Additionally, we add to Ĝ a collection of type-2 edges, defined as follows. Consider any edge e = (v, v′) ∈ E, and assume that e = e_j(v) = e_{j′}(v′) (that is, e is the jth edge incident to v and it is the j′th edge incident to v′). We then let ê be the edge (u_j(v), u_{j′}(v′)). For every edge e ∈ E, we add the corresponding new edge ê to graph Ĝ as a type-2 edge. This concludes the construction of the graph Ĝ, that we denote by Ĝ = (V̂, Ê). Note that the maximum vertex degree in Ĝ is at most 10, and |V̂| = 2m. Moreover, the running time of the algorithm for constructing the graph Ĝ is O(m).

We say that a subset S ⊆ V̂ of vertices is canonical iff for every vertex v_i ∈ V, either V_i ⊆ S, or V_i ∩ S = ∅.
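The construction of Ĝ described above can be sketched as follows. ConstructExpander from Theorem 2.4 is a black box here; as a stand-in, the sketch wires each gadget H_i as a simple cycle on its deg(v_i) "port" vertices, which preserves the degree bound but not the expansion guarantee of the real gadget.

```python
# Sketch of ReduceDegree: replace each vertex of G by a gadget on
# deg(v_i) port vertices (type-1 edges), and connect the two port
# copies of each original edge by a type-2 edge.
def reduce_degree(n, edges):
    """edges: list of pairs (i, j) over vertices 0..n-1.
    Returns (vertices, type1, type2); a vertex (i, k) is the k-th
    edge slot (port) of the original vertex i."""
    ports = [[] for _ in range(n)]          # port lists, one per vertex
    type2 = []
    for (i, j) in edges:
        pi, pj = (i, len(ports[i])), (j, len(ports[j]))
        ports[i].append(pi)
        ports[j].append(pj)
        type2.append((pi, pj))              # the copy e-hat of edge e
    type1 = []
    for plist in ports:                     # gadget H_i on deg(v_i) ports
        d = len(plist)
        # Placeholder gadget: a cycle (a single edge when d = 2,
        # nothing when d <= 1).  The paper uses a constant-degree
        # expander here instead.
        for k in range(d if d > 2 else d - 1):
            type1.append((plist[k], plist[(k + 1) % d]))
    vertices = [p for plist in ports for p in plist]
    return vertices, type1, type2
```

As in the text, the resulting graph has exactly 2m vertices, one per edge endpoint, and with the placeholder cycle gadget every vertex has degree at most 3 (at most 10 with the real degree-9 expander gadget).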
Similarly, we say that a cut (X, Y) in a subgraph of Ĝ is canonical iff each of X, Y is a canonical subset of V̂. The following lemma allows us to convert an arbitrary sparse balanced cut in a subgraph of Ĝ into a canonical one.

Lemma 5.4.
Let α > 0 be the constant from Theorem 2.4. There is a deterministic algorithm, that we call MakeCanonical, that, given a subgraph Ĝ′ ⊆ Ĝ, where V(Ĝ′) is a canonical vertex set, and a cut (A, B) in Ĝ′, computes, in time O(m), a canonical cut (A′, B′) in Ĝ′, such that |A′| ≥ |A|/2, |B′| ≥ |B|/2, and moreover, if |E_Ĝ(A, B)| ≤ ψ·min{|A|, |B|}, for ψ ≤ α/2, then |E_Ĝ(A′, B′)| ≤ O(|E_Ĝ(A, B)|).

Proof. We start with the cut (Â, B̂) = (A, B) in graph Ĝ′ and then gradually modify it, by processing the vertices of V(G) one-by-one. When a vertex v_i is processed, if V_i ∩ V(Ĝ′) ≠ ∅, we move all vertices of V_i to either Â or B̂. Once every vertex of V(G) is processed, we obtain the final cut (A′, B′), that will serve as the output of the algorithm.

Consider an iteration when some vertex v_i ∈ V(G) is processed, and assume that V_i ⊆ V(Ĝ′). Denote A_i = A ∩ V_i and B_i = B ∩ V_i. If |A_i| ≥ |B_i|, then we move all vertices of B_i to Â, and otherwise we move all vertices of A_i to B̂. Assume w.l.o.g. that the latter happened (the other case is symmetric). Note that the only new edges that are added to the cut E_Ĝ(Â, B̂) are type-2 edges that are incident to the vertices of A_i. The number of such edges is bounded by |A_i|. The edges of E_{H_i}(A_i, B_i) belonged to the cut E_Ĝ(Â, B̂) before the current iteration, but they do not belong to the cut at the end of the iteration. Since H_i is an α-expander, we get that |A_i| ≤ |E_{H_i}(A_i, B_i)|/α. Therefore, the increase in |E_Ĝ(Â, B̂)| due to the current iteration is bounded by |E_{H_i}(A_i, B_i)|/α. We charge the edges of E_{H_i}(A_i, B_i) for this increase; note that these edges will never be charged again. The algorithm terminates once all vertices of V(G) are processed. Let (A′, B′) denote the final cut (Â, B̂).
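The per-gadget rounding at the heart of MakeCanonical — move the minority side of each gadget V_i to the majority side, breaking ties toward Â — can be sketched as follows. This is an illustrative sketch of the rounding rule only; the charging argument above is what bounds the resulting increase in the cut size.

```python
# Sketch of the MakeCanonical rounding rule: round an arbitrary cut
# of G-hat to a canonical one by majority vote inside each gadget.
# `side` maps each gadget vertex to 'A' or 'B'.
def make_canonical(gadgets, side):
    """gadgets: list of vertex lists V_i.  Returns the rounded map."""
    rounded = dict(side)
    for V_i in gadgets:
        a_part = [u for u in V_i if side[u] == 'A']
        # Ties are broken toward A, matching the |A_i| >= |B_i| rule.
        winner = 'A' if 2 * len(a_part) >= len(V_i) else 'B'
        for u in V_i:
            rounded[u] = winner
    return rounded
```

After the rounding, every gadget lies entirely on one side, so the returned cut is canonical by construction.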
From the above discussion, we are guaranteed that |E_Ĝ(A′, B′)| ≤ |E_Ĝ(A, B)| + Σ_{v_i ∈ V(G)} |E_{H_i}(A_i, B_i)|/α ≤ O(|E_Ĝ(A, B)|).

Next, we claim that |A′| ≥ |A|/2 and |B′| ≥ |B|/2. We prove this for |A′|; the proof for |B′| is symmetric. Indeed, assume otherwise. Let V′ ⊆ V be the set of all vertices v_i, such that, when the algorithm processed v_i, the vertices of A_i were moved from Â to B̂, and let n_i = |A_i|. Then Σ_{v_i ∈ V′} n_i > |A|/2 must hold. On the other hand, for every v_i ∈ V′, |E_{H_i}(A_i, B_i)| ≥ α|A_i| = α·n_i must hold. Therefore, graph H_i contributed at least α·n_i edges to the original cut E_Ĝ(A, B). Since we are guaranteed that |E_Ĝ(A, B)| ≤ ψ·|A|, we get that Σ_{v_i ∈ V′} α·n_i ≤ ψ·|A|, and so Σ_{v_i ∈ V′} n_i ≤ ψ·|A|/α ≤ |A|/2, since we have assumed that ψ ≤ α/2. But this contradicts the fact that we established before, that Σ_{v_i ∈ V′} n_i > |A|/2.

We prove the following theorem, from which Theorem 5.1 immediately follows.
Theorem 5.5.
There is a universal constant N′, and a deterministic algorithm, that, given an n-vertex m-edge graph G = (V, E), a parameter 0 < φ < 1, and another parameter r ≥ 1, such that m^{1/r} ≥ N′, returns a cut (A, B) in G with |E_G(A, B)| ≤ φ·Vol(G), such that: • either Vol_G(A), Vol_G(B) ≥ Vol(G)/3; or • Vol_G(A) ≥ (2/3)·Vol(G), and the graph G[A] has conductance φ′ ≥ φ/log^{O(r)} m. The running time of the algorithm is O(m^{1+O(1/r)}·(log m)^{O(r)}/φ).

In order to complete the proof of Theorem 5.1, we let c be a large enough constant, so that m^{c/log m} = 2^c ≥ N′ holds; then, for every r ≤ (log m)/c, we have m^{1/r} ≥ N′. We then apply the algorithm from Theorem 5.5 to the input graph G and the parameter r. In the remainder of this section we focus on the proof of Theorem 5.5.

Proof of Theorem 5.5.
We denote by ψ_r(n) = 1/log^{O(r)} n the parameter from Theorem 1.6 (that is, when Algorithm CutOrCertify from Theorem 1.6 returns a set S of at least n/2 vertices, Φ(G[S]) ≥ ψ_r(n) holds). Throughout the proof, we use two parameters: ψ = φ/ĉ, and z = φm/(ĉ·(log m)^{ĉr}), where ĉ is a large constant to be set later. We also set N′ = 4N, where N is the universal constant from Theorem 1.6.

We start by using Algorithm ReduceDegree described in Section 5.2, in order to construct, in time O(m), a graph Ĝ whose maximum vertex degree is bounded by 10, and |V(Ĝ)| = 2m. Denote V(G) = {v_1, …, v_n}. Recall that graph Ĝ is constructed from graph G by replacing each vertex v_i with an α-expander H_i on deg_G(v_i) vertices, where α = Θ(1). For convenience, we denote the set of vertices of H_i by V_i. Therefore, V(Ĝ) is a union of the sets V_1, …, V_n of vertices. Consider now some subset S of vertices of Ĝ. Recall that we say that S is a canonical vertex set iff for every 1 ≤ i ≤ n, either V_i ⊆ S or V_i ∩ S = ∅ holds.

The algorithm performs a number of iterations. We maintain a subgraph Ĝ′ ⊆ Ĝ; at the beginning of the algorithm, Ĝ′ = Ĝ. In the ith iteration, we compute a canonical subset S_i ⊆ V(Ĝ′) of vertices, and then update the graph Ĝ′, by deleting the vertices of S_i from it. The iterations are performed as long as |⋃_{i′≤i} S_{i′}| < |V(Ĝ)|/3.

In order to execute the ith iteration, we consider the current graph Ĝ′, denoting |V(Ĝ′)| = n′. Note that, since we assume that |⋃_{i′<i} S_{i′}| < |V(Ĝ)|/3, we get that n′ ≥ 2|V(Ĝ)|/3. From our choice of parameter N′, we are guaranteed that (n′)^{1/r} ≥ N. We can now apply Lemma 5.3 to graph Ĝ′, with the parameters r, ψ and z. Recall that the maximum vertex degree in Ĝ′ is ∆ ≤ 10.

Assume first that the outcome is a cut (X, Y) in Ĝ′ with |X|, |Y| ≥ z/∆ ≥ z/10 and Ψ_Ĝ′(X, Y) ≤ ψ. We say that the iteration terminates with a cut in this case. By setting ĉ to be a large enough constant, we can ensure that ψ ≤ α/2, where α is the constant from Theorem 2.4. We use the algorithm MakeCanonical from Lemma 5.4 to compute, in time O(m), a canonical partition (X′, Y′) of V(Ĝ′), such that |X′|, |Y′| ≥ Ω(z), and |E_Ĝ′(X′, Y′)| ≤ O(|E_Ĝ′(X, Y)|). Assume w.l.o.g. that |X′| ≤ |Y′|. We are then guaranteed that |X′| ≥ Ω(z), and that for some constant µ, |E_Ĝ′(X′, Y′)| ≤ µψ|X′|, or equivalently, Ψ_Ĝ′(X′, Y′) ≤ µψ. We set S_i = X′, delete the vertices of S_i from Ĝ′, and continue to the next iteration. Observe that the set V(Ĝ′) of vertices remains canonical.

Otherwise, the outcome of Lemma 5.3 is a graph H with V(H) = V(Ĝ′), that is a ψ_r(n′)-expander, together with a set F of at most O(z log n) fake edges for Ĝ′, and an embedding of H into Ĝ′ + F with congestion at most O(log m/ψ), such that every vertex of Ĝ′ is incident to at most O(log m) edges of F. In this case we say that the iteration terminates with an expander. If an iteration terminates with an expander, then the whole algorithm terminates.

Let i denote the index of the last iteration of the algorithm that terminated with a cut. Recall that one of the following two cases must hold:
Recall that | S i ′
3. Let ( X i , Y i ) be the cut that was returned by Lemma 5.3,and let ( X ′ i , Y ′ i ) be the canonical cut that we obtained in ˆ G ′ , so that S i = X ′ i . Recall that | X ′ i | ≤ | Y ′ i | . It follows that | Y ′ i | ≥ | V ( ˆ G ) | /
3, and | S i ′ ≤ i S i ′ | ≥ | V ( ˆ G ) | /
3. Since A ′ = S i ′ ≤ i S i ′ and B ′ = Y ′ i , we get that | A ′ | , | B ′ | ≥ | V ( ˆ G ) | /
3. From Equation (1), | E ˆ G ( A ′ , B ′ ) | ≤ ψµ | V ( ˆ G ) | .Lastly, we obtain a cut ( A, B ) of V ( G ) as follows. For every vertex v i ∈ V ( G ), if V i ⊆ A ′ , thenwe add v i to A , and otherwise we add it to B . Since, for every 1 ≤ i ≤ n , | V i | = deg G ( v i ), itis easy to verify that Vol( A ) = | A ′ | ≥ | V ( ˆ G ) | / G ) /
3, and similarly Vol( B ) ≥ Vol( G ) / | E G ( A, B ) | = | E ˆ G ( A ′ , B ′ ) | ≤ µψ | V ( ˆ G ) | = µψ · Vol( G ). Since ψ = φ/ ˆ c , by letting ˆ c be a large enough constant, we can ensure that | E G ( A, B ) | ≤ φ · Vol( G ).We return the cut ( A, B ) as the outcome of the algorithm.Assume now that Case 2 happened. Let ˆ G i +1 denote the graph ˆ G ′ that served as input tothe last iteration. Recall that in this last iteration, the algorithm from Lemma 5.3 returned agraph H with V ( H ) = V ( ˆ G i +1 ), that is a ψ r ( n ′ )-expander, where n ′ = | V ( ˆ G i +1 | ≥ | V ( ˆ G ) | / F of at most O ( z log n ) fake edges for ˆ G i +1 , and an embedding of H intoˆ G i +1 + F with congestion at most O (log m/ψ ), such that every vertex of ˆ G i +1 is incident toat most O (log m ) edges of F . Let ˆ G ′′ be the graph obtained from ˆ G i +1 , by adding the edgesof F to it. Then graph H embeds into ˆ G ′′ with congestion at most O (log m/ψ ), and so, fromLemma 2.8, graph ˆ G ′′ is a ψ ′ -expander, for ψ ′ = Ω( ψ r ( n ′ ) · ψ/ log m ) = Ω (cid:16) φ/ (log m ) O ( r ) (cid:17) .Recall that all vertex sets S , . . . , S i are canonical; therefore, the set V ( ˆ G ′′ ) of vertices is alsocanonical. Let G ′′ be the graph obtained from ˆ G ′′ as follows. For every vertex v j ∈ V ( G ), if V j ⊆ V ( ˆ G ′′ ), then we contract the vertices of V j into a single vertex v j , and remove all self loops.Let A ′ = V ( G ′′ ). It is easy to verify that G ′′ can be obtained from G [ A ′ ], by adding at most O ( z log m ) edges to it – the edges corresponding to the fake edges in F . Moreover, Vol( A ′ ) = | V ( ˆ G ) | − | S | ≥ | V ( ˆ G ) | / ≥ G ) /
3. It is also easy to verify that G ′′ has conductance atleast ψ ′ . Indeed, consider any cut ( X, Y ) in G ′′ . This cut naturally defines a cut ( X ′ , Y ′ ) inˆ G ′′ : for every vertex v i ∈ A ′ , if v i ∈ X , then we add all vertices of V i to X ′ , and otherwisewe add them to Y ′ . Then | X ′ | = Vol G ( X ) ≥ Vol G ′′ ( X ), | Y ′ | = Vol G ( Y ) ≥ Vol G ′′ ( Y ), and | E ˆ G ′′ ( X ′ , Y ′ ) | = | E G ′′ ( X, Y ) | . Since graph ˆ G ′′ is a ψ ′ -expander, we get that | E G ′′ ( X, Y ) | ≥| E ˆ G ′′ ( X ′ , Y ′ ) | ≥ ψ ′ min {| X ′ | , | Y ′ |} ≥ ψ ′ min { Vol G ′′ ( X ) , Vol G ′′ ( Y ) } .In our last step, we get rid of the fake edges in G ′′ by applying Theorem 2.6 to it, with conduc-tance parameter ψ ′ , and the set F of fake edges; (recall that | F | = O ( z log n ), and z = φm ˆ c (log m ) ˆ cr c ). In order to be able to use the theorem, we need to verifythat | F | ≤ ψ ′ · | E ( G ′′ ) | /
10. Since ψ ′ = Ω (cid:16) φ/ (log m ) O ( r ) (cid:17) , and | E ( G ′′ ) | ≥ Ω( m ), by letting ˆ c be a large enough constant, we can ensure that this condition holds. Applying Theorem 2.6 tograph G ′′ , with conductance parameter ψ ′ , and the set F of fake edges, we obtain a subgraph G ′ ⊆ G ′′ \ F , of conductance at least ψ ′ / (cid:16) φ/ (log m ) O ( r ) (cid:17) . Moreover, if we denote by A = V ( G ′ ) and ˜ B = V ( G ′′ ) \ V ( G ′ ), then | E G ′′ ( A, ˜ B ) | ≤ k and:Vol G ′′ ( ˜ B ) ≤ k/ψ ′ ≤ O (cid:16) k · (log m ) O ( r ) /φ (cid:17) , (2)where k = | F | = O ( z log n ) is the number of the fake edges. The running time of the algorithmfrom Theorem 2.6 is ˜ O ( m/ψ ′ ) = O (cid:16) m (log m ) O ( r ) /φ (cid:17) . Let B = V ( G ) \ A . The algorithm thenreturns the cut ( A, B ). We now verify that the cut has all required properties. We have alreadyestablished that G [ A ] has conductance at least φ/ (log m ) O ( r ) .Let ˜ S = B \ ˜ B . Then equivalently, we can obtain the set ˜ S ⊆ V ( G ) of vertices from the set S ⊆ V ( ˆ G ) of vertices (recall that S = S ii ′ =1 S i ′ ) by adding to ˜ S every vertex v j ∈ V ( G ) with V j ⊆ S . Since, from Equation (1), | E ˆ G ( S, S ) | ≤ µψ | V ( ˆ G ) | for some constant µ , it is easy toverify that: | E G ( ˜ S, V ( G ) \ ˜ S ) | ≤ µψ · Vol( G ) = µφ · Vol( G ) / ˆ c. (3)From the above discussion, we are also guaranteed that | E G ′′ ( A, ˜ B ) | ≤ | F | ≤ O ( z log n ). Since z = φm ˆ c (log m ) ˆ cr , by letting ˆ c be a large enough constant, we can ensure that | E G ′′ ( A, ˜ B ) | <φm/ ≤ φ Vol( G ) / | E G ( A, B ) | ≤ | E G ( A, ˜ B ) | + | E G ( ˜ S, V ( G ) \ ˜ S ) | ≤ φ · Vol( G ) /
100 + φµ · Vol( G ) / ˆ c ≤ φ · Vol( G ) , if ˆ c is chosen to be a large enough constant.Lastly, it remains to verify that Vol G ( A ) ≥ · Vol( G ). Recall that | ˆ V ( G i +1 ) | ≥ | V ( ˆ G ) | / ≥ G ) /
3. Therefore, if we denote by U = V ( G ′′ ) = V ( G ) \ ˜ S , then Vol G ( U ) ≥ G ) /
3. Re-call that A = U \ ˜ B , and, from Equation (2), Vol G ′′ ( ˜ B ) ≤ O (cid:16) k · (log m ) O ( r ) /φ (cid:17) ≤ O (cid:16) z · (log m ) O ( r ) /φ (cid:17) .Moreover, Vol G ( ˜ B ) ≤ Vol G ′′ ( ˜ B ) + E G ( ˜ S, ˜ B ) ≤ Vol G ′′ ( ˜ B ) + E G ( U, ˜ S ). From Equation (3), we getthat: Vol G ( ˜ B ) ≤ O (cid:16) z · (log m ) O ( r ) /φ (cid:17) + O ( µφ Vol( G ) / ˆ c ) . Since z = φm ˆ c (log m ) ˆ cr , by letting ˆ c be a large enough constant, we can ensure that Vol G ( ˜ B ) ≤ Vol( G ) /
12. We then get that Vol G ( A ) ≥ | ˆ V ( G i +1 ) | − Vol G ( ˜ B ) ≥ G ) / − Vol( G ) / ≥ G ) / G from graph G is O ( m ). Recall that, if an iteration terminates with a cut, then we deletefrom ˆ G ′ a set of at least Ω( z ) vertices. Therefore, the total number of iterations is bounded by O ( | V ( ˆ G ) | /z ) = O ( m/z ) = O (cid:16) (log m ) O ( r ) /φ (cid:17) . The running time of each iteration is:39 O (cid:16) m O (1 /r ) · (log m ) O ( r ) + m/ψ (cid:17) = ˜ O (cid:16) m O (1 /r ) · (log m ) O ( r ) + m/φ (cid:17) . At the end of each iteration, we employ Lemma 5.4 to turn the resulting cut into a canonical one,in time O ( m ). Therefore, the total running time of the iterations is ˜ O (cid:16) m O (1 /r ) · (log m ) O ( r ) /φ (cid:17) .Lastly, if Case 2 happens, we employ the algorithm from Theorem 2.6, whose running time,as discussed above, is ˜ O (cid:16) m (log m ) O ( r ) /φ (cid:17) . Altogether, the running time of the algorithm is˜ O (cid:16) m O (1 /r ) · (log m ) O ( r ) /φ (cid:17) . BalCutPrune
In this section, we provide applications of the algorithm for BalCutPrune from Corollary 5.2. Some of the results are summarized in Tables 1 and 2. We use the Ô(·) notation to hide subpolynomial lower-order terms. Formally, Ô(f(n)) = O(f(n)^{1+o(1)}); equivalently, for any constant θ > 0, we have Ô(f(n)) ≤ O(f(n)^{1+θ}). This notation can be viewed as a direct generalization of the Õ(·) notation for hiding logarithmic factors, and behaves in a similar manner.

An (ε, φ)-expander decomposition of a graph G = (V, E) is a partition P = {V_1, …, V_k} of the set V of vertices, such that for all 1 ≤ i ≤ k, the conductance of graph G[V_i] is at least φ, and Σ_{i=1}^{k} δ_G(V_i) ≤ ε · Vol(G). This decomposition was introduced in [KVV04, GR99] and has been used as a key tool in many applications, including the ones mentioned in this paper.

Spielman and Teng [ST04] provided the first near-linear time algorithm, whose running time is Õ(m/poly(ε)), for computing a weak variant of the (ε, ε/poly(log n))-expander decomposition, where, instead of ensuring that each resulting graph G[V_i] has high conductance, the guarantee is that for each such set V_i there is some larger set W_i of vertices, with V_i ⊆ W_i, such that Φ(G[W_i]) ≥ ε/poly(log n). This caveat was first removed in [NS17], who showed an algorithm for computing an (ε, ε/n^{o(1)})-expander decomposition in time O(m^{1+o(1)}) (we note that [Wul17] provided similar results with somewhat weaker parameters). More recently, [SW19] provided an algorithm for computing an (ε, ε/poly(log n))-expander decomposition in time Õ(m/ε). Unfortunately, all algorithms mentioned above are randomized.

The only previous subquadratic-time deterministic algorithm for computing an expander decomposition is implicit in [GLN+19], and computes an (ε, ε/n^{o(1)})-expander decomposition in time O(m^{1.5+o(1)}). We provide the first deterministic algorithm for computing an expander decomposition in almost-linear time:

Corollary 6.1.
There is a deterministic algorithm that, given a graph G = (V, E) with m edges, and parameters ε ∈ (0, 1) and 1 ≤ r ≤ O(log m), computes an (ε, φ)-expander decomposition of G with φ = Ω(ε/(log m)^{O(r)}), in time O(m^{1+O(1/r)} · (log m)^{O(r)}/ε).

Proof. We maintain a collection H of disjoint sub-graphs of G that we call clusters, which is partitioned into two subsets: a set H_A of active clusters, and a set H_I of inactive clusters. We ensure that for each inactive cluster H ∈ H_I, Φ(H) ≥ φ. We also maintain a set E′ of “deleted” edges, that are not contained in any cluster in H. At the beginning of the algorithm, we let H = H_A = {G}, H_I = ∅, and E′ = ∅. The algorithm proceeds as long as H_A ≠ ∅, and consists of iterations. For convenience, we denote α = (log m)^r, and we set φ = ε/(cα · log m), for some large enough constant c, so that φ = Ω(ε/(log m)^{O(r)}) holds.

In every iteration, we apply the algorithm from Corollary 5.2 to every graph H ∈ H_A, with the same parameters α, r, and φ. Consider the cut (A, B) in H that the algorithm returns, with |E_H(A, B)| ≤ αφ · Vol(H) ≤ ε · Vol(H)/(c log m). We add the edges of E_H(A, B) to set E′. If Vol_H(A), Vol_H(B) ≥ Vol(H)/3, then we replace H with H[A] and H[B] in H and in H_A. Otherwise, we are guaranteed that Vol_H(A) ≥ Vol(H)/2, and graph H[A] has conductance at least φ. Then we remove H from H and H_A, add H[A] to H and H_I, and add H[B] to H and H_A. When the algorithm terminates, H_A = ∅, and so every graph in H has conductance at least φ.

Notice that in every iteration, the maximum volume of a graph in H_A must decrease by a constant factor. Therefore, the number of iterations is bounded by O(log m). It is easy to verify that the number of edges added to set E′ in every iteration is at most ε · Vol(G)/(c log m). Therefore, by letting c be a large enough constant, we can ensure that |E′| ≤ ε · Vol(G). The output of the algorithm is the partition P = {V(H) | H ∈ H} of V. From the above discussion, we obtain a valid (ε, φ)-expander decomposition, for φ = Ω(ε/(log m)^{O(r)}).

It remains to analyze the running time of the algorithm. The running time of a single iteration is bounded by O(m^{1+O(1/r)} · (log m)^{O(r)}/φ) = O(m^{1+O(1/r)} · (log m)^{O(r)}/ε). Since the total number of iterations is bounded by O(log m), the total running time of the algorithm is O(m^{1+O(1/r)} · (log m)^{O(r)}/ε).

We provide another algorithm for computing an expander decomposition, whose running time no longer depends on ε, in Section 7 (see Corollary 7.7).

In this section we provide a deterministic algorithm for dynamic Minimum Spanning Forest (
MSF) with n^{o(1)} worst-case update time.

Corollary 6.2. There is a deterministic algorithm that, given an n-vertex graph G undergoing edge insertions and deletions, maintains a minimum spanning forest of G with n^{o(1)} worst-case update time.

By implementing the link-cut tree data structure [ST83] on top of the minimum spanning forest, this algorithm immediately implies a deterministic algorithm for Dynamic Connectivity with the same update time and O(log n) query time for answering connectivity queries between pairs of vertices, proving Theorem 1.5. Thus, we resolve the longstanding open problem of improving the O(√n) worst-case update time of the classical algorithm by Frederickson [Fre85, EGIN97]. The previous best deterministic algorithm for Dynamic Connectivity, due to [KKPT16], has worst-case update time O(√(n(log log n)²/log n)). Below, we prove Corollary 6.2.

Reduction to Expander Decomposition. From now on, we write NSW to refer to [NSW17]. The algorithm by NSW can be viewed as a reduction to expander decomposition, as follows. For any γ >
1, suppose that, given an n-vertex graph G with maximum degree 3, we can compute an (ε, φ)-expander decomposition with ε = 1/γ and φ = 1/γ², in n · poly(γ) time. Then, NSW show that there is an algorithm for maintaining a minimum spanning forest on a graph with at most n vertices with worst-case update time

t_u(n) = Õ(π(γ) poly(γ)) + O(log n) · t_u(O(n/γ)),  (4)

where π(γ) is a function such that π(γ) = n^{o(1)} as long as γ = n^{o(1)}. This follows from the proof of Lemma 9.28 of NSW. Solving this recursion, we obtain

t_u(n) = O(π(γ) poly(γ)) · O(log n)^{O(log_γ n)}.  (5)

This reduction is deterministic (after a slight modification which we will describe later). Observe that, for any γ with γ = ω(polylog(n)) and γ = n^{o(1)}, we have t_u(n) = n^{o(1)}.

In NSW (Lemma 8.7), they show a randomized algorithm for computing a (1/γ, 1/γ²)-expander decomposition of a bounded-degree graph with running time n · poly(γ), where γ = n^{O(log log n/√log n)}. The above reduction then implies a randomized dynamic minimum spanning forest algorithm with n^{o(1)} update time. We can immediately derandomize this algorithm using Corollary 6.1, as follows:

Lemma 6.3 (Deterministic Version of Lemma 8.7 of [NSW17] for Bounded-degree Graphs). There is a deterministic algorithm A that, given an n-vertex graph G = (V, E) with maximum degree 3, and a parameter α > 0, computes an (αγ, α)-expander decomposition of G in O(nγ/α) time, where γ = 2^{O(log^{2/3} n · log log n)}.

Proof. Let r = log^{1/3} n and ε = cα(log n)^{O(r)}, for a large enough constant c. The algorithm with parameter ε from Corollary 6.1 returns a partition Q = {V_1, …, V_k} of V, such that for all 1 ≤ i ≤ k, Φ(G[V_i]) ≥ Ω(ε/(log n)^{O(r)}) ≥ Ω(cα) ≥ α, if we assume that c is a large enough constant. Moreover, the number of edges whose endpoints lie in different sets of the partition is at most O(εn) = O(αn(log n)^{O(log^{1/3} n)}) ≤ αγn, since γ = 2^{O(log^{2/3} n · log log n)}. The running time of the algorithm from Corollary 6.1 is O(n^{1+O(1/r)}(log n)^{O(r)}/ε) = O(nγ/α), since γ = 2^{O(log^{2/3} n · log log n)}.

By plugging the above algorithm with α = 1/γ² into the reduction of NSW, we obtain a deterministic dynamic minimum spanning forest algorithm with n^{o(1)} update time.

Overview of the Reduction.
In the rest of this section, we explain at a high level how the above reduction by NSW works and, in particular, why Equation (4) holds. We also describe the slight modification of the reduction needed so that there is no randomized component in it.

Let G be a weighted n-vertex graph undergoing edge insertions and deletions, and let MSF(G) denote the minimum spanning forest of G. There are two high-level steps. The first step is to maintain a sketch graph H satisfying two properties: (1) MSF(H) = MSF(G), and (2) H is a subgraph of G containing only n + k edges, where k = o(n). The second step is to maintain the minimum spanning forest MSF(H) of H.

Below, we sketch how NSW implement this strategy in the special case when G is initially an expander, and then sketch how they generalize the algorithm.

We emphasize that the problem is non-trivial even when we have a promise that the underlying graph is always an expander throughout the updates. Indeed, if our goal is only maintaining connectivity of an expander, then the problem becomes trivial, as an expander must be connected. However, suppose we want to maintain a spanning forest (not necessarily minimum) and a tree edge is deleted. Then there is no known simple deterministic method to find a replacement edge and update the forest accordingly, even if the graph is an expander. (On the other hand, if we only need a randomized algorithm against an adaptive adversary, there is a simple algorithm based on random sampling, as shown in [NS17].)

Moreover, even if we want to maintain only connectivity of a general graph, the NSW algorithm still needs a subroutine for maintaining a spanning forest on expanders. So, to explain the NSW algorithm, we need to explain how to maintain a spanning forest in an expander. As the algorithm for minimum spanning forest is not much more complicated, we give the overview for maintaining minimum spanning forests below. Using a standard reduction, we will assume that G has maximum degree 3 and that MSF(G) is unique.

Special case: Using Expander Pruning. Suppose that G = (V, E) is a (1/γ)-expander for some γ = n^{o(1)}. At the preprocessing step, NSW simply set the initial sketch graph H = MSF(G). Then, given a sequence of edge updates to G, they employ the dynamic expander pruning algorithm (Theorem 5.1 of NSW) that maintains a pruned set P ⊂ V such that:

• P = ∅ initially, and vertices only join P and are never removed,
• for some P′ ⊆ P, G[V − P′] is connected, and
• P can be updated in π = π(γ) worst-case time, such that π(γ) = n^{o(1)} as long as γ = n^{o(1)}. In particular, |P| ≤ i · π after the i-th update.

At any time, let I be the set of inserted edges and D be the set of deleted edges. They maintain H = H₀ ∪ E_G(P, V) ∪ I \ D, where H₀ denotes the initial spanning forest MSF(G). That is, H contains all edges of the original minimum spanning forest, all edges incident to the pruned set P, and all newly inserted edges, and we exclude all deleted edges from H. Because G[V − P′] is connected, it is not hard to see that H contains all edges of the current MSF(G), and so MSF(H) = MSF(G). Therefore, when G is initially an expander, the task of maintaining the sketch graph H amounts to maintaining the pruned set P. Moreover, if the length of the update sequence is at most T = O(n/(γπ)), then the number of edges in H is n + k, where k = Tπ = O(n/γ), which is sublinear in n, as desired.

Next, the goal is to maintain MSF(H). As H has at most n + k edges, NSW observe that the contraction technique of [HdLT01] allows them to recursively reduce the problem to graphs with O(k) vertices. In slightly more detail, observe that a single edge update in the original graph G corresponds to at most O(π) edge insertions and one edge deletion in H, since P only grows by at most π vertices per step.
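To make the bookkeeping concrete, here is a toy sketch (ours, not the NSW data structure) of how the sketch graph H = H₀ ∪ E_G(P, V) ∪ I \ D can be maintained, writing H₀ for the initial forest MSF(G); the pruned set P is assumed to be supplied by some pruning oracle, and all names are illustrative.

```python
# Toy illustration of sketch-graph maintenance (not the NSW data structure).
# Edges are tuples (u, v); the pruned set P is fed in by an assumed oracle.

class SketchGraph:
    def __init__(self, graph_edges, msf_edges):
        self.graph = set(graph_edges)   # current edge set of G
        self.h0 = set(msf_edges)        # H0: initial minimum spanning forest
        self.pruned = set()             # P: pruned vertex set (only grows)
        self.inserted = set()           # I: edges inserted since preprocessing
        self.deleted = set()            # D: edges deleted since preprocessing

    def insert_edge(self, e):
        self.graph.add(e)
        self.inserted.add(e)
        self.deleted.discard(e)

    def delete_edge(self, e):
        self.graph.discard(e)
        self.deleted.add(e)
        self.inserted.discard(e)

    def prune(self, vertices):
        # vertices only ever join P; they are never removed
        self.pruned |= set(vertices)

    def sketch(self):
        # H = H0 ∪ E_G(P, V) ∪ I, minus all deleted edges
        incident = {e for e in self.graph
                    if e[0] in self.pruned or e[1] in self.pruned}
        return (self.h0 | incident | self.inserted) - self.deleted

# usage: path 1-2-3-4; delete (2,3), insert (1,4), prune vertex 3
s = SketchGraph(graph_edges={(1, 2), (2, 3), (3, 4)},
                msf_edges={(1, 2), (2, 3), (3, 4)})
s.delete_edge((2, 3))
s.insert_edge((1, 4))
s.prune([3])
h = s.sketch()
assert (2, 3) not in h   # deleted edges never appear in H
assert (1, 4) in h       # inserted edges appear in H
assert (3, 4) in h       # edges incident to the pruned set appear in H
```

The point of the exercise is that maintaining H reduces entirely to maintaining P: each update contributes at most O(π) new pruned vertices, hence O(π) new edges of H, in a degree-3 graph.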
(In fact, G[V − P′] has conductance at least 1/n^{o(1)}, but NSW do not exploit that.) Via the contraction technique, each update to H corresponds to O(log n) recursive calls to smaller graphs of size O(k). That is, MSF(H) can be maintained with update time Õ(π) + O(log n) · t_u(O(k)) per update to the original graph G. The details can be found in Section 7 of NSW.

To summarize, suppose that G is a (1/γ)-expander. Then each of the T = O(n/(γπ)) updates can be handled in time at most Õ(π) + O(log n) · t_u(O(n/γ)). If this were true even for an arbitrary n-vertex graph G, then we would have a recursive algorithm with worst-case update time

t_u(n) = Õ(t_pre/T) + Õ(π) + O(log n) · t_u(O(n/γ)),

where t_pre denotes the preprocessing time. The first term above follows from the fact that we need to restart the data structure after every T updates. Note that this does not make the update time amortized, because the time required for restarting can be distributed using the standard building-in-the-background technique. If the preprocessing time is t_pre = O(n · poly(γ)), then the recursion implies that

t_u(n) = O(π poly(γ)) · O(log n)^{O(log_γ n)},

which is the same as Equation (5), as desired.

General case: Using MSF Decomposition.
The analysis above oversimplifies the NSW algorithm, because G might not be an expander. The key tool that allows NSW to handle general graphs is called the MSF decomposition (Theorem 8.3 of NSW). The MSF decomposition is an intricate hierarchical decomposition of a graph, tailored to the dynamic minimum spanning forest problem. It is a combination of three kinds of graph partitioning: (1) the expander decomposition, (2) a partitioning of edges into groups sorted by the edge weights, and (3) the M-clustering (introduced in [Wul17]), which partitions MSF(G) into small subtrees. See Section 8 of NSW for details. For us, the only important point is that the MSF decomposition calls the (1/γ, 1/γ²)-expander decomposition of bounded-degree graphs as a subroutine, and if the expander decomposition runs in O(n · poly(γ)) time, then so does the MSF decomposition.

The strategy of the algorithm remains the same: first maintain a sketch graph H, which is a very sparse graph with MSF(H) = MSF(G), and then maintain the minimum spanning forest MSF(H) of H. Given a general weighted graph G, the NSW algorithm proceeds as follows. First, they preprocess the graph G by (mainly) applying the MSF decomposition, using n · poly(γ) time. This decomposition defines the initial sketch graph H. The precise definition of the sketch graph H is complicated and is omitted here. The important point for us is that the initial number of edges in H directly exploits the guarantee of the (1/γ, 1/γ²)-expander decomposition. More precisely, they have |E(H)| ≤ n + k, where k = O(n/γ). (In Theorem 8.3 there are actually other parameters d, α, s_low, s_high; the NSW algorithm sets α = 1/γ², d = γ, s_low = γ, s_high = n/γ (see NSW, page 30, below Theorem 8.3). That is, all properties of Theorem 8.3 are dictated by the parameter γ.)

The sketch graph H grows in a similar way as described in the case where the graph is an expander. That is, a single edge update in G corresponds to at most O(π poly(γ)) edge insertions and O(1) edge deletions in H. This bound follows from a careful definition of H and a strong guarantee of the MSF decomposition. (See Section 9.1 of NSW for the definition of H, and Theorem 8.3 of NSW for the precise guarantee of the MSF decomposition.) For each update in G, the time for updating H is

Õ(π poly(γ)) + O(1) · t_u(O(n/γ)).

Note that, if the update sequence has length T = O(n/(π poly(γ))), then we can guarantee that H always has at most n + O(n/γ) edges.

Given that we can maintain the sketch graph H, the NSW algorithm maintains MSF(H) in the same way described above for the case when we know that the graph is an expander. That is, NSW apply the contraction technique of [HdLT01] to recursively solve the problem on smaller graphs of size O(n/γ). In the end, the update time can be written as

t_u(n) = Õ(t_pre/T) + Õ(π poly(γ)) + O(log n) · t_u(O(n/γ)),

where t_pre = n · poly(γ) and T = O(n/(π poly(γ))). This implies Equation (4), as desired. It remains to point out the randomized components of this algorithm, and to show how to derandomize them.

Derandomization.
The NSW algorithm has only two randomized components. The first randomized component is the MSF decomposition algorithm from Theorem 8.3 in Section 8 of NSW. The only source of randomization in Theorem 8.3 comes from the algorithm for computing the expander decomposition (Lemma 8.7 in NSW). By replacing this algorithm with Corollary 6.1, we obtain a deterministic implementation of Theorem 8.3.

The second randomized component is Theorem 6.1 from Section 6 in NSW, which is an extension of the dynamic expander pruning algorithm from Theorem 5.1 in NSW. The algorithm from Theorem 5.1 is deterministic, but needs to assume that its input graph has high conductance. Unfortunately, the algorithm for computing the MSF decomposition from Theorem 8.3 is randomized, and only ensures that the resulting sub-graphs have high conductance with high probability. Theorem 6.1 is an extension of Theorem 5.1 that allows it to work even if the input graph has low conductance. Since the new deterministic algorithm from Corollary 6.1 guarantees that the sub-graphs obtained in the MSF decomposition have high conductance, we no longer need Theorem 6.1, and the algorithm from Theorem 5.1, which is deterministic, is now sufficient.

To summarize, we only need to modify the NSW algorithm as follows: (1) bypass Theorem 6.1 of NSW and directly apply Theorem 5.1 of NSW for expander pruning, and (2) replace the randomized expander decomposition algorithm from Lemma 8.7 of NSW with the deterministic version described in Lemma 6.3.
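As a quick numeric sanity check (ours, not from the source) that the recursion in Equation (5) really yields an n^{o(1)} update time: taking, for illustration, γ ≈ 2^{(log n)^{2/3}} (the γ of Lemma 6.3, ignoring the log log factor), the bound O(log n)^{O(log_γ n)} equals n^{c(n)} for an exponent c(n) that tends to 0 as n grows.

```python
# Illustrative check that O(log n)^{log_gamma n} = n^{o(1)} when
# gamma = 2^{(log n)^{2/3}}. Constants hidden by O(.) are ignored.
import math

def update_time_exponent(log2_n):
    """Return c such that (log n)^{log_gamma n} = n^c (base-2 logs)."""
    log2_gamma = log2_n ** (2.0 / 3.0)   # gamma = 2^{(log n)^{2/3}}
    depth = log2_n / log2_gamma          # log_gamma n recursion levels
    log2_t = depth * math.log2(log2_n)   # log2 of (log n)^{depth}
    return log2_t / log2_n

# the exponent of n strictly decreases as n grows, i.e., t_u(n) = n^{o(1)}
exps = [update_time_exponent(2.0 ** k) for k in (10, 15, 20, 25)]
assert all(a > b for a, b in zip(exps, exps[1:]))
```

The exponent works out to (log log n)/(log n)^{2/3}, which indeed vanishes; the same computation with NSW's randomized choice of γ gives a comparable decay.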
Our deterministic algorithm for computing expander decompositions from Corollary 6.1 immediately implies a deterministic algorithm for the original application of expander decompositions: constructing spectral sparsifiers [ST11]. Suppose we are given an undirected weighted n-vertex graph G = (V, E, w) (possibly with self-loops). The Laplacian L_G of G is an n × n matrix whose entries are defined as follows: L_G(u, v) = 0 if u ≠ v and (u, v) ∉ E; L_G(u, v) = −w_{uv} if u ≠ v and (u, v) ∈ E; and L_G(u, u) = Σ_{(u,u′)∈E: u′≠u} w_{uu′}.

We say that a graph H is an α-approximate spectral sparsifier for G iff for all x ∈ R^n:

(1/α) · xᵀL_G x ≤ xᵀL_H x ≤ α · xᵀL_G x.

All previous deterministic algorithms for graph sparsification, including those computing cut sparsifiers, exploit the explicit potential function-based approach of Batson, Spielman and Srivastava [BSS12]. All previous algorithms that achieve faster running times either perform random sampling [SS11], or use random projections in order to estimate the importance of edges [ALO15]. We provide the first deterministic, almost-linear-time algorithm for computing a spectral sparsifier of a weighted graph. We emphasize that, although all algorithms from previous sections are designed for unweighted graphs, the fact that spectral sparsifiers are “decomposable” allows us to easily reduce the problem on weighted graphs to the one on unweighted graphs.

Corollary 6.4.
There is a deterministic algorithm, that we call SpectralSparsify, that, given an undirected n-node m-edge graph G = (V, E, w) with integral edge weights w bounded by U, and a parameter 1 ≤ r ≤ O(log m), computes a (log m)^{O(r)}-approximate spectral sparsifier H for G, with |E(H)| ≤ O(n log n log U), in time O(m^{1+O(1/r)} · (log m)^{O(r)} log U).

Proof. We first assume that G is unweighted. We compute a (1/4, φ)-expander decomposition P = {V_1, V_2, …, V_k} of G, for φ = 1/(log m)^{O(r)}, using the algorithm from Corollary 6.1. Let Ê denote the set of all edges e ∈ E(G) whose endpoints lie in different sets of the partition P. If Ê ≠ ∅, then we continue the expander decomposition recursively on G[Ê]. Notice that the depth of the recursion is bounded by O(log m). When this process terminates, we obtain a collection {G_1, …, G_z} of sub-graphs of G that are disjoint in their edges, such that ⋃_{j=1}^{z} E(G_j) = E(G). Moreover, we are guaranteed that, for all 1 ≤ j ≤ z, graph G_j has conductance at least φ = 1/(log m)^{O(r)}. It is now enough to compute a spectral sparsifier for each of the resulting graphs G_1, …, G_z separately.

We can now assume that we are given a graph G whose conductance is at least φ = 1/(log m)^{O(r)}, and our goal is to construct a spectral sparsifier for G. In order to do so, we first approximate G by a “product demand graph” D, that was defined in [KLP+16], and then use the algorithm ConstructExpander from Theorem 2.4 in order to sparsify D.

Definition 6.5 (Definition G.13, [KLP+16]). Given a vector d ∈ (R_{>0})^n, its corresponding product demand graph H(d) is a complete weighted graph on n vertices with self-loops, where for every pair i, j of vertices, the weight w_{ij} = d_i · d_j.

Given an n-node edge-weighted graph G = (V, E, w), let deg_G ∈ Z^n be the vector of weighted degrees of the vertices (including self-loops), so that for all j ∈ V, the j-th entry of deg_G is deg_G(j) = Σ_{i∈V} w_{i,j}.
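As a concrete toy illustration of Definition 6.5 and of the degree-preserving scaling used in the next step of the proof, the following sketch (our code; function names are illustrative, not from [KLP+16]) builds H(deg_G) for a small graph and checks that D = H(deg_G)/Vol(G) has exactly the same weighted degrees as G.

```python
# Illustrative sketch: the product demand graph H(d) of Definition 6.5, and
# the scaled graph D = H(deg_G)/Vol(G). Exact rationals keep the check exact.
from fractions import Fraction

def product_demand_graph(d):
    """Complete weighted graph with self-loops on {0,...,n-1}; w_ij = d_i * d_j."""
    n = len(d)
    return {(i, j): d[i] * d[j] for i in range(n) for j in range(i, n)}

def weighted_degrees(weights, n):
    """deg(j) = sum_{i in V} w_{i,j}; a self-loop (j, j) is counted once."""
    deg = [Fraction(0)] * n
    for (i, j), w in weights.items():
        deg[i] += w
        if i != j:
            deg[j] += w
    return deg

# toy graph G: a path 0-1-2 with edge weights 1 and 2
G = {(0, 1): Fraction(1), (1, 2): Fraction(2)}
deg_G = weighted_degrees(G, 3)            # degree vector [1, 3, 2]
vol_G = sum(deg_G)                        # Vol(G) = 6
H = product_demand_graph(deg_G)
D = {e: w / vol_G for e, w in H.items()}  # D = H(deg_G) / Vol(G)

assert weighted_degrees(D, 3) == deg_G    # deg_D = deg_G, as claimed below
```

This is exactly the property the proof relies on: in H(d) the weighted degree of j is d_j · Σ_i d_i, so dividing every weight by Vol(G) = Σ_i deg_G(i) restores the original degrees.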
Given an input graph G, we construct the product demand graph D = H(deg_G)/Vol(G). It is immediate to verify that the weighted degree vectors of D and G are equal, that is, deg_D = deg_G.

Next, we need to extend the notion of conductance to weighted graphs with self-loops. Consider a weighted graph H = (V′, E′, w′) (that may have self-loops), and let S ⊆ V′ be a cut in H. We let δ_H(S) = Σ_{(u,v)∈E′: u∈S, v∉S} w′_{u,v}, and we let Vol_H(S) = Σ_{v∈S} Σ_{u∈V′} w′_{u,v}. The weighted conductance of the cut S in H is then δ_H(S)/min{Vol_H(S), Vol_H(V′ \ S)}, and the weighted conductance of H is the minimum weighted conductance of any cut in H. We need the following observation:

Observation 6.6.
The weighted conductance of graph D is at least 1/2.

Proof. Consider any cut S in D. Observe that, from our construction, δ_D(S) = Vol_G(S) · Vol_G(V \ S)/Vol(G). It is also easy to see that Vol_D(S) = Vol_G(S). Assume without loss of generality that Vol_D(S) ≤ Vol_D(V \ S), so Vol_D(V \ S) ≥ Vol(G)/2. Then the weighted conductance of the cut S is:

δ_D(S)/Vol_D(S) = (Vol_G(S) · Vol_G(V \ S))/(Vol(G) · Vol_G(S)) = Vol_G(V \ S)/Vol(G) ≥ 1/2.

In the following lemma, we show that D is a spectral sparsifier for G.

Lemma 6.7.
Let D and G be two undirected weighted n-vertex graphs with V(D) = V(G), such that deg_D = deg_G. Assume further that Φ(D), Φ(G) ≥ φ for some conductance threshold φ. Then for any real vector x ∈ R^n:

(φ²/4) · xᵀL_G x ≤ xᵀL_D x ≤ (4/φ²) · xᵀL_G x.

Proof. The normalized Laplacian L̂_H of a weighted graph H is defined as W_H^{−1/2} L_H W_H^{−1/2}, where L_H is the Laplacian of H and W_H is the diagonal weighted-degree matrix, in which, for every vertex v of H, (W_H)_{vv} = deg_H(v).

Let L̂_D and L̂_G be the normalized Laplacians of D and G, respectively. It is well known that the eigenvalues of a normalized Laplacian lie between 0 and 2. Also, observe that, for any graph H, L_H · 1⃗ = 0, and therefore L̂_G (deg_G)^{1/2} = L̂_D (deg_G)^{1/2} = 0. That is, (deg_G)^{1/2} is in the kernel of both L̂_G and L̂_D.

Let λ be the second smallest eigenvalue of L̂_G. Then for any vector x′ ⊥ (deg_G)^{1/2}, we have:

(λ/2) · x′ᵀL̂_D x′ ≤ λ‖x′‖² ≤ x′ᵀL̂_G x′,

since the largest eigenvalue of L̂_D is at most 2. This implies that, for every vector x ∈ R^n, xᵀL̂_G x ≥ (λ/2) · xᵀL̂_D x holds. Indeed, we can write x = x′ + c(deg_G)^{1/2}, where x′ ⊥ (deg_G)^{1/2} and c is a scalar. This gives:

xᵀL̂_G x = (x′ + c(deg_G)^{1/2})ᵀ L̂_G (x′ + c(deg_G)^{1/2}) = x′ᵀL̂_G x′ ≥ (λ/2) · x′ᵀL̂_D x′ = (λ/2) · (x′ + c(deg_G)^{1/2})ᵀ L̂_D (x′ + c(deg_G)^{1/2}) = (λ/2) · xᵀL̂_D x,

where the middle equalities use the fact that (deg_G)^{1/2} lies in the kernels of L̂_G and L̂_D, and the last equality uses the fact that deg_G = deg_D. By Cheeger’s inequality [Alo86], we have λ ≥ Φ(G)²/2 ≥ φ²/2. Therefore, for any vector x ∈ R^n:

xᵀL̂_G x ≥ (φ²/4) · xᵀL̂_D x.  (6)

We can now conclude that, for any vector x ∈ R^n:

xᵀL_G x = xᵀ W_G^{1/2} L̂_G W_G^{1/2} x ≥ (φ²/4) · xᵀ W_G^{1/2} L̂_D W_G^{1/2} x = (φ²/4) · xᵀ W_G^{1/2} W_D^{−1/2} L_D W_D^{−1/2} W_G^{1/2} x = (φ²/4) · xᵀL_D x,

where the first inequality follows by applying Equation (6) to the vector x′ = W_G^{1/2} x, and the last equality follows from the fact that deg_G = deg_D. The proof that xᵀL_D x ≥ (φ²/4) · xᵀL_G x is symmetric.

Using Lemma 6.7 with φ = 1/(log m)^{O(r)} implies that D is a (log m)^{O(r)}-approximate spectral sparsifier of G. Finally, a spectral sparsifier for graph D can be constructed in nearly linear time using the following lemma.

Lemma 6.8 (Lemma G.15, [KLP+16]). There exists a deterministic algorithm that, given any demand vector d ∈ R^n, computes, in time O(nε^{−2}), a graph K with O(nε^{−2}) edges, such that K is an e^{ε}-approximate spectral sparsifier of H(d).

By letting ε = 2 and d = deg_D in Lemma 6.8, we obtain a 100-approximate spectral sparsifier for graph D (after scaling K by 1/Vol(G)), which is in turn a (log m)^{O(r)}-approximate spectral sparsifier for graph G. By combining the spectral sparsifiers that we have computed for all sub-graphs of the original input graph G, we obtain a (log m)^{O(r)}-approximate spectral sparsifier of the original graph G. The total number of edges in the sparsifier is O(n log n), as every level of the recursion contributes O(n) edges.

We now analyze the running time of the algorithm. Since the depth of the recursion is O(log m), running Corollary 6.1 takes O(m^{1+O(1/r)} · (log m)^{O(r)}) time in total. Sparsifying the resulting expanders takes O(m polylog(m)) time.
Therefore, the overall running time is bounded by O (cid:16) m O (1 /r ) · (log m ) O ( r ) (cid:17) .For the general (weighted) case, it suffices to decompose the graph by the binary representationsof the edge weights and sum the results up: For every edge e ∈ E ( G ), let b e be the binaryrepresentation of the weight w e . For all 1 ≤ i ≤ ⌈ log(max e w e ) ⌉ , we construct an unweightedgraph G ( i ) , whose vertex set is V , and edge set contains every edge e ∈ E ( G ), such that the i thbit of b e is 1. Since w e ≤ U for every e ∈ E ( G ), there are at most ⌈ log U ⌉ such G ( i ) s. By thealgorithm for the unweighted case, we compute (log m ) O ( r ) -approximate spectral sparsifiers foreach G ( i ) . The desired (log m ) O ( r ) -approximate spectral sparsifier for G is P ⌈ log(max e w e ) ⌉ i =1 i G ( i ) .This sparsifier contains P ⌈ log(max e w e ) ⌉ i =1 | E ( G ( i ) ) | = O ( n log n log U ) edges. The total running timeis O (cid:16) m O (1 /r ) · (log m ) O ( r ) log U (cid:17) . The fastest previous deterministic Laplacian solver, due to Spielman and Teng [ST03], hasrunning time ˜ O (cid:16) m . log ǫ (cid:17) . All faster solvers with near-linear running time are based onrandomized spectral sparsifiers (e.g. [ST14]) or are inherently randomized [KS16]. By applyingthe deterministic algorithm for computing spectral sparsifiers from Corollary 6.4, we immediatelyobtain deterministic Laplacian solvers with almost linear running time.Formally stating such results requires the definition of errors, which are based on matrix norms.For any matrix A , an A -norm of a vector x is defined by k x k A = √ x ⊤ A x . Let A † denote theMoore-Penrose pseudoinverse of A , which is the matrix with the same nullspace as A that actsas the inverse of A on its image. Corollary 6.9.
There is a deterministic algorithm that, given an n × n Laplacian L with m non-zero entries and a vector b ∈ R^n, computes a vector x such that ‖x − L†b‖_L ≤ ǫ·‖L†b‖_L, in time Ô(m log(1/ǫ)).

This result follows because spectral sparsifiers are the only randomized components in the Spielman-Teng Laplacian solvers [ST14]. Although Spielman and Teng employ (1+ǫ)-approximate spectral sparsifiers in their solvers, by paying an n^{o(1)} factor in the running time, one can show that exactly the same approach works even if we use the n^{o(1)}-approximate spectral sparsifiers from Corollary 6.4.

There are several graph algorithms [Mad10b, Mad16, CMSV17], based on the interior point method, that need to iteratively solve a sequence of Laplacian systems. In those algorithms, solving Laplacians is the only randomized subroutine. Therefore, the Ô(m^{3/2} log W) bound of interior point methods for graph-structured matrices, by Daitch and Spielman [DS08], becomes deterministic. This immediately implies deterministic Ô(m^{3/2} log W)-time algorithms for:

• maximum flow in directed graphs with m edges and edge capacities up to W; and
• minimum-cost and loss generalized flows in directed graphs with m edges, edge capacities in [0, W], and edge costs in [−W, W].

(See [DS08] for a discussion of the history of these problems.)
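At toy scale, the solver interface of Corollary 6.9 (solve Lx = b for a Laplacian L, with b orthogonal to the all-ones kernel) can be mimicked by any black-box method; the sketch below uses plain conjugate gradient, not the sparsifier-based machinery of the corollary, and the graph and tolerance are illustrative assumptions.

```python
def matvec(L, x):
    return [sum(Li[j] * x[j] for j in range(len(x))) for Li in L]

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def cg_laplacian(L, b, iters=200, tol=1e-12):
    """Conjugate gradient for Lx = b; with x0 = 0 and sum(b) = 0, the
    iterates stay orthogonal to the all-ones kernel of L."""
    n = len(b)
    x, r = [0.0] * n, list(b)
    p, rs = list(r), dot(r, r)
    for _ in range(iters):
        if rs < tol:
            break
        Ap = matvec(L, p)
        alpha = rs / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * Api for ri, Api in zip(r, Ap)]
        rs_new = dot(r, r)
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

# Laplacian of the path 0-1-2-3 with unit weights.
L = [[1.0, -1.0, 0.0, 0.0],
     [-1.0, 2.0, -1.0, 0.0],
     [0.0, -1.0, 2.0, -1.0],
     [0.0, 0.0, -1.0, 1.0]]
b = [1.0, 0.0, 0.0, -1.0]  # route one unit of demand from vertex 0 to vertex 3
x = cg_laplacian(L, b)
residual = max(abs(ri - bi) for ri, bi in zip(matvec(L, x), b))
assert residual < 1e-6
```

The fast solvers discussed above replace the plain iteration here with sparsifier-based preconditioning; the input/output contract is the same.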
Furthermore, by derandomizing the interior-point-method-based results of [Mad13, Mad16, CMSV17], the following problems in directed m-edge graphs with edge costs/weights in the range [−W, W] can also be solved in deterministic Ô(m^{10/7} log W) time:

• unit-capacity maximum flow and maximum bipartite matching,
• single-source shortest paths (with negative weights),
• minimum-cost bipartite perfect matching,
• minimum-cost bipartite perfect b-matching, and
• minimum-cost unit-capacity maximum flow.

A discussion of the history of these problems can be found in [CMSV17].

In this subsection, we discuss applications of our results to approximate maximum s-t flow in undirected edge-capacitated graphs. Given an edge-capacitated graph G = (V, E), a target flow value b, and an accuracy parameter 0 < ǫ <
1, the goal is to either compute an s-t flow of value at least (1 − ǫ)b, or to certify that the maximum s-t flow value is less than b, by exhibiting an s-t cut of capacity less than b. We note that the problem can equivalently be defined using a demand function b : V → R with Σ_{v∈V} b_v = 0, by setting b_s = −b, b_t = b, and, for all v ∈ V \ {s, t}, b_v = 0. In general, given an arbitrary demand function b : V → R with Σ_{v∈V} b_v = 0, we say that a flow f satisfies the demand b iff, for every vertex v ∈ V, the excess flow at v, which is the total amount of flow entering v minus the total amount of flow leaving v, is precisely b_v.

The maximum s-t flow problem is among the most basic and extensively studied problems. There are several near-linear time randomized algorithms for computing (1 + ǫ)-approximate maximum flows [She13, KLOS14, Pen16]; the fastest current randomized algorithm, due to Sherman [She17], has running time Õ(m/ǫ). Our results imply a deterministic algorithm for approximate maximum flow in undirected edge-capacitated graphs, with almost-linear running time.

Corollary 6.10.
There is a deterministic algorithm that, given an m-edge connected graph G with capacities c_e ≥ 1 on edges e ∈ E, such that max_e c_e / min_e c_e ≤ O(poly(m)), a demand function b ∈ R^V with Σ_{v∈V} b_v = 0, an integer 1 ≤ r ≤ O(log m), and an accuracy parameter 0 < ǫ ≤ 1, computes, in time T_MaxFlow(m, ǫ) = O(m^{1+O(1/r)} · (log m)^{O(r)} · ǫ^{−2}), either:

• (Flow): a flow f satisfying the demand b, with |f(e)| ≤ (1 + ǫ)·c_e for every edge e; or
• (Cut): a cut S such that Σ_{e∈E(S,S̄)} c_e < |Σ_{v∈S} b_v|.

In particular, choosing r = Θ((log m / log log m)^{1/2}) gives a total running time of m · exp(O((log m · log log m)^{1/2})) · ǫ^{−2} < O(m^{1+o(1)} · ǫ^{−2}).

Our proof of Corollary 6.10 closely follows the algorithm of [She13], and proceeds by constructing a congestion approximator. Unlike the algorithm of [She13], our algorithm for computing the congestion approximator is deterministic, and is obtained by replacing a randomized procedure of [She13] for constructing a cut sparsifier with the deterministic algorithm from Corollary 6.4. We then use the reduction of [She13] from (1 + ǫ)-approximate maximum flow to congestion approximators.

Suppose we are given a graph G = (V, E), where E = {e_1, ..., e_m}, with capacities (or weights) c_e > 0 for e ∈ E. Let C be the diagonal (m × m) matrix, such that, for all 1 ≤ i ≤ m, the entry (i, i) of the matrix is c_{e_i}. Assume now that we are given a demand vector b : V → R, with Σ_{v∈V} b_v = 0. Let f be any flow that satisfies the demand b. The congestion η of this flow is the maximum, over all edges e ∈ E, of |f(e)|/c_e. Equivalently, η = ‖C^{−1} f‖_∞. By scaling the flow f by factor η, we obtain a valid flow routing the demand b/η.
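The congestion and scaling argument above can be checked mechanically. In the sketch below, the graph, capacities, demand, and flow are made-up illustrations: we compute the excess at each vertex, the congestion η = max_e |f(e)|/c_e, and verify that the scaled flow f/η has congestion 1 and routes the demand b/η.

```python
# Directed edges (u, v), capacities c, and a flow value f given per edge.
edges = [(0, 1), (1, 2), (0, 2)]
c = [2.0, 1.0, 4.0]
f = [3.0, 3.0, 2.0]  # 3 units along 0->1->2, plus 2 units along 0->2
n = 3

def excess(edges, f, n):
    """Excess at v = total flow entering v minus total flow leaving v."""
    b = [0.0] * n
    for (u, v), fe in zip(edges, f):
        b[u] -= fe
        b[v] += fe
    return b

b = excess(edges, f, n)                          # the demand that f routes
eta = max(abs(fe) / ce for fe, ce in zip(f, c))  # congestion of f
scaled = [fe / eta for fe in f]

assert eta == 3.0  # the bottleneck is edge (1,2): |3| / 1
assert max(abs(fe) / ce for fe, ce in zip(scaled, c)) <= 1.0
scaled_b = excess(edges, scaled, n)
assert all(abs(scaled_b[i] - b[i] / eta) < 1e-12 for i in range(n))
```

This is exactly the equivalence used next: minimizing congestion for a fixed demand is interchangeable with maximizing the routable multiple of that demand.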
Therefore, the maximum s-t flow problem is equivalent to the following problem: given a graph G = (V, E), with capacities c_e on edges e ∈ E, and a demand vector b : V → R with Σ_{v∈V} b_v = 0, compute a flow f satisfying the demand b, while minimizing the congestion η = ‖C^{−1} f‖_∞ among all such flows.

For convenience, we define an incidence matrix B associated with graph G. We direct the edges of G arbitrarily. Matrix B is an (n × m) matrix, whose rows are indexed by the vertices and whose columns are indexed by the edges of G. Entry (v_j, e_i) is −1 if e_i is an edge that leaves v_j, it is 1 if e_i is an edge that enters v_j, and it is 0 otherwise. Notice that, for any flow f, the j-th entry of B·f is the excess flow at vertex v_j: the total amount of flow entering v_j minus the total amount of flow leaving v_j. If a flow f satisfies a demand vector b, then B·f = b must hold. Next, we recall the definition of congestion approximators from [She13].

Definition 6.11.
Let G be a graph with n vertices and m edges, let C be the diagonal matrix containing the edge capacities, and let B be the (n × m) incidence matrix. An α-congestion approximator for G is a matrix R that contains n columns and an arbitrary number of rows, such that, for any demand vector b:

‖Rb‖_∞ ≤ opt_G(b) ≤ α · ‖Rb‖_∞,

where opt_G(b) is the value of the optimal solution to the minimum-congestion flow problem: min ‖C^{−1} f‖_∞ s.t. Bf = b.

The main goal of this section is to prove the following lemma, which provides a deterministic construction of congestion approximators, and is used to replace its randomized counterpart, Theorem 1.5 of [She13]. We note that we obtain somewhat weaker parameters in the approximation factor and the running time.
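For intuition: on a tree, the matrix R built from the n − 1 edge-cuts is already a 1-congestion approximator, since the flow across each tree edge is uniquely determined by the demand, and equals the net demand of the subtree that the edge separates. The sketch below verifies ‖Rb‖_∞ = opt_G(b) on a path graph; the instance is an illustrative assumption.

```python
# Path 0-1-2-3; edge i connects vertices i and i+1 and has capacity c[i].
c = [2.0, 1.0, 3.0]
b = [3.0, -1.0, 0.0, -2.0]  # demand vector, sums to zero
assert sum(b) == 0.0

# On a path, the (unique) flow across edge i equals the net demand of the
# prefix {0, ..., i}, so the optimal congestion is computable directly.
prefix = []
s = 0.0
for bi in b[:-1]:
    s += bi
    prefix.append(s)
opt = max(abs(p) / ci for p, ci in zip(prefix, c))

# Rows of R: one per edge-cut S_i = {0, ..., i}, scaled by 1/c[i], so the
# i-th entry of R b has absolute value b_{S_i} / c_{S_i}.
Rb = [abs(p) / ci for p, ci in zip(prefix, c)]
assert max(Rb) == opt == 2.0  # alpha = 1: the approximator is exact here
```

The construction below generalizes this idea: general graphs are reduced to collections of trees, at the cost of an approximation factor α > 1.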
Lemma 6.12 (Deterministic version of Theorem 1.5, [She13]). There is a deterministic algorithm, that we call
CongestionApproximator(G, r), that, given as input an n-vertex m-edge graph G with capacity ratio U and an integer 1 ≤ r ≤ O(log m), constructs a (log m)^{O(r)}-congestion approximator R, in time O(m^{1+O(1/r)} · log^{O(r)}(mU)). Once R is constructed, we can compute a multiplication of R or of R^⊤ by a vector, in time O(m^{1+O(1/r)} · log^{O(r)}(mU)) each.

For a cut (S, S̄) in an edge-capacitated graph G = (V, E, c), we denote c_S = Σ_{e∈E(S,S̄)} c_e. Given a demand vector b for G, we also denote b_S = |Σ_{v∈S} b_v|.

Sherman [She13] provides the following method for turning an efficient congestion approximator into an algorithm for computing approximate maximum flow.

Lemma 6.13 (Theorem 2.1, [She13]). There is a deterministic algorithm that, given a graph G = (V, E) with edge weights c_e for e ∈ E and a demand vector b : V → R with Σ_{v∈V} b_v = 0, together with access to an α-congestion approximator R of G, performs Õ(α^2 · ǫ^{−2}) iterations, and returns a flow f and a cut S in G, with Bf = b and ‖C^{−1} f‖_∞ ≤ (1 + ǫ) · b_S / c_S. Each iteration requires O(m) time, plus the time needed to multiply a vector by R, and the time needed to multiply a vector by R^⊤.

Note that combining Lemma 6.12 with Lemma 6.13 immediately yields a proof of Corollary 6.10. Indeed, if the algorithm from Lemma 6.13 outputs a cut S with b_S > c_S, then we return the cut S as the outcome of our algorithm for Corollary 6.10. Otherwise, b_S / c_S ≤ 1, and the flow f output by Lemma 6.13 satisfies ‖C^{−1} f‖_∞ ≤ 1 + ǫ. We then return f as the outcome of Corollary 6.10. The algorithm from Lemma 6.13 performs Õ(α^2 / ǫ^2) iterations, where α = log^{O(r)} m. Every iteration takes time O(m^{1+O(1/r)} · log^{O(r)} m), since U = poly(m) by assumption. Therefore, the total running time of the algorithm from Lemma 6.13 is at most O(m^{1+O(1/r)} · log^{O(r)} m / ǫ^2).
In order to complete the proof of Corollary 6.10, it now remains to prove Lemma 6.12.

Proof of Lemma 6.12
Instead of constructing the matrix R directly, we (implicitly) construct a graph H with V(H) = V(G), that can be thought of as a cut sparsifier for G, and then construct a collection C = {(A_i, B_i)}_i of cuts in graph H. The corresponding matrix R then contains a row for each cut (A_i, B_i), where, for each 1 ≤ j ≤ n, the j-th entry of the row corresponding to cut (A_i, B_i) is 1/c_{A_i} if vertex v_j ∈ A_i, and 0 otherwise (the values c_{A_i} are defined with respect to the sparsifier H). Note that, for any vector b ∈ R^n, the value of the i-th entry of R·b is either b_{A_i}/c_{A_i} or −b_{A_i}/c_{A_i}. Therefore, from the maximum-flow / minimum-cut theorem, we are guaranteed that, if f is a flow satisfying b in graph H, then ‖Rb‖_∞ ≤ ‖C^{−1} f‖_∞ (recall that ‖C^{−1} f‖_∞ is the congestion caused by the flow f). Therefore, our goal is to define the set C of cuts in H so that, on the one hand, |C| is small, and, on the other hand, for every demand vector b, there exists some cut (A_i, B_i) ∈ C, such that b_{A_i}/c_{A_i} ≥ opt_H(b)/α. The sparsifier H is in fact a convex combination of a collection F of spanning trees of G, and the cuts in C are the cuts defined by these trees (that is, for each tree T ∈ F, for every edge e of T, we add the cut defined by T \ {e} to C).
As the total number of all such cuts is large, the matrix R itself is also large, so we cannot afford to construct it explicitly. Instead, the recursive procedure that we use in order to construct the collection F of trees can also be employed in order to efficiently compute a multiplication of R or of R^⊤ by a vector. We note that this algorithm is only a slight modification of the algorithm of Sherman [She13]. The construction of the cut sparsifier H for G, and of the corresponding collection F of trees, is a direct modification of the j-tree-based construction of cut approximators by Madry [Mad10a]. The algorithm of [Mad10a] constructs the cut sparsifier by gradually reducing G to (several) tree-like objects, called j-trees. Each such object consists of a collection of disjoint trees, and another relatively small graph, called a core. We can then consider all cuts defined by the edges of the trees, and recursively sparsify the core after contracting the trees, in order to consider the cuts that partition the core. In order to accomplish this, the algorithm alternates between reducing the number of vertices and the number of edges in the graph whose cuts we need to approximate. The former is achieved by routing along adaptively generated low-stretch spanning trees, while the latter uses a randomized algorithm of Benczur and Karger [BK02].

Our modifications of the constructions of Madry [Mad10a] and Sherman [She13] consist of three main components:

1. observing that the routing along the trees is entirely deterministic;
2. showing that the randomized algorithm of [BK02] for computing cut sparsifiers can be directly replaced by its deterministic counterpart from Corollary 6.4 (though with a larger approximation factor); and
3. showing that, instead of sampling from the distribution over partial routings, we can recursively construct congestion approximators for each of them, and analyzing the total cost of the recursion.

We start with several definitions that we need.
Definition 6.14 (Embeddings of weighted graphs). Let G, H be two graphs with V(G) = V(H), and assume that we are given edge weights w_e for all edges e ∈ E(H). An embedding of H into G is a collection P = {P(e) | e ∈ E(H)} of paths in G, such that, for each edge e ∈ E(H), path P(e) connects the endpoints of e in G. We say that the embedding causes congestion η iff, for every edge e′ ∈ E(G):

Σ_{e ∈ E(H): e′ ∈ P(e)} w_e ≤ η.

We also need the following definitions of graph composition and cut domination.
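The congestion of Definition 6.14 is easy to evaluate directly: for every edge e′ of G, sum the weights w_e of the H-edges whose embedding path P(e) uses e′, and take the maximum. The graphs, weights, and paths below are made-up illustrations.

```python
# G is the path 0-1-2-3; its edges are identified by vertex pairs.
# H has three weighted edges, each embedded into G along a path of G-edges.
w = {(0, 2): 1.0, (1, 3): 2.0, (0, 1): 1.0}
P = {
    (0, 2): [(0, 1), (1, 2)],  # embedding path for H-edge (0,2)
    (1, 3): [(1, 2), (2, 3)],  # embedding path for H-edge (1,3)
    (0, 1): [(0, 1)],          # embedding path for H-edge (0,1)
}

def embedding_congestion(w, P):
    """Max over G-edges of the total weight of H-edges routed through it."""
    load = {}
    for e, path in P.items():
        for g_edge in path:
            load[g_edge] = load.get(g_edge, 0.0) + w[e]
    return max(load.values())

assert embedding_congestion(w, P) == 3.0  # G-edge (1,2) carries 1.0 + 2.0
```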
Definition 6.15.
Let G = (V, E, w) and H = (V, E′, w′) be two edge-weighted graphs defined over the same vertex set. We define their composition K = G + H to be the edge-weighted graph K = (V, F, z), where F is the disjoint union of the edge sets E and E′, and, for every edge e ∈ F, its weight is defined to be z_e = w_e if e ∈ E, and z_e = w′_e if e ∈ E′.

Given an edge-weighted graph G = (V, E, w) and a scalar α, we let αG be the edge-weighted graph (V, E, α·w).

Definition 6.16.
Suppose we are given two edge-weighted graphs G = (V, E, w) and H = (V, E′, w′) that are defined over the same vertex set. We say that G cut-dominates H, denoted G ≥_c H, iff, for every partition (S, S̄) of V, Σ_{e∈E_G(S,S̄)} w_e ≥ Σ_{e∈E_H(S,S̄)} w′_e.

We use the following definition of j-trees, due to [Mad10a]:

Definition 6.17.
A graph H is a j-tree if it is the union of:

• a subgraph H′ of H (called the core), induced by a set V_{H′} of at most j vertices; and
• a forest (that we refer to as the peripheral forest), where each connected component of the forest contains exactly one vertex of V_{H′}. For each core vertex v ∈ V_{H′}, we let T_H(v) denote the unique tree of the peripheral forest that contains v. When the j-tree H is unambiguous, we may use T(v) instead.

We use the following theorem, which is a restatement of Theorem 3.6 from [Mad10a].

Lemma 6.18 ([Mad10a]). There is a deterministic algorithm that, given an edge-weighted graph G = (V, E, w) with |E| = m and capacity ratio U = max_{e∈E} w_e / min_{e∈E} w_e, together with a parameter t ≥ 1, computes, in time Õ(tm), a distribution {λ_i}_{i=1}^t over a collection of t edge-weighted graphs G_1, ..., G_t, where, for each 1 ≤ i ≤ t, G_i = (V, E_i, w_i), and the following hold:

• for all 1 ≤ i ≤ t, graph G_i is an (m · log^{O(1)} m · log U / t)-tree, whose core contains at most m edges;
• for all 1 ≤ i ≤ t, G embeds into G_i with congestion 1; and
• the average of these graphs over the distribution, G̃ = Σ_i λ_i G_i, can be embedded into G with congestion O(log m · (log log m)^{O(1)}).

Moreover, the capacity ratio of each graph G_i is at most O(mU).

We remark that Madry's algorithm uses low-stretch spanning trees as a black box, and is deterministic outside of the low-stretch spanning tree algorithm. Madry uses the fastest algorithm available at the time of his paper, [ABN08], which is randomized; but the more recent algorithm of [AN12] is deterministic and even produces better bounds, so we can simply use the latter instead, keeping the construction deterministic.

Notice that the distribution over the j-trees from the above theorem essentially provides a construction of a cut sparsifier G̃ = Σ_i λ_i G_i for graph G.
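The structural condition of Definition 6.17 can be verified mechanically: the edges with an endpoint outside the core must form a forest, and every connected component of that forest must contain exactly one core vertex. A toy checker (the instance is made up):

```python
def is_j_tree(n, edges, core):
    """Check Definition 6.17: edges with an endpoint outside `core` must form
    a forest whose components each contain exactly one core vertex."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for u, v in edges:
        if u in core and v in core:
            continue  # edges inside the core are unconstrained
        ru, rv = find(u), find(v)
        if ru == rv:
            return False  # a cycle among peripheral-forest edges
        parent[ru] = rv
    core_count = {}
    for v in range(n):
        core_count[find(v)] = core_count.get(find(v), 0) + (1 if v in core else 0)
    return all(cnt == 1 for cnt in core_count.values())

core = {0, 1}
good = [(0, 1), (2, 0), (3, 0), (4, 1)]  # a core edge plus peripheral trees
bad = good + [(2, 3)]                     # creates a peripheral cycle
assert is_j_tree(5, good, core) is True
assert is_j_tree(5, bad, core) is False
```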
Next, we show that it is sufficient to consider two types of cuts in this sparsifier: cuts defined by the edges of the peripheral forests of the j-trees G_1, ..., G_t, and cuts that are obtained by first contracting each peripheral tree of a graph G_i, and then partitioning its core. This observation was also shown by Madry, in Theorem 4.1 and Lemma 6.1 of [Mad10a]. In order to do so, we need to define "truncated" versions of the demand vector, which we do next.

Definition 6.19. If H = (V_H, E_H, c_H) is a j-tree with core V_{H′}, and b is a demand vector, then:

• for each peripheral tree T(v) (with v ∈ V_{H′}), the demand vector b truncated to the tree, b_{T(v)} ∈ R^{V_{T(v)}}, is defined as:

b_{T(v),w} := b_w if w ≠ v, and b_{T(v),v} := −Σ_{u ∈ T(v), u ≠ v} b_u;

• the demand vector b_{H′} on V_{H′}, obtained by summing b over the corresponding trees, is defined as:

b_{H′,v} := Σ_{w ∈ V(T(v))} b_w, for all v ∈ V_{H′}.

The following lemma shows that routing a demand on a j-tree decomposes into routings on the peripheral trees and on the core.

Lemma 6.20.
Let H = (V_H, E_H, c_H) be a j-tree with core V_{H′}, and let b be a demand vector. If the following two conditions hold:

• for each peripheral tree T(v) (with v ∈ V_{H′}), the demand b_{T(v)} can be routed in T(v) with congestion at most ρ; and
• the demand b_{H′} can be routed in H′ with congestion at most ρ,

then b can be routed in H with congestion at most ρ as well.

This lemma is phrased in terms of routings, due to max-flow/min-cut duality. Note that all the cuts in the peripheral trees T(v) and in the core graph H′ are valid cuts to be considered in H as well. Therefore, checking the congestion of these two routings, on the peripheral trees and on the core graph, provides a lower bound on the congestion needed to route b.

Proof.
This follows from the fact that a minimum cut must have both sides connected: if a tree edge is cut, then one of the two sides must fall entirely within one of the peripheral trees, and such a cut is captured by the tree cuts. Otherwise, the only edges that are cut lie within the core graph, and the cut is also one of the cuts of H′.

Alternatively, we can use a flow-based proof, where we route b in two stages: first, all the demand within each peripheral tree moves to its root. The congestion of this stage is exactly the maximum congestion of a tree edge, since the flow across each tree edge is uniquely determined by the vertex demands. Then we route all the demands accumulated at the root vertices across the core graph.

We also verify that all the cuts checked in the trees and in the cores are valid cuts in G as well, and thus what we find is also a lower bound on the optimum congestion.

Lemma 6.21.
Let H = (V_H, E_H, c_H) be a j-tree with core V_{H′}, and let b be a demand vector. Then the demand vector b_{H′}, migrated onto H′ as given in Definition 6.19, satisfies opt_{H′}(b_{H′}) ≤ opt_H(b).

Proof.
Take the cut-based characterization of the optimal congestion, in which opt_H(b) is the maximum, over all cuts, of the congestion of the cut.

Every cut of V_{H′} can be extended to a cut of H by placing, for every core vertex v ∈ V_{H′}, the tree T(v) on the same side of the cut as v.

Because each T(v) is on the same side of the cut, the total capacity of the cut edges is unchanged. Furthermore, the sum of the demands b on each side of the cut is unchanged, by the way we constructed b_{H′,v}. Thus, the set of cuts that we examined on V_{H′} corresponds to a subset of the cuts of H, which means that the most congested cut in H has value at least as large.

Thus, we can recurse on all core graphs after sparsifying them. This leads to the algorithm CongestionApproximator, whose pseudocode is given in Algorithm 1. For the sake of brevity, we do not explicitly define the functions needed to apply R and R^⊤ (i.e., to compute Rb and R^⊤y for vectors b and y), but only describe implicitly how they are computed together with the recursion.

Algorithm 1: Pseudocode for Constructing Congestion Approximator
CongestionApproximator(G = (V_G, E_G, c_G), t, r):

• (Base case) If G contains at most one edge uv: implicitly append a row corresponding to the cut {u}, and return.
• Using Lemma 6.18 with parameter t = Θ(m^{1/r} · log^{O(1)} m · log U), compute a distribution (λ_i, H_i)_{i=1}^t over max(1, m · log^{O(1)} m · log U / t)-trees.
• For i = 1 ... t:
  – For each edge e in the peripheral forest of H_i:
    ∗ Let S be the set of vertices (which form a subtree) that are disconnected from the core of H_i when e is cut, and let c_{H_i} be the capacity vector of H_i. Implicitly append to the congestion approximator R a row that measures b_S / c_{H_i,S} = b_S / c_{H_i,e}:
      · This corresponds to a row r^⊤ = χ_S^⊤ / c_{H_i,e}: the indicator vector of the vertices of S, times 1/c_{H_i,e}.
      · For a demand vector b, the corresponding entry r^⊤b of Rb can be computed by data structures that compute the sum of the values in a subtree in O(log n) time.
      · When computing R^⊤y for a vector y, the new row contributes y_j · r to the result, where y_j is the entry of y corresponding to this row. This can be computed by adding the value y_j / c_{H_i,e} to all vertices of the subtree, which also takes O(log n) time using tree data structures.
  – Let H′_i be the core graph of H_i (with the edges of the peripheral forest of H_i removed). Set up mappings between the rows of a congestion approximator on V_{H′_i} and the rows of a congestion approximator on G:
    ∗ A row r̃ of a congestion approximator on V_{H′_i} is mapped to a row r on V_G by duplicating the value at v to all vertices of T(v), i.e., r_u = r̃_v for all u ∈ T(v). This implies the following mappings of demands / dual variables to and from V_{H′_i}:
    ∗ For each u ∈ T(v), b_u gets added to b_v. We call the resulting vector on V_{H′_i} b_{H′_i}.
    ∗ Any congestion approximator R′ on V_{H′_i} is extended back to all vertices by duplicating the value at vertex v, (R′^⊤y)_v, to all vertices of T(v). This is equivalent to extending every row r′ by duplicating r′_v to all entries of T(v).
  – Set H̃′_i ← Sparsify(H′_i, r), where Sparsify was introduced in Corollary 6.4.
  – Recursively call
CongestionApproximator(H̃′_i, t, r). The rows implicitly created by this call are implicitly mapped back via the mapping above.

We analyze the correctness of the congestion approximator produced, and the overall running time, below.

Proof of Lemma 6.12.
Let t = ⌈m^{1/r}⌉, and run CongestionApproximator(G, t, r). We may assume that m = t^k, by padding with edges, and we prove the bounds by induction on k. When k = 0, the statement is trivially true, as R is a 1-congestion approximator of G. Assume now that k ≥ 1. We first bound the quality of the congestion approximator produced. We will show that:

‖Rb‖_∞ · (log m)^{−kO(r)} ≤ opt_G(b) ≤ (log m)^{kO(r)} · ‖Rb‖_∞

for every integer k ≤ r. At the top level, k is equal to r. We scale R up by a (log m)^{O(r)} factor at the top level, to meet the definition of a congestion approximator.

We first show that ‖Rb‖_∞ ≤ opt_G(b) · (log m)^{kO(r)}. Consider first a cut S corresponding to some forest edge e, with capacity c_{H_i,e}. Let χ_S be the indicator vector of S, with value 1 at every u ∈ S and 0 everywhere else, and let c_G denote the capacity vector of G. We have:

|χ_S^⊤ b / c_{H_i,e}| = b_S / c_{H_i,e} ≤ b_S / c_{G,S} ≤ opt_G(b).

Recall that H̃′_i = Sparsify(H′_i, r) is a spectral sparsifier of H′_i. Let r̃ be any row of the congestion approximator computed by the recursive call CongestionApproximator(H̃′_i, t, r) (i.e., one of the rows implicitly added in that recursive call). By Corollary 6.4, graph H̃′_i has at most (m · log^{O(1)} m · log U / t) · O(log m · log U) ≤ m/t edges. By the inductive hypothesis, |r̃ · b_{H′_i}| ≤ (log m)^{(k−1)O(r)} · opt_{H̃′_i}(b_{H′_i}). As r̃ is mapped to r by duplicating the value at v to all vertices of T(v), we get |r·b| = |r̃ · b_{H′_i}|. Since H̃′_i ≤_c (log m)^{O(r)} · H′_i, by the multicommodity max-flow/min-cut theorem [LR99], opt_{H̃′_i}(b_{H′_i}) ≤ (log m)^{O(r)} · O(log n) · opt_{H′_i}(b_{H′_i}) = (log m)^{O(r)} · opt_{H′_i}(b_{H′_i}). By Lemma 6.21, opt_{H′_i}(b_{H′_i}) ≤ opt_{H_i}(b). Since G embeds into H_i with congestion 1, opt_{H_i}(b) ≤ opt_G(b).
Thus:

|r·b| = |r̃ · b_{H′_i}| ≤ (log m)^{(k−1)O(r)} · opt_{H̃′_i}(b_{H′_i}) ≤ (log m)^{kO(r)} · opt_{H′_i}(b_{H′_i}) ≤ (log m)^{kO(r)} · opt_{H_i}(b) ≤ (log m)^{kO(r)} · opt_G(b).

Next, we show that opt_G(b) ≤ ‖Rb‖_∞ · (log m)^{kO(r)}. Let S be a subset of V such that opt_G(b) = b_S / c_{G,S}. Since Σ_i λ_i H_i can be embedded into G with congestion Õ(log m), there exists an index i such that c_{H_i,S} ≤ Õ(log m) · c_{G,S}, where c_{H_i,S} is the total capacity of the edges leaving S in H_i. Thus:

opt_G(b) = b_S / c_{G,S} ≤ Õ(log m) · b_S / c_{H_i,S} ≤ Õ(log m) · opt_{H_i}(b).

By Lemma 6.20, opt_{H_i}(b) ≤ max{opt_{H′_i}(b_{H′_i}), ρ}, where ρ = max{b_S / c_{H_i,S} | S is the tree cut corresponding to a forest edge e}. Each value b_S / c_{H_i,S} is upper-bounded by ‖Rb‖_∞, since χ_S / c_{H_i,S} is a row of R. The value opt_{H′_i}(b_{H′_i}) is upper-bounded by (log m)^{O(r)} · opt_{H̃′_i}(b_{H′_i}), since H′_i ≤_c (log m)^{O(r)} · H̃′_i. By the inductive hypothesis:

opt_{H̃′_i}(b_{H′_i}) ≤ ‖R̃ · b_{H′_i}‖_∞ · (log m)^{(k−1)O(r)},

where R̃ is the congestion approximator computed by CongestionApproximator(H̃′_i, t, r). R̃ is mapped to a submatrix R^{(i)} of R, such that R^{(i)} · b = R̃ · b_{H′_i}. Thus, ‖R̃ · b_{H′_i}‖_∞ = ‖R^{(i)} · b‖_∞ ≤ ‖Rb‖_∞. Connecting these inequalities gives:

opt_{H′_i}(b_{H′_i}) ≤ (log m)^{O(r)} · opt_{H̃′_i}(b_{H′_i}) ≤ ‖R̃ · b_{H′_i}‖_∞ · (log m)^{kO(r)} ≤ ‖Rb‖_∞ · (log m)^{kO(r)}.

Together with ρ = max{b_S / c_{H_i,e} | S is the tree cut corresponding to e} ≤ ‖Rb‖_∞, this completes the inductive step.

It remains to bound the total running time, and the total size of the approximators.
There are r ≤ O(log m) levels of recursion, and the capacity ratio increases by a factor of O(m) at each level by Lemma 6.18, so the capacity ratio is always at most U_max ≤ O(m^{O(log m)} · U). At each level, each of the at most t graphs in the distribution is a max(1, m · log^{O(1)} m · log U_max / t)-tree, with at most O(m · log^{O(1)} m · log U_max / t) vertices, which, together with the increase in size caused by the cut sparsifiers from Corollary 6.4, gives at most O(m · log^{O(1)} m · log U_max / t) · log m · log U_max ≤ O(m · log^{O(1)} m · log U / t) ≤ m^{1−1/r} edges, for large enough t = Ω(m^{1/r} · log^{O(1)} m · log U). Summing across the t graphs in the distribution gives a total size of O(m · log^{O(1)} m · log U) ≤ O(m · log^{O(1)}(mU)) edges, which is an increase by a factor of O(log^{O(1)}(mU)) across each level of the recursion. Thus, the total size of these graphs after r levels of recursion is O(m · log^{O(r)}(mU)). The total running time is dominated by sparsifying the graphs across the r levels, which takes time O(m^{1+O(1/r)} · log^{O(r)} m).

This size also serves as an upper bound on the total number of cuts examined in the trees. Multiplying in the cost of running the sparsifier then gives the overall running time as well. Furthermore, the reduction of b to the core graph consists of summing over the trees, and takes time linear in the size of the trees. Thus, the bound on the cost of computing matrix-vector products with R and R^⊤ follows as well.

BalCutPrune in the Low-Conductance Regime
In this section, we complete the proof of Theorem 1.2, by strengthening the results of Theorem 5.1, so that the running time no longer depends on the conductance parameter φ. In order to do so, we start by introducing a new algorithm for the matching player. The algorithm is an analogue of the algorithm from Theorem 3.8, except that its running time no longer depends on φ. This is achieved by using Corollary 6.10 in order to compute flows and cuts, instead of the push-relabel algorithm from Lemma 3.9. However, the new algorithm for the matching player does not return the routing paths, and instead only returns a partial matching, for which we are guaranteed that the routing paths exist. The second obstacle is that we can no longer use the algorithm for expander pruning from Theorem 2.6, since its running time also depends on the parameter φ. Instead, we use a different high-level approach. We define the Most-Balanced Cut problem, and provide a bi-criteria approximation algorithm for it, that exploits the cut-matching game, the algorithm from Theorem 1.6 for the cut player, and the new algorithm for the matching player. We then show that this approximation algorithm can be used in order to approximately solve the BalCutPrune problem. We start by providing a new algorithm for the matching player in Section 7.1.
The new algorithm for the matching player is summarized in the following theorem. The theorem and its proof are similar to Lemma B.18 of [NS17].
Theorem 7.1.
There is a deterministic algorithm, that we call
MatchOrCut, that, given an m-edge graph G = (V, E), two disjoint subsets A, B of its vertices, where |A| ≤ |B| and |A| = N, and parameters z ≥ 1 and 0 < ψ < 1, computes one of the following:

• either a partial matching M ⊆ A × B with |M| > N − z, such that there exists a set P = {P(a, b) | (a, b) ∈ M} of paths in G, where, for each pair (a, b) ∈ M, path P(a, b) connects a to b, and the paths in P cause congestion at most O(log n / ψ); or
• a cut (X, Y) in G, with |X|, |Y| ≥ z/2 and Ψ_G(X, Y) ≤ ψ.

The running time of the algorithm is O(m^{1+o(1)}).

We note that, if the algorithm returns the matching M, then it does not explicitly compute the corresponding set P of paths. Note also that, if the parameter z = 1, and the algorithm returns a matching M, then |M| = N must hold; that is, every vertex of A is matched.

Proof.
Let x = ⌈1/ψ⌉, and let ψ′ = 1/x, so that 1/ψ′ is an integer, and ψ/2 ≤ ψ′ ≤ ψ. Notice that it is enough to prove Theorem 7.1 for the parameter ψ′ instead of ψ, so, for simplicity, we denote ψ′ by ψ from now on. We use the following lemma as a subroutine.

Lemma 7.2.
There is a deterministic algorithm that, given an m-edge graph G = (V, E), two disjoint subsets A′, B′ of its vertices, such that |A′| ≤ |B′| and |A′| = N′, and a parameter 0 < ψ < 1 such that 1/ψ is an integer, computes one of the following:

• either a partial matching M′ ⊆ A′ × B′ with |M′| ≥ Ω(N′), such that there exists a set P′ = {P(a, b) | (a, b) ∈ M′} of paths in G, where, for each pair (a, b) ∈ M′, path P(a, b) connects a to b, and the paths in P′ cause congestion O(1/ψ); or
• a cut (X, Y) in G, with |X|, |Y| ≥ N′/2 and Ψ_G(X, Y) ≤ ψ.

The running time of the algorithm is O(m^{1+o(1)}).

We provide the proof of Lemma 7.2 below, after proving Theorem 7.1 using it. Our algorithm performs a number of iterations. We maintain a current matching M, starting with M = ∅, and subsets A′ ⊆ A, B′ ⊆ B of vertices that do not participate in the current matching M, starting with A′ = A and B′ = B. For the sake of analysis, we keep track of a set P = {P(a, b) | (a, b) ∈ M} of paths in G, where, for each (a, b) ∈ M, path P(a, b) connects a to b in G (but this set of paths is not explicitly computed by the algorithm). We perform iterations as long as |A′| ≥ z. In every iteration, we apply the algorithm from Lemma 7.2 to the current two sets A′, B′ of vertices. If the outcome of the algorithm is a cut (X, Y) with |X|, |Y| ≥ N′/
2, and Ψ_G(X, Y) ≤ ψ, then we say that the current iteration terminated with a cut. We then terminate the algorithm, and return the cut (X, Y) as its output. Since |A′| ≥ z, we are guaranteed that |X|, |Y| ≥ z/2. Otherwise, the algorithm computes a partial matching M′ ⊆ A′ × B′ with |M′| ≥ Ω(N′), such that there exists a set P′ = {P(a, b) | (a, b) ∈ M′} of paths in G, where, for each pair (a, b) ∈ M′, path P(a, b) connects a to b, and the paths in P′ cause congestion O(1/ψ). We then say that the current iteration terminated with a matching. We add the pairs of M′ to the matching M, and delete from the sets A′, B′ all vertices that participate in M′. Also, for the sake of analysis, we implicitly add the paths of P′ to the set P.

Notice that, in every iteration, |A′| is guaranteed to decrease by a constant factor, so the number of iterations is O(log n), and the total running time of the algorithm is O(m^{1+o(1)}). If every iteration of the algorithm terminated with a matching, then, at the end of the algorithm, |A′| < z, and so |M| > N − z. Moreover, there exists a set P = {P(a, b) | (a, b) ∈ M} of paths in G, where, for each pair (a, b) ∈ M, path P(a, b) connects a to b: namely, the set P of paths that we have implicitly maintained. The congestion caused by this set of paths is O(log n / ψ), since there are O(log n) iterations, and the set of paths corresponding to each iteration causes congestion O(1/ψ). In order to complete the proof of Theorem 7.1, it now remains to prove Lemma 7.2.

Proof of Lemma 7.2.
Proof of Lemma 7.2. We can assume that ψ ≥ 1/n, as otherwise the problem is trivial to solve, since we are allowed to compute a routing of A′, B′ with congestion n. We construct a new edge-capacitated graph Ĝ, as follows. We start with Ĝ = G, and we set the capacity c_e of every edge e ∈ E to be 1/ψ (recall that 1/ψ is an integer). Next, we introduce a source vertex s, that connects to every vertex in A′ with an edge of capacity 1, and a destination vertex t, that connects to every vertex in B′ with an edge of capacity 1. We set the demands b(s) = N′/2 and b(t) = −N′/2, and for every vertex v ≠ s, t, we set b(v) = 0. We then apply the algorithm from Corollary 6.10 to this new capacitated graph Ĝ, the resulting demand vector b, and accuracy parameter ǫ = 1/2. Note that the ratio of largest to smallest edge capacity is O(1/ψ) = O(n), and so the running time of the algorithm from Corollary 6.10 is O(m^{1+o(1)}).

We now consider two cases. Assume first that the algorithm computes a cut S with Σ_{e∈E(S,S̄)} c_e < |Σ_{v∈S} b_v|. Since Σ_{v∈V} b_v = 0, we can assume w.l.o.g. that Σ_{e∈E(S,S̄)} c_e < Σ_{v∈S} b_v, by switching the sides of the cut if necessary. Clearly, s ∈ S and t ∉ S must hold, and so Σ_{e∈E(S,S̄)} c_e < N′/2. At least N′/2 vertices of A′ must lie in S (as otherwise the edges connecting the remaining vertices of A′ to s would contribute capacity at least N′/2 to the cut), and similarly at least N′/2 vertices of B′ must lie in S̄. We set X = S \ {s} and Y = S̄ \ {t}, and return the cut (X, Y) as the outcome of the algorithm. As observed above, |X|, |Y| ≥ N′/2. Since every edge of E_G(X, Y) contributes capacity 1/ψ to the cut (S, S̄), we get that |E_G(X, Y)| ≤ ψ · N′/2. We conclude that Ψ_G(X, Y) ≤ ψ.

We now consider the second case, where the algorithm from Corollary 6.10 returns an s-t flow f of value at least N′/2 in Ĝ, that causes edge-congestion at most (1 + ǫ) ≤ 2. In order to turn this flow into the desired matching, we use the link-cut tree data structure of Sleator and Tarjan [ST83]. The data structure maintains a forest F of rooted trees, over a set V of n vertices, with edge costs w(e) for e ∈ E(F), and supports the following operations (we only list operations that are relevant to us):

• Root(v): return the root of the tree containing the vertex v;
• MinCost(v): return the vertex x closest to Root(v), such that the edge connecting x to its parent in the tree has minimum cost among all edges lying on the unique path connecting v to Root(v);
• Parent(v): return the parent of v in the tree containing v;
• Update(v, w): update the costs of all edges lying on the path connecting v to Root(v), by adding w to the cost of each edge (we note that w may be negative);
• Link(u, v, w): for vertices u, v that lie in different trees, add an edge (u, v) of cost w;
• Evert(v): make v the root of the tree containing v; and
• cut(v): delete the edge connecting v to Parent(v) (this operation assumes that v ≠ Root(v)).

Sleator and Tarjan [ST83] showed a deterministic algorithm for maintaining the link-cut tree data structure, with O(log n) worst-case time per operation.

We also use the algorithm of [KP15], that, given any graph H with integral capacities c_e ≥ 1 on its edges, two special vertices s and t, and any (possibly fractional) s-t flow f of value Λ in H, that does not violate the edge capacities, computes, in time Õ(m), an integral s-t flow f′ of value at least Λ, that does not violate the edge capacities. The algorithm of [KP15] is deterministic, and relies on link-cut trees.

We apply the algorithm of [KP15] to graph Ĝ, and the flow f′ = f/2. Note that flow f′ does not violate the capacities of edges in Ĝ, and that its value is at least N′/4. We denote by f′′ the integral flow of value at least N′/4 computed by the algorithm. Flow f′′ naturally defines a flow f̂ in the original graph G, of value at least N′/4, from vertices of A′ to vertices of B′. Since flow f′′ obeys the edge capacities, every vertex in A′ sends either 0 or 1 flow units in f̂, and every vertex in B′ receives either 0 or 1 flow units in f̂. We denote by A′′ ⊆ A′ the set of vertices that send one flow unit, and by B′′ ⊆ B′ the set of vertices that receive one flow unit in f̂; observe that |A′′| = |B′′| ≥ N′/4. The desired matching could be obtained from a path decomposition of the flow f̂. However, computing the flow-paths explicitly may take too much time, so instead, we would like to only compute the pairs of vertices that serve as endpoints of the paths in the decomposition. We do so using link-cut trees. The algorithm, that we refer to as ComputeMatching, is very similar to the algorithm for computing blocking flows in [ST83] (see Section 6 of [ST83]) and is included here for completeness.

We will gradually construct the desired matching M′ ⊆ A′′ × B′′, and we will implicitly maintain a set of paths P′ = {P(a, b) | (a, b) ∈ M′} routing the pairs in M′. The set P′ of paths can be obtained by computing the flow decomposition of f̂. However, our algorithm will not compute the path set P′ explicitly (as this would take too much time), and instead will only guarantee its existence.

We maintain a directed graph H with V(H) = V(G) and E(H) containing all edges e of G with f̂(e) ≠ 0 (the direction of the edge is in the direction of the flow). For every vertex v, we denote by out(v) the set of all edges that leave v in H. We will also maintain an (undirected) forest F with V(F) = V(H), whose edges are a subset of E(H) (though they do not have direction anymore). Further, we will ensure that the following invariants hold throughout the algorithm:

I1. for every vertex v ∈ V(F), at most one edge of out(v) belongs to F;
I2. for every tree T in the forest F, if v is the root of T, then no edge of out(v) lies in F; and
I3.
for every tree T and vertex u ∈ V(T), there is a directed path in graph H connecting u to the root of T, that only contains edges that lie in T.

Throughout the algorithm, we (implicitly) maintain a valid integral flow from vertices of A′′ to vertices of B′′, as follows. For every edge e ∈ E(F), the flow on e is the cost w(e) of the edge in F, and the direction of the flow is the same as the direction of the edge in H. For an edge e ∈ E(H) that does not lie in F, the flow on e is the value f̂(e) (that may be updated over the course of the algorithm). An edge that carries 0 flow units is deleted from both H and F. We will ensure that, for every edge e ∈ F, w(e) ≥ 1 holds.

We use a procedure UpdateFlow(x, w), that receives as input a vertex x of the forest F and an integer w, such that, if we denote by P_x the unique path connecting x to the root r of the tree of F containing x, then for every edge e ∈ P_x, w(e) ≥ w. The procedure decreases the cost of every edge on path P_x by w. Additionally, it deletes every edge e ∈ P_x whose new cost becomes 0, while maintaining all invariants; each such edge is also deleted from H. The procedure starts by executing Update(x, −w), that decreases the weight of every edge on path P_x by w. Next, we iteratively remove edges from F whose new cost became 0. In order to do so, we maintain a current vertex u, starting with u = MinCost(x). An iteration is executed as follows. Let T denote the tree of F containing u, let r be the root of T, and let u′ be the parent of u in the tree. If w(u, u′) ≠ 0, then we terminate the procedure. Otherwise, we execute cut(u), deleting the edge (u, u′) from the tree T, that decomposes into two subtrees: tree T′ containing u, and tree T′′ containing u′ and r. The root of tree T′′ remains r, while the root of T′ becomes u. Observe that all invariants continue to hold. We also delete the edge (u, u′) from the graph H.
We then set u = MinCost(x), and continue to the next iteration.

UpdateFlow(x, w):
• Execute Update(x, −w).
• Set u ← MinCost(x).
• Let T be the tree containing u, let r be the root of T, and let u′ be the parent of u in the tree T.
• While w(u, u′) = 0 and u ≠ Root(u):
  – Delete edge (u, u′) from H.
  – Execute cut(u). This decomposes T into two sub-trees: tree T′ containing u, and tree T′′ containing u′ and r. The root of T′ becomes u and the root of T′′ remains r.
  – Update u ← MinCost(x). Update T to be the tree of F containing the new vertex u, let r be the root of T, and let u′ be the parent of u in T (if u = Root(u), set u′ = u).

We now describe the algorithm ComputeMatching. The algorithm initializes the forest F to contain the set V(G) of vertices and no edges. Notice that all invariants hold for F. It then iteratively considers every vertex a ∈ A′′ one-by-one and applies procedure Process(a) to each such vertex.

ComputeMatching:
• Initialize F to contain the set V(G) of vertices and no edges.
• For all a ∈ A′′: execute Process(a).

We now describe procedure Process(a). The goal of the procedure is to find a vertex b ∈ B′′, such that some path P connecting a to b carries one flow unit in the current flow. We do not compute the path P explicitly, but we reduce, for each edge e ∈ P, the amount of flow that e carries by one unit. The procedure consists of a number of iterations. At the beginning of every iteration, we start with the current vertex v, which is set to be Root(a). The iterations are executed as long as v ∉ B′′. Notice that, from our invariants, no edge in out(v) lies in F. Let T denote the tree of F containing v. We now consider three cases. First, if out(v) = ∅, then it is impossible to reach the vertices of B′′ from v in the current graph H. We iteratively delete every edge (y, v) that belongs to the tree T, using operation cut(y). Each such operation splits the tree T into two subtrees, one whose root remains v, and one whose root becomes y. It is easy to verify that all invariants continue to hold. The second case is when some edge (v, u) ∈ out(v) exists in H, and u ∉ V(T) (this can be checked by running Root(u) and comparing the outcome with v). Let T′ be the tree of F that contains u, and let r′ be its root. We join the two trees by using operation Link(v, u, w), where w is the current flow on edge (v, u) in graph H. The root of the new tree becomes r′. It is immediate to verify that Invariants I1 and I2 continue to hold. In order to verify that Invariant I3 continues to hold, note that for every vertex x ∈ V(T′), there is a directed path P_x in graph H, connecting x to r′, that only contains edges of T′. Consider now some vertex y ∈ V(T). From Invariant I3, there is a directed path P_y in graph H, connecting y to v, that only contains edges of T. By using the edge (v, u) and the path P_u connecting u to r′, we obtain a directed path in graph H, connecting y to r′, that only uses edges that lie in the new tree. The third and last case is when the endpoint u of the edge (v, u) ∈ out(v) lies in the tree T. In this case, there is a directed cycle in graph H, that consists of the edge e = (v, u) and the path P_u that is contained in T and connects u to v. We let w be the minimum between the current flow f̂_e on edge e, and the smallest value w(e′) of an edge e′ ∈ P_u, that can be computed by executing x ← MinCost(u) and inspecting the edge that connects x to its parent in T. We decrease the value f̂_e by w; if the value becomes 0, then we delete the edge e from H. Additionally, we execute UpdateFlow(u, w).

The iterations are terminated once b = Root(a) is a vertex of B′′. We then add the pair (a, b) to M′. The intended path for routing this pair is the path P_a in the tree T containing a that connects a to b.
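Conceptually, ComputeMatching performs a flow path decomposition in which only the endpoints of each path are retained, and flow cycles are cancelled as they are discovered; the link-cut trees merely accelerate this. The following is our own simplified, unoptimized rendering of that idea (no link-cut trees, so the running time is worse than O(m log n)); the flow encoding and helper names are illustrative, not the paper's.

```python
from collections import defaultdict

def extract_matching(flow, sources, sinks):
    """flow: dict mapping a directed edge (u, v) to a nonnegative number of
    flow units; each source sends, and each sink receives, at most 1 unit.
    Returns the endpoint pairs of a path decomposition, cancelling cycles."""
    out = defaultdict(list)
    for (u, v), c in flow.items():
        if c > 0:
            out[u].append(v)

    def next_hop(u):
        while out[u]:
            v = out[u][-1]
            if flow.get((u, v), 0) > 0:
                return v
            out[u].pop()                  # edge exhausted; discard lazily
        return None

    M = []
    for a in sources:
        if next_hop(a) is None:
            continue                      # a sends no flow
        path, on_path, v = [a], {a: 0}, a
        while v not in sinks:
            u = next_hop(v)
            if u is None:
                break                     # dead end: a cannot be routed
            if u in on_path:              # found a flow cycle: cancel it
                i = on_path[u]
                cycle = list(zip(path[i:], path[i + 1:] + [u]))
                w = min(flow[e] for e in cycle)
                for e in cycle:
                    flow[e] -= w
                for node in path[i + 1:]:
                    del on_path[node]
                path, v = path[:i + 1], u
            else:
                path.append(u)
                on_path[u] = len(path) - 1
                v = u
        if v in sinks:                    # route 1 unit along the path found
            for e in zip(path, path[1:]):
                flow[e] -= 1
            M.append((a, v))
    return M
```

Each cycle cancellation zeroes at least one edge, and each path extraction decrements its edges by one unit, which mirrors the role that UpdateFlow plays in the link-cut-tree version.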
We then execute UpdateFlow(a, 1) in order to decrease the flow on the path P_a by one unit.

Process(a):
• Let v ← Root(a), and let T be the tree of F containing v.
• While v ∉ B′′:
  – If out(v) = ∅:
    ∗ for every child z of v, execute cut(z); vertex z becomes the root of the newly created tree.
  – Otherwise, let (v, u) ∈ out(v) be any edge of out(v).
    ∗ If u ∉ T:
      · Let T′ be the tree of F containing u and let r′ be its root.
      · Execute Link(v, u, w), where w is the current flow value f̂(e) of the edge e = (v, u). The root of the new merged tree becomes r′.
    ∗ Otherwise:
      · Let x ← MinCost(u), and let w₁ be the cost of the edge connecting x to its parent in T.
      · Let w₂ be the flow f̂(e) on the edge e = (v, u) in H.
      · Set w = min{w₁, w₂}.
      · Set f̂(e) ← f̂(e) − w. If f̂(e) = 0, delete e from H.
      · Execute UpdateFlow(u, w).
  – Update v ← Root(a), and let T be the tree of F containing v.
• Add (a, v) to M′.
• Execute UpdateFlow(a, 1).

The algorithm therefore computes a matching M′ ⊆ A′′ × B′′, and there exists a set P′ = {P(a, b) | (a, b) ∈ M′} of paths in G, where path P(a, b) connects a to b, such that the paths in P′ cause congestion at most O(1/ψ).

In order to analyze the running time of the algorithm, observe that every edge may be inserted at most once into F and deleted at most once from F (this is since an edge is only deleted from F when the flow on the edge becomes 0; at this point the edge is also deleted from H and is never again inserted into H or F). Observe that the number of update operations of the link-cut tree data structure due to a single call to the UpdateFlow subroutine is O(1 + n′), where n′ is the number of edges that were deleted from F during the procedure (notice that it is possible that no edge is deleted from F during the procedure). Whenever procedure UpdateFlow is called, we either delete at least one edge from F, or we delete at least one edge from H (when we eliminate a flow cycle), or we add one pair to the matching M′. Therefore, the total number of update operations of the link-cut tree data structure due to the UpdateFlow subroutine is O(m). We now consider the execution of procedure Process(a), ignoring the calls to UpdateFlow: every iteration either inserts an edge into F or deletes an edge from F. It is then easy to see that the total running time of ComputeMatching is O(m log n).

In the Most Balanced Sparse Cut problem, the input is a graph G = (V, E), and a parameter 0 < ψ ≤ 1. The goal is to compute a cut (X, Y) in G, with Ψ_G(X, Y) ≤ ψ, of maximum size, which is defined to be min{|X|, |Y|}. The problem (or its variations) was defined independently by [NS17] and [Wul17], and it was also used in [CK19] and [CS19]. As observed in these works, one can obtain a bi-criteria approximation algorithm for this problem by using the cut-matching game. The following two lemmas summarize these algorithms, where we employ Algorithm CutOrCertify from Theorem 1.6 for the cut player, and Algorithm MatchOrCut from Theorem 7.1 for the matching player, in order to implement them efficiently.
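The interplay between the two players can be sketched as follows; both oracles are caller-supplied stand-ins, and the names are ours rather than the paper's.

```python
# Sketch of the cut-matching-game loop behind Lemma 7.3. The cut player
# (`cut_or_certify`) either certifies that the witness graph H expands or
# proposes a balanced bipartition; the matching player (`match_or_cut`)
# either finds a sparse cut of G or returns a matching that is added to H.

def cut_matching_game(cut_or_certify, match_or_cut):
    H = set()                           # edge set of the witness graph H
    while True:
        move = cut_or_certify(H)
        if move[0] == 'expander':       # H certified to expand: done
            return ('witness', H)
        _, A, B = move                  # balanced cut of H: cut player's move
        answer = match_or_cut(A, B)
        if answer[0] == 'cut':          # sparse balanced cut of G found
            return answer
        _, M = answer                   # matching embedded in G; add to H
        H |= set(M)
```

Since H gains a (near-)perfect matching per round and is certified to expand after O(log n) rounds, the loop terminates quickly; pairs the matching player fails to match are patched with fake edges in the proofs below.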
Lemma 7.3.
There are universal constants N₀, c₀, and a deterministic algorithm, that, given an n-vertex and m-edge graph G = (V, E) and parameters 0 < ψ ≤ 1, 0 < z ≤ n and r ≥ 1, such that n^{1/r} ≥ N₀:

• either returns a cut (X, Y) in G with Ψ_G(X, Y) ≤ ψ and |X|, |Y| ≥ z;

• or correctly establishes that for every cut (X′, Y′) in G with Ψ_G(X′, Y′) ≤ ψ/(log n)^{c₀ r}, min{|X′|, |Y′|} < c₀ z · (log n)^{c₀ r} holds.

The running time of the algorithm is O(m^{O(1/r)+o(1)} · (log n)^{O(r)}).

Proof. The algorithm employs the cut-matching game, and maintains a set F of fake edges. We assume that n is an even integer; otherwise we add a new isolated vertex v₀ to G, and we add to F a fake edge connecting v₀ to an arbitrary vertex of G. We also maintain a graph H, that initially contains the set V of vertices and no edges. We then perform a number of iterations, that correspond to the cut-matching game. In every iteration i, we will add a matching M_i to graph H. We will ensure that the number of iterations is bounded by O(log n), so the maximum vertex degree in H is always bounded by O(log n). We now describe the execution of the i-th iteration.

In order to execute the i-th iteration, we apply Algorithm CutOrCertify from Theorem 1.6 to graph H, where the constant N₀ and the parameter r remain unchanged. Assume first that the output of the algorithm from Theorem 1.6 is a cut (A_i, B_i) in H with |A_i|, |B_i| ≥ n/4 and |E_H(A_i, B_i)| ≤ n/100. We then compute an arbitrary partition (A′_i, B′_i) of V(G) with |A′_i| = |B′_i|, such that A_i ⊆ A′_i and B_i ⊆ B′_i. We treat the cut (A′_i, B′_i) as the move of the cut player. Then, we apply Algorithm MatchOrCut from Theorem 7.1 to the sets A′_i, B′_i of vertices, a sparsity parameter ψ′ = ψ/2, and a parameter z′ = 4z.
If the algorithm returns a cut (X, Y) in G, with |X|, |Y| ≥ z′/2 ≥ 2z, and Ψ_G(X, Y) ≤ ψ′, then we terminate the algorithm and return the cut (X, Y), after we delete the extra vertex v₀ from it (if it exists). It is easy to verify that |X|, |Y| ≥ z and Ψ_G(X, Y) ≤ ψ must hold. Otherwise, the algorithm from Theorem 7.1 computes a partial matching M′_i ⊆ A′_i × B′_i with |M′_i| ≥ n/2 − z′, such that there exists a set P′_i = {P(a, b) | (a, b) ∈ M′_i} of paths in G, where for each pair (a, b) ∈ M′_i, path P(a, b) connects a to b, and the paths in P′_i cause congestion at most O(log n / ψ). We let A′′_i ⊆ A′_i, B′′_i ⊆ B′_i be the sets of vertices that do not participate in the matching M′_i, and we let M′′_i be an arbitrary perfect matching between these vertices. We define a set F_i of fake edges, containing the edges of M′′_i, and an embedding P′′_i = {P(e) | e ∈ F_i} of the edges in M′′_i, where each fake edge is embedded into itself. Lastly, we set M_i = M′_i ∪ M′′_i, add the edges of M_i to H, and continue to the next iteration. Notice that |F_i| ≤ z′ = 4z.

We perform the iterations as described above, until Algorithm CutOrCertify from Theorem 1.6 returns a subset S ⊆ V of at least n/2 vertices, such that Φ(H[S]) ≥ 1/(log n)^{O(r)}. Recall that Theorem 2.5 guarantees that this must happen after at most O(log n) iterations. We then perform one last iteration, whose index we denote by q.

We let B_q = S and A_q = V(G) \ S, and apply Algorithm MatchOrCut from Theorem 7.1 to the sets A_q, B_q of vertices, a sparsity parameter ψ′ = ψ/2, and a parameter z′ = 4z. As before, if the algorithm returns a cut (X, Y) in G, with |X|, |Y| ≥ z′/2 ≥ z and Ψ_G(X, Y) ≤ ψ′, then we terminate the algorithm and return the cut (X, Y), after we delete the extra vertex v₀ from it (if it exists). As before, we get that |X|, |Y| ≥ z and Ψ_G(X, Y) ≤ ψ. Otherwise, the algorithm
Otherwise, the algorithmfrom Theorem 7.1 computes a partial matching M ′ q ⊆ A ′ q × B ′ q with | M ′ q | ≥ N − z , such thatthere exists a set P ′ q = n P ( a, b ) | ( a, b ) ∈ M ′ q o of paths in G , where for each pair ( a, b ) ∈ M ′ q ,path P ( a, b ) connects a to b , and the paths in P ′ q cause congestion at most O (cid:16) log nψ (cid:17) . We let A ′ q ⊆ A q , B ′ q ⊆ B q be the sets of vertices that do not participate in the matching M ′ q , and we let M ′′ q be an arbitrary matching that connects every vertex of A ′ q to a distinct vertex of B ′ q (sucha matching must exist since | A q | ≤ | B q | ). As before, we define a set F q of fake edges, containingthe edges of M ′′ q , and an embedding P ′′ q = { P ( e ) | e ∈ F q } of the edges in M ′′ q , where each fakeedge is embedded into itself. Lastly, we set M q = M ′ q ∪ M ′′ q , and we add the edges of M q tograph H .From now on we assume that the algorithm never terminated with a cut ( X, Y ) with | X | , | Y | ≥ z and Ψ G ( X, Y ) ≤ ψ . Note that, from Observation 2.3, the final graph H is a ψ ′ -expander, for ψ ′ ≥ / (log n ) O ( r ) . Moreover, we are guaranteed that there is an embedding of H into G + F with congestion O (cid:16) log nψ (cid:17) , where F = S ri =1 F i is a set of O ( z log n ) fake edges. Notice that, inthe embedding that we constructed, every edge of H is either embedded into a path consistingof a single fake edge, or it is embedded into a path in the graph G ; every fake edge in F servesas an embedding of exactly one edge of H .We now claim that there is a large enough universal constant c , such that, for every cut( X ′ , Y ′ ) in G with Ψ G ( X ′ , Y ′ ) ≤ ψ/ (log n ) c r , min {| X ′ | , | Y ′ |} < c z · (log n ) c r holds. Indeed,consider any cut ( X ′ , Y ′ ) in G with | X ′ | , | Y ′ | ≥ c z · (log n ) c r . It is enough to show thatΨ G ( X ′ , Y ′ ) > ψ/ (log n ) c r . We assume w.l.o.g. 
that |X′| ≤ |Y′|. Notice that (X′, Y′) also defines a cut in graph H, and, since H is a ψ′′-expander, |E_H(X′, Y′)| ≥ ψ′′ · |X′| ≥ ψ′′ · c₀ z · (log n)^{c₀ r}. We partition the set E_H(X′, Y′) of edges into two subsets. The first subset, E₁, is the set of edges corresponding to the fake edges (so each edge e ∈ E₁ is embedded into a path consisting of a single fake edge), and E₂ contains all remaining edges (each of which is embedded into a path of G). Recall that the total number of fake edges is |F| ≤ O(z log n), while ψ′′ = 1/(log n)^{O(r)}. Therefore, by letting c₀ be a large enough constant, we can ensure that |E₁| ≤ |E_H(X′, Y′)|/2. The embedding of H into G + F defines, for every edge e ∈ E₂, a corresponding path P(e) in G, that must contribute at least one edge to the cut E_G(X′, Y′). Since the embedding causes congestion O(log n / ψ), we get that:

|E_G(X′, Y′)| ≥ Ω(|E_H(X′, Y′)| · ψ / log n) ≥ Ω(ψ′′ · ψ · |X′| / log n) ≥ Ω(ψ · |X′| / (log n)^{O(r)}).

By letting c₀ be a large enough constant, we get that Ψ_G(X′, Y′) > ψ/(log n)^{c₀ r}, as required (we note that we have ignored the extra vertex v₀ that we have added to G if |V(G)| is odd, but the removal of this vertex can only change the cut sparsity and the cardinalities of X′ and Y′ by a small constant factor, that can be absorbed in c₀).

Lastly, we bound the running time of the algorithm. The algorithm consists of O(log n) iterations. Every iteration employs Algorithm CutOrCertify from Theorem 1.6, whose running time is O(n^{O(1/r)} · (log n)^{O(r)}), and Algorithm MatchOrCut from Theorem 7.1, whose running time is O(m^{1+o(1)}). Therefore, the total running time is O(m^{O(1/r)+o(1)} · (log n)^{O(r)}).

Lemma 7.4.
There are universal constants N₀, c₀, and a deterministic algorithm, that, given an n-vertex m-edge graph G = (V, E) and parameters 0 < ψ ≤ 1 and r ≥ 1, such that n^{1/r} ≥ N₀:

• either returns a cut (X, Y) in G with Ψ_G(X, Y) ≤ ψ;

• or correctly establishes that G is a ψ′-expander, for ψ′ = ψ/(log n)^{c₀ r}.

The running time of the algorithm is O(m^{O(1/r)+o(1)} · (log n)^{O(r)}).

Proof. The proof is almost identical to the proof of Lemma 7.3. The only difference is that we set the parameter z, that is used in the calls to Algorithm MatchOrCut from Theorem 7.1, to 1. This ensures that no fake edges are introduced. The remainder of the proof is unchanged.

We note that Lemma 7.4 immediately gives a deterministic (log n)^r-approximation algorithm for the Sparsest Cut problem with running time O(m^{O(1/r)+o(1)} · (log n)^{O(r)}), for all r ≤ O(log n), proving Theorem 1.4 for the Sparsest Cut problem.

The goal of this subsection is to prove the following theorem.

Theorem 7.5.
There is a universal constant N₁, and a deterministic algorithm, that, given a graph G with m edges, and parameters 0 < φ ≤ 1 and r ≥ 1, such that m^{1/r} ≥ N₁, computes, in time O(m^{O(1/r)+o(1)} · (log m)^{O(r)}), a cut (A, B) in G with |E_G(A, B)| ≤ φ · (log m)^{O(r)} · Vol(G), such that one of the following holds:

• either Vol_G(A), Vol_G(B) ≥ Vol(G)/3; or

• Vol_G(A) ≥ Vol(G)/2, and graph G[A] has conductance at least φ.

Notice that Theorem 1.2 immediately follows from Theorem 7.5. The remainder of this subsection is dedicated to the proof of Theorem 7.5. We set N₁ = 8N₀, where N₀ is the universal constant used in Lemma 7.3 and Lemma 7.4.

We start by using Algorithm ReduceDegree from Section 5.2, in order to construct, in time O(m), a graph Ĝ whose maximum vertex degree is bounded by 10, and |V(Ĝ)| = 2m. Denote V(G) = {v₁, ..., v_n}. Recall that graph Ĝ is constructed from graph G by replacing each vertex v_i with an α-expander H(v_i) on deg_G(v_i) vertices, where α = Θ(1). For convenience, we denote the set of vertices of H(v_i) by V_i. Therefore, V(Ĝ) is the union of the sets V₁, ..., V_n of vertices. Consider now some subset S of vertices of Ĝ. As before, we say that S is a canonical vertex set iff for every 1 ≤ i ≤ n, either V_i ⊆ S or V_i ∩ S = ∅ holds. The main subroutine in the proof of Theorem 7.5 is summarized in the following lemma.

Lemma 7.6.
There is a universal constant c₁ and a deterministic algorithm, that, given a canonical vertex subset V′ ⊆ V(Ĝ) containing at least |V(Ĝ)|/2 vertices of Ĝ, and parameters 0 < ψ < 1 and 0 < z′ < z, such that for every partition (A, B) of V′ with |E_Ĝ(A, B)| ≤ ψ · min{|A|, |B|}, min{|A|, |B|} ≤ z holds, computes a partition (X, Y) of V′, where both X, Y are canonical subsets of V(Ĝ), |X| ≤ |Y| (where possibly X = ∅), |E_Ĝ(X, Y)| ≤ ψ · |X|, and one of the following holds:

• either |X|, |Y| ≥ |V′|/3 (note that this can only happen if z ≥ |V′|/3); or

• for every partition (A′, B′) of the set Y of vertices with |E_Ĝ(A′, B′)| ≤ (ψ/(c₁(log n)^{c₁ r})) · min{|A′|, |B′|}, min{|A′|, |B′|} ≤ z′ must hold (if z′ < 1, then graph Ĝ[Y] is guaranteed to be a ψ/(c₁(log n)^{c₁ r})-expander).

The running time of the algorithm is O((z/z′) · m^{O(1/r)+o(1)} · (log n)^{O(r)}).

Proof. We let c₁ be a large enough constant, whose value we set later, and we let ψ′ = ψ/c₁. We also use a parameter z∗ = z′/(c₀(log n)^{c₀ r}), where c₀ is the constant from Lemma 7.3 and Lemma 7.4. Assume first that z∗ ≥ 1; we discuss the other case later.

Our algorithm is iterative. At the beginning of iteration i, we are given a subgraph G_i ⊆ Ĝ, such that V(G_i) ⊆ V′ is a canonical subset of vertices, and |V(G_i)| ≥ 2|V′|/3; at the beginning of the first iteration, we set G₁ = Ĝ[V′]. At the end of iteration i, we either terminate the algorithm with the desired solution, or we compute a canonical subset S_i ⊆ V(G_i) of vertices, such that |S_i| ≤ |V(G_i)|/2, and |E_{G_i}(S_i, V(G_i) \ S_i)| ≤ ψ · |S_i|/2. We then delete the vertices of S_i from G_i, in order to obtain the graph G_{i+1}, that serves as the input to the next iteration. The algorithm terminates once the current graph G_i contains fewer than 2|V′|/3 vertices.

We now describe the execution of the i-th iteration. We assume that the sets S₁, ..., S_{i−1} of vertices are already computed, and that Σ_{i′<i} |S_{i′}| ≤ |V′|/3. Recall that G_i is the subgraph of Ĝ[V′] that is obtained by deleting the vertices of S₁, ..., S_{i−1} from it. Recall also that we are guaranteed that V(G_i) is a canonical set of vertices, and |V(G_i)| ≥ 2|V′|/3 ≥ |V(Ĝ)|/3 ≥ 2m/3. Since m^{1/r} ≥ N₁ = 8N₀, we are guaranteed that |V(G_i)|^{1/r} ≥ N₀. We apply Lemma 7.3 to graph G_i, with parameters ψ′ and z∗. We now consider two cases.

In the first case, the algorithm from Lemma 7.3 returns a cut (X′, Y′) in graph G_i with |X′|, |Y′| ≥ z∗, and |E_{G_i}(X′, Y′)| ≤ ψ′ · min{|X′|, |Y′|}. We then use Algorithm MakeCanonical from Lemma 5.4 in order to obtain a cut (X′′, Y′′) of G_i, such that both X′′ and Y′′ are canonical vertex sets, |X′′|, |Y′′| ≥ min{|X′|, |Y′|}/2, and |E_{G_i}(X′′, Y′′)| ≤ O(|E_{G_i}(X′, Y′)|). We assume w.l.o.g. that |X′′| ≤ |Y′′|. Notice that, in particular, |X′′| ≥ Ω(z∗), and |E_{G_i}(X′′, Y′′)| ≤ O(ψ′ · |X′′|). Recall that ψ′ = ψ/c₁. By letting c₁ be a large enough constant, we can ensure that |E_{G_i}(X′′, Y′′)| ≤ ψ · |X′′|/2. We set S_i = X′′. If Σ_{i′≤i} |S_{i′}| ≤ |V′|/3, we set G_{i+1} = G_i \ S_i, and continue to the next iteration. Otherwise, we terminate the algorithm, and return the partition (X, Y) of V′, where X = ∪_{i′≤i} S_{i′} and Y = V′ \ X. Recall that we are then guaranteed that |X| ≥ |V′|/3. Moreover, since |V(G_i)| ≥ 2|V′|/3 and |S_i| ≤ |V(G_i)|/2, we are guaranteed that |Y| ≥ |V(G_i)|/2 ≥ |V′|/3. Lastly, our algorithm guarantees that |E_Ĝ(X, Y)| ≤ Σ_{i′≤i} |E_{G_{i′}}(S_{i′}, V(G_{i′}) \ S_{i′})| ≤ ψ · Σ_{i′≤i} |S_{i′}|/2 = ψ|X|/2. Since |Y| ≥ |X|/2, we also get that |E_Ĝ(X, Y)| ≤ ψ|Y|, and altogether, |E_Ĝ(X, Y)| ≤ ψ · min{|X|, |Y|}.

In the second case, the algorithm from Lemma 7.3, when applied to graph G_i, correctly establishes that for every cut (A′, B′) in G_i with Ψ_{G_i}(A′, B′) ≤ ψ′/(log n)^{c₀ r}, min{|A′|, |B′|} < c₀ z∗ · (log n)^{c₀ r} = z′ holds. In this case, we terminate the algorithm and return the partition (X, Y) of V′, where X = ∪_{i′<i} S_{i′} and Y = V(G_i). By letting c₁ be a large enough constant, we can ensure that ψ′/(log n)^{c₀ r} ≥ ψ/(c₁(log n)^{c₁ r}), and so the second guarantee of the lemma holds for Y; as before, |E_Ĝ(X, Y)| ≤ ψ · |X| holds as well.

We now bound the running time for the case z∗ ≥ 1. Observe that the union Z of the deleted sets always satisfies |E_{Ĝ[V′]}(Z, V′ \ Z)| ≤ ψ|Z|, and, as long as |Z| ≤ |V′|/3, it is the smaller side of this partition; therefore, |Z| ≤ z always holds. Since every iteration, except possibly the last one, deletes at least Ω(z∗) vertices, the number of iterations is bounded by O(z/z∗) ≤ O((z/z′) · (log n)^{O(r)}). Every iteration employs the algorithms from Lemma 7.3 and Lemma 5.4, and so its running time is O(m^{O(1/r)+o(1)} · (log n)^{O(r)}). The total running time of the algorithm is then bounded by O((z/z′) · m^{O(1/r)+o(1)} · (log n)^{O(r)}).

It remains to consider the case where z∗ < 1. In this case, in every iteration, we employ Lemma 7.4 instead of Lemma 7.3. The two main differences are that (i) we are no longer guaranteed that each set S_i has large cardinality (the cardinality can be arbitrarily small); and (ii) if the lemma does not return a cut (X′, Y′), then it correctly establishes that the current graph G_i is a ψ′′-expander, for ψ′′ = ψ′/(log n)^{c₀ r} ≥ ψ/(c₁(log n)^{c₁ r}), if we choose c₁ ≥ c₀. This affects our analysis in two ways. First, we need to bound the number of iterations differently — it is now bounded by O(z), since each iteration deletes at least one vertex, and, as shown above, at most z vertices are ever deleted. However, since z∗ < 1, we have z′ ≤ c₀(log n)^{c₀ r}, and so O(z) ≤ O((z/z′) · (log n)^{O(r)}); the running time therefore remains O((z/z′) · m^{O(1/r)+o(1)} · (log n)^{O(r)}). Second, if, in the last iteration, the algorithm from Lemma 7.4 establishes that graph G_i is a ψ′′-expander, then we obtain the partition (X, Y) as before, but now we get the stronger guarantee that Ĝ[Y] is a ψ/(c₁(log n)^{c₁ r})-expander.

We are now ready to complete the proof of Theorem 7.5. Our algorithm consists of at most r iterations, and uses the following parameters. First, we set z₀ = |V(Ĝ)|/2 = m, and, for 1 ≤ i ≤ r, we set z_i = z_{i−1}/m^{1/r}; in particular, z_r = 1 holds. We also define parameters ψ₁, ..., ψ_r, by letting ψ_r = φ and, for all 1 ≤ i < r, setting ψ_i = 8c₁(log |V(Ĝ)|)^{c₁ r} · ψ_{i+1}, where c₁ is the constant from Lemma 7.6.
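As a concrete illustration, the two parameter schedules just defined can be computed as follows (a sketch under our own conventions; `c1` is a numeric placeholder for the constant of Lemma 7.6):

```python
import math

def schedules(m, r, phi, c1=2):
    """Compute z_0,...,z_r and psi_1,...,psi_r as defined above."""
    n_hat = 2 * m                          # |V(G-hat)| after degree reduction
    z = [n_hat / 2]                        # z_0 = |V(G-hat)|/2 = m
    for _ in range(r):
        z.append(z[-1] / m ** (1 / r))     # z_i = z_{i-1} / m^{1/r}
    psi = [phi]                            # psi_r = phi; fill back to psi_1
    for _ in range(r - 1):
        psi.append(8 * c1 * math.log(n_hat) ** (c1 * r) * psi[-1])
    psi.reverse()
    return z, psi

z, psi = schedules(m=2 ** 20, r=4, phi=1e-3)   # z ≈ [2^20, 2^15, 2^10, 2^5, 1]
```

The z_i decrease geometrically by a factor of m^{1/r} down to 1, while the sparsity thresholds ψ_i grow by a polylogarithmic factor per level as i decreases.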
Notice that ψ₁ ≤ φ · (log m)^{O(r)}.

In the first iteration, we apply Lemma 7.6 to the set V′ = V(Ĝ) of vertices, with the parameters ψ = ψ₁, z = z₀, and z′ = z₁. Clearly, for every partition (A, B) of V′ with |E_Ĝ(A, B)| ≤ ψ₁ · min{|A|, |B|}, min{|A|, |B|} ≤ z₀ = m holds, since min{|A|, |B|} ≤ |V′|/2 = m always holds. Assume first that the algorithm from Lemma 7.6 returns a partition (X, Y) of V(Ĝ), where X, Y are canonical subsets of V(Ĝ), |X|, |Y| ≥ |V(Ĝ)|/3 = Vol(G)/3, and |E_Ĝ(X, Y)| ≤ ψ₁ · min{|X|, |Y|}. Let (A, B) be the partition of V(G), defined as follows: for every vertex v_i ∈ V(G), we add v_i to A if V_i ⊆ X, and we add it to B otherwise. Clearly, Vol_G(A) = |X| ≥ Vol(G)/3, and similarly, Vol_G(B) ≥ Vol(G)/3. Moreover, |E_G(A, B)| = |E_Ĝ(X, Y)| ≤ ψ₁ · min{|X|, |Y|} ≤ φ · (log m)^{O(r)} · Vol(G). We then return the cut (A, B) and terminate the algorithm.

We assume from now on that the algorithm from Lemma 7.6 returned a partition (X, Y) of V(Ĝ), where both X, Y are canonical subsets of V(Ĝ), |X| ≤ |Y| (where possibly X = ∅), |E_Ĝ(X, Y)| ≤ ψ₁ · |X|, and the following guarantee holds: for every partition (A′, B′) of the set Y of vertices with |E_Ĝ(A′, B′)| ≤ ψ₂ · min{|A′|, |B′|}, min{|A′|, |B′|} ≤ z₁ must hold (recall that ψ₁/(c₁(log n)^{c₁ r}) = 8ψ₂ ≥ ψ₂). We set S₁ = X, and we let Ĝ₁ = Ĝ \ S₁.

The remainder of the algorithm consists of r − 1 iterations, indexed i = 2, ..., r. The input to the i-th iteration is a subgraph Ĝ_{i−1} ⊆ Ĝ, containing at least half the vertices of Ĝ, such that for every cut (A′, B′) of Ĝ_{i−1} with |E_Ĝ(A′, B′)| ≤ ψ_i · min{|A′|, |B′|}, min{|A′|, |B′|} ≤ z_{i−1} must hold. (Observe that, as established above, this condition holds for graph Ĝ₁.)
The output of the i-th iteration is a canonical subset S_i ⊆ V(Ĝ_{i−1}) of vertices, such that |E_{Ĝ_{i−1}}(S_i, V(Ĝ_{i−1}) \ S_i)| ≤ ψ_i · |S_i|, and, if we set Ĝ_i = Ĝ_{i−1} \ S_i, then we are guaranteed that for every cut (A′′, B′′) of Ĝ_i with |E_Ĝ(A′′, B′′)| ≤ ψ_{i+1} · min{|A′′|, |B′′|}, min{|A′′|, |B′′|} ≤ z_i holds. In order to execute the i-th iteration, we simply apply Lemma 7.6 to the set V′ = V(Ĝ_{i−1}) of vertices, with parameters ψ = ψ_i, z = z_{i−1} and z′ = z_i. As we show later, we will ensure that |V(Ĝ_{i−1})| ≥ |V(Ĝ)|/2 ≥ m. Since, for i ≥ 2, z_{i−1} ≤ m/m^{1/r} < |V(Ĝ_{i−1})|/3, the outcome of the lemma must be a partition (X, Y) of V′, where both X, Y are canonical subsets of V(Ĝ), |X| ≤ |Y| (where possibly X = ∅), |E_Ĝ(X, Y)| ≤ ψ_i · |X|, and we are guaranteed that, for every partition (A′′, B′′) of the set Y of vertices with |E_Ĝ(A′′, B′′)| ≤ ψ_{i+1} · min{|A′′|, |B′′|}, min{|A′′|, |B′′|} ≤ z_i holds. Therefore, we can simply set S_i = X and Ĝ_i = Ĝ_{i−1} \ S_i, and continue to the next iteration, provided that |V(Ĝ_i)| ≥ |V(Ĝ)|/2 holds.

It remains to show that we always have |V(Ĝ_i)| ≥ |V(Ĝ)|/2. Indeed, recall that for all 1 ≤ i′ ≤ i, we guarantee that |E_{Ĝ_{i′−1}}(S_{i′}, V(Ĝ_{i′−1}) \ S_{i′})| ≤ ψ_{i′} · |S_{i′}| ≤ ψ₁ · |S_{i′}|. Therefore, if we denote Z = ∪_{i′≤i} S_{i′} and Z′ = V(Ĝ) \ Z, then |E_Ĝ(Z, Z′)| ≤ ψ₁|Z|. Assume for contradiction that |V(Ĝ_i)| = |Z′| < |V(Ĝ)|/2. Then |Z′| ≥ |Z|/4, since |Z| ≤ |V(Ĝ)|, while |Z′| ≥ |V(Ĝ)|/4 (as |Z′| ≥ |V(Ĝ)|/2 held before the current iteration, and a single iteration removes at most half the remaining vertices). Therefore, |E_Ĝ(Z, Z′)| ≤ ψ₁|Z| ≤ 4ψ₁|Z′| ≤ 8ψ₁ · min{|Z|, |Z′|}.
We have thus obtained a cut (Z, Z′) of Ĝ_2, of sparsity less than 8ψ_2, such that |Z|, |Z′| > z_2, contradicting the fact that such a cut does not exist.

We continue the algorithm until we reach the last iteration, where z_r = 1 holds. When we apply Lemma 7.6 to the final graph Ĝ_r, we obtain a partition (X, Y) of V(Ĝ_r), such that graph Ĝ_r[Y] is guaranteed to be a ψ_r-expander (recall that ψ_r = φ). We let B′ = Y and A′ = V(Ĝ) \ B′. Using the same reasoning as before, we are guaranteed that |B′| ≥ |V(Ĝ)|/2, and that |E_Ĝ(A′, B′)| ≤ ψ_1 · |A′| ≤ φ · (log m)^{O(r)} · Vol(G). As discussed above, we are guaranteed that graph Ĝ[B′] is a φ-expander. Next, we define a cut (A, B) in graph G, as follows. For every vertex v_i ∈ V(G), we add v_i to A if V_i ⊆ A′, and we add it to B otherwise. Clearly, Vol_G(A) = |A′|, and similarly, Vol_G(B) = |B′| ≥ Vol(G)/2. Moreover, |E_G(A, B)| = |E_Ĝ(A′, B′)| ≤ φ · (log m)^{O(r)} · Vol(G). Since graph Ĝ[B′] is a φ-expander, it is immediate to verify that graph G[B] has conductance at least φ.

For all 1 ≤ i ≤ r, the running time of the i-th iteration is O((z_i / z_{i+1}) · m^{O(1/r)+o(1)} · (log n)^{O(r)}) = O(m^{O(1/r)+o(1)} · (log n)^{O(r)}), and the total running time is O(m^{O(1/r)+o(1)} · r · (log n)^{O(r)}) = O(m^{O(1/r)+o(1)} · (log n)^{O(r)}). This concludes the proof of Theorem 7.5.

As observed already, Lemma 7.4 immediately gives a deterministic (log n)^r-approximation algorithm for the Sparsest Cut problem on an n-vertex m-edge graph G, with running time O(m^{O(1/r)+o(1)} · (log n)^{O(r)}), for all r ≤ O(log n), proving Theorem 1.4 for the Sparsest Cut problem. We now show that we can obtain an algorithm with similar guarantees for the Lowest-Conductance Cut problem.
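Before doing so, we note that the overall iteration structure in the proof of Theorem 7.5 can be summarized in a short sketch. The Python below is schematic and ours alone: `cut_or_certify` is a hypothetical stand-in for the algorithm of Lemma 7.6, and the parameter lists `psi` and `z` are left abstract; none of these names describe the paper's actual implementation.

```python
def balanced_cut_driver(vertices, psi, z, r, cut_or_certify):
    """Schematic driver mirroring the proof of Theorem 7.5.

    vertices        -- vertex set of the bounded-degree graph G-hat
    psi[i], z[i]    -- sparsity / balance parameters, indexed 1..r (z[r] = 1)
    cut_or_certify  -- hypothetical stand-in for Lemma 7.6; on input
                       (V', psi_i, z_i, z_next) it returns either
                       ('balanced', X, Y) with both sides large, or
                       ('unbalanced', X, Y) with |X| <= |Y| (X may be empty)
    """
    n = len(vertices)
    current = set(vertices)
    removed = []                      # the sets S_i carved off so far
    for i in range(1, r):
        kind, X, Y = cut_or_certify(current, psi[i], z[i], z[i + 1])
        if kind == 'balanced':
            # both sides are large: translate (X, Y) back to a
            # low-conductance balanced cut of G and stop
            return 'cut', X, Y
        # unbalanced outcome: carve off the small side and continue on Y
        removed.append(X)
        current -= X
        assert 2 * len(current) >= n  # maintained by the counting argument
    # after the last iteration (z_r = 1) the remaining graph is an expander
    return 'expander', current, removed
```

A toy `cut_or_certify` that always reports an unbalanced cut drives the loop to the expander case; one that reports a balanced cut terminates immediately.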
Let G = (V, E) be an input to the Lowest-Conductance Cut problem, with |V| = n and |E| = m, and let φ = Φ(G). We can assume without loss of generality that φ < 1/(c(log n)^r) for some large enough constant c, since otherwise we can let v be a lowest-degree vertex in G, and return the cut ({v}, V \ {v}), whose conductance is 1. We use Algorithm ReduceDegree from Section 5.2, in order to construct, in time O(m), a graph Ĝ, whose maximum vertex degree is bounded by 10, and |V(Ĝ)| = 2m. Note that, if we denote ψ = Ψ(Ĝ), then ψ ≤ φ must hold. This is since every cut (A, B) in G naturally defines a cut (A′, B′) in Ĝ, with |A′| = Vol_G(A), |B′| = Vol_G(B), and |E_Ĝ(A′, B′)| = |E_G(A, B)|. We use our approximation algorithm for the Sparsest Cut problem in graph Ĝ, to obtain a cut (X′, Y′) of Ĝ with Ψ_Ĝ(X′, Y′) ≤ (log n)^r · ψ ≤ (log n)^r · φ, in time O(m^{O(1/r)+o(1)} · (log n)^{O(r)}). Using Algorithm MakeCanonical from Lemma 5.4, we obtain a cut (X″, Y″) of Ĝ, with |X″| ≥ |X′|/2, |Y″| ≥ |Y′|/2, and |E_Ĝ(X″, Y″)| ≤ O(|E_Ĝ(X′, Y′)|), so that Ψ_Ĝ(X″, Y″) ≤ O((log n)^r · φ), and such that both X″ and Y″ are canonical vertex sets. This cut naturally defines a cut (X, Y) in G, with Vol_G(X) = |X″|, Vol_G(Y) = |Y″|, and |E_G(X, Y)| = |E_Ĝ(X″, Y″)|. Therefore, Φ_G(X, Y) ≤ O((log n)^r · φ), and the running time of the algorithm is O(m^{O(1/r)+o(1)} · (log n)^{O(r)}).

Expander Decomposition

Observe that Theorem 1.2 immediately implies an almost-linear time algorithm for computing an (ǫ, φ)-expander decomposition, even for very small ǫ and φ. The proof of the following corollary is almost identical to that of Corollary 6.1 and is omitted here.

Corollary 7.7.
There is a deterministic algorithm that, given a graph G = (V, E) with m edges and parameters ǫ ∈ (0, 1) and 1 ≤ r ≤ O(log m), computes an (ǫ, φ)-expander decomposition of G with φ = Ω(ǫ/(log m)^{O(r)}), in time O(m^{O(1/r)+o(1)} · (log m)^{O(r)}).

A very interesting remaining open problem is to obtain deterministic algorithms for Minimum Balanced Cut, Sparsest Cut and Lowest-Conductance Cut that achieve a polylogarithmic approximation ratio, with running time m^{1+o(1)}. It would also be interesting to obtain deterministic n^{o(1)}-approximation algorithms for these problems with running time Õ(m). The latter result would imply a near-linear time deterministic algorithm for computing an expander decomposition, matching the performance of the best current randomized algorithm of [SW19].

It is typically desirable for dynamic graph algorithms to have polylogarithmic update time. Our result for dynamic connectivity (Theorem 1.5) only guarantees n^{o(1)} update time. Designing a deterministic algorithm with polylogarithmic update time for dynamic connectivity remains a major open problem. In fact, it is already very interesting to achieve such bounds with a Las Vegas randomized algorithm. It is also very interesting to design a Monte Carlo randomized algorithm for maintaining a spanning forest in polylogarithmic update time that does not need the so-called oblivious adversary assumption. We remark that even if one could implement an algorithm for Theorem 1.2 with running time Õ(m) and approximation factor O(polylog n), this would not immediately imply any of the above goals. The reason is that there are several components in the algorithm of Nanongkai et al. [NSW17] that each incur an n^{o(1)} factor in the update time.

Our deterministic algorithm for spectral sparsifiers from Corollary 6.4 only achieves an n^{o(1)}-approximation factor.
It is an intriguing open question whether (1 + ǫ)-approximate cut/spectral sparsifiers can be computed deterministically in almost-linear time. It is also interesting whether there is a deterministic O(√log n)-approximation algorithm for Lowest-Conductance Cut whose running time matches that of the best currently known randomized algorithm, which is O(m^{1+ǫ}), for an arbitrarily small constant ǫ > 0. (Recall that it was shown by Kapron et al. [KKM13] that a spanning forest can be maintained in polylogarithmic update time by a Monte Carlo randomized algorithm under the oblivious adversary assumption.)

Acknowledgements

This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme under grant agreement No. 715672. Nanongkai was also partially supported by the Swedish Research Council (Reg. No. 2015-04659). Chuzhoy was supported in part by NSF grant CCF-1616584. Gao and Peng were supported in part by NSF grant CCF-1718533.

A Proof of Theorem 2.5

In this section we prove Theorem 2.5. The proof is practically identical to that in [KKOV07], but, since the algorithm is slightly different, we present it here for completeness. We denote by H_i the graph H obtained after i iterations of the cut-matching game. Therefore, graph H_0 has a set V of vertices and no edges, and for all i ≥ 1, graph H_i is defined over the same set V of vertices, while the set E(H_i) of edges is the union of i matchings M_1, ..., M_i, where for 1 ≤ i′ ≤ i, matching M_{i′} is a perfect matching between two equal-cardinality subsets A_{i′}, B_{i′} of V. Notice that for every vertex v ∈ V, for all 1 ≤ i′ ≤ i, there is exactly one edge of M_{i′} that is incident to v.

Consider a random walk in graph H_i that starts at an arbitrary vertex v = v_0.
For all 1 ≤ i′ ≤ i, at step i′, with probability 1/2, the random walk stays at the current vertex v_{i′−1} (so v_{i′} = v_{i′−1}), and with probability 1/2, it moves to the unique vertex v_{i′} that is connected to v_{i′−1} with an edge of M_{i′}. We denote by p_{i′}(v, u) the probability that the random walk that starts at v is located at vertex u after i′ steps.

For a vertex v ∈ V and index i, we define the potential Φ_i(v) = Σ_{u∈V} p_i(v, u) · log(1/p_i(v, u)). In other words, Φ_i(v) is the entropy of the distribution {p_i(v, u)}_{u∈V}. Clearly, Φ_0(v) = 0, and for all i, Φ_i(v) ≤ log n. Finally, we define the total potential at the end of iteration i:

Φ_i = Σ_{v∈V} Φ_i(v).

From the above discussion, Φ_0 = 0, and for all i, Φ_i ≤ O(n log n). In order to complete the proof of Theorem 2.5, it is sufficient to prove the following claim.

Claim A.1. Let i be any iteration in which the cut player computed a partition (A_i, B_i) of V(H_{i−1}) with |B_i| ≥ |A_i| ≥ n/4 and |E_{H_{i−1}}(A_i, B_i)| ≤ n/100. Let (A′_i, B′_i) be any partition of V into two equal-cardinality subsets such that A_i ⊆ A′_i, and let M_i be any perfect matching between A′_i and B′_i. Let H_i be the graph obtained from H_{i−1} by adding the edges of M_i to it. Then Φ_i ≥ Φ_{i−1} + Ω(n).

Since the initial potential Φ_0 = 0, and the potential increases by Ω(n) in every iteration, the number of iterations is bounded by O(log n), as the potential may never exceed O(n log n). It now remains to prove Claim A.1. The proof is almost identical to that in [KKOV07].

Proof of Claim A.1. For convenience, we denote E′ = E_{H_{i−1}}(A_i, B_i). Notice that |E′| ≤ n/100 ≤ |A_i|/25 must hold. For a vertex v ∈ V and a subset Y ⊆ V of vertices, we let P(v, Y) = Σ_{u∈Y} p_{i−1}(v, u) be the probability that the random walk that we defined above is located at a vertex of Y at the end of the (i − 1)-st step, when started from v.
Similarly, for two disjoint subsets X, Y of vertices of V, we denote by P(X, Y) = Σ_{v∈X} P(v, Y).

Consider now the following experiment. We place one unit of mass on every vertex v ∈ A_i, and then perform (i − 1) iterations. For all 1 ≤ i′ < i, in order to perform iteration i′, we consider every vertex a ∈ V and the mass µ(a) that is currently located at vertex a. We keep half of this mass at vertex a, and the remaining half of the mass is moved to the unique vertex b ∈ V such that (a, b) ∈ M_{i′}.

It is easy to verify (using induction) that, over the course of this experiment, at every time step, the amount of mass at every given vertex a ∈ V is at most 1, and moreover, at most 1 unit of mass is moved across any edge in every iteration. Notice that P(A_i, B_i) is precisely the amount of mass that is located at the vertices of B_i after the end of the (i − 1)-st iteration. Consider now some edge e ∈ E′. There must be a unique index 1 ≤ i′ < i such that e ∈ M_{i′} (we consider parallel edges as separate edges). Mass can be transferred along the edge e only in iteration i′, and only one unit of mass can be transferred across it then. Therefore, the total amount of mass that is located at the vertices of B_i at the end of iteration (i − 1) is at most |E′| ≤ |A_i|/25. Equivalently, P(A_i, B_i) ≤ |A_i|/25.

We say that a vertex a ∈ A_i is interesting iff P(a, B_i) ≤ 1/4. Then at least |A_i|/2 of the vertices of A_i are interesting. Indeed, otherwise, we would have more than |A_i|/2 vertices a ∈ A_i with P(a, B_i) > 1/4, so P(A_i, B_i) > |A_i|/8 would hold, contradicting the fact that P(A_i, B_i) ≤ |A_i|/25.

Consider now some interesting vertex a ∈ A_i. Recall that P(a, B_i) ≤ 1/4, and, therefore, P(a, A′_i) ≥ 3/4. Consider the matching M_i, and some matched pair e = (u, v) ∈ M_i with u ∈ A′_i, v ∈ B′_i. Denote p = p_{i−1}(a, u), and q = p_{i−1}(a, v). We define the weight of the edge e with respect to a to be w_a(e) = p. Note that Σ_{e∈M_i} w_a(e) = P(a, A′_i) ≥ 3/4. We say that e is a good edge with respect to a iff p ≥ 2q. Let E′(a) ⊆ M_i be the set of all edges that are good with respect to a, and let E″(a) = M_i \ E′(a).
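Both quantitative steps of this argument can be sanity-checked numerically. The sketch below is ours, not the paper's: it simulates the mass-transfer experiment on an arbitrary toy instance, checking that no vertex ever holds more than one unit and that at most one unit crosses any matching edge, and it evaluates the entropy gain of merging probabilities p and q, the quantity analyzed in Claim A.3 below, confirming that it stays bounded below by a positive constant times p when q ≤ p/2 (the constant 0.1 in the check is our own choice for the test, not a value claimed in the paper).

```python
import math

def run_mass_experiment(n, A, matchings):
    """Mass-transfer experiment from the proof of Claim A.1: one unit of
    mass starts on each vertex of A; in each round, every vertex keeps
    half of its mass and sends the other half to its matching partner."""
    mass = [1.0 if v in A else 0.0 for v in range(n)]
    for M in matchings:
        new_mass = list(mass)
        for a, b in M:
            # total mass crossing edge (a, b) in this round is at most 1
            assert mass[a] / 2 + mass[b] / 2 <= 1 + 1e-9
            new_mass[a] = new_mass[b] = (mass[a] + mass[b]) / 2
        mass = new_mass
        assert all(x <= 1 + 1e-9 for x in mass)  # never more than 1 unit anywhere
    return mass

def entropy_gain(p, q):
    """Potential increase of Claim A.3:
    (p+q) log(2/(p+q)) - p log(1/p) - q log(1/q), logs in base 2."""
    return ((p + q) * math.log2(2 / (p + q))
            - p * math.log2(1 / p) - q * math.log2(1 / q))
```

For example, starting one unit of mass on vertices {0, 1} of a 4-vertex toy instance and halving across two matchings spreads the mass uniformly while conserving its total.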
Claim A.2. For every interesting vertex a, Σ_{e∈E′(a)} w_a(e) ≥ 1/4.

Proof. Note that:

Σ_{e∈E″(a)} w_a(e) = Σ_{e=(u,v)∈E″(a)} p_{i−1}(a, u) ≤ Σ_{e=(u,v)∈E″(a)} 2 · p_{i−1}(a, v) ≤ 2 · P(a, B′_i) ≤ 1/2,

while P(a, A′_i) ≥ 3/4, so Σ_{e∈E′(a)} w_a(e) ≥ P(a, A′_i) − Σ_{e∈E″(a)} w_a(e) ≥ 1/4.

Consider an edge e = (u, v) ∈ M_i that is good with respect to an interesting vertex a, with the corresponding probabilities p and q. The pairs (a, u) and (a, v) originally contribute p · log(1/p) + q · log(1/q) to the potential Φ_{i−1}(a), and will contribute (p + q) · log(2/(p + q)) to Φ_i(a), since p_i(a, u) = p_i(a, v) = (p + q)/2. The key claim is that the increase in the potential due to these pairs is at least Ω(p):

Claim A.3. Let a be an interesting vertex, and let (u, v) ∈ M_i be a good edge for a, with u ∈ A′_i. Denote p = p_{i−1}(a, u) and q = p_{i−1}(a, v). Then:

(p + q) · log(2/(p + q)) − p · log(1/p) − q · log(1/q) ≥ Ω(p).

If the above claim is correct, then for each interesting vertex a, we get that Φ_i(a) − Φ_{i−1}(a) ≥ Σ_{e∈E′(a)} Ω(w_a(e)) ≥ Ω(1). Since the number of interesting vertices a ∈ A_i is Ω(n), we get that Φ_i − Φ_{i−1} ≥ Ω(n).

It now remains to prove the claim. Denote S = (p + q) · log(2/(p + q)) − p · log(1/p) − q · log(1/q). By regrouping the terms, we can write:

S = p · log(2p/(p + q)) + q · log(2q/(p + q)).

Denoting q = αp, for some 0 < α ≤ 1/2, it is now enough to show that there is some constant c > 0, independent of α, such that:

log(2/(1 + α)) + α · log(2α/(1 + α)) ≥ c.

Rewriting log(2/(1 + α)) = log(1 + (1 − α)/(1 + α)) and log(2α/(1 + α)) = log(1 − (1 − α)/(1 + α)), and using the Taylor expansion for ln(1 + ǫ), completes the proof.

References

[ABN08] Ittai Abraham, Yair Bartal, and Ofer Neiman.
Nearly tight low stretch spanning trees. In 49th Annual IEEE Symposium on Foundations of Computer Science, FOCS 2008, pages 781–790. IEEE, 2008.

[ACL07] Reid Andersen, Fan R. K. Chung, and Kevin J. Lang. Using pagerank to locally partition a graph. Internet Mathematics, 4(1):35–64, 2007.

[AHK10] Sanjeev Arora, Elad Hazan, and Satyen Kale. O(√log n) approximation to Sparsest Cut in Õ(n²) time. SIAM J. Comput., 39(5):1748–1771, 2010.

[Alo86] Noga Alon. Eigenvalues and expanders. Combinatorica, 6(2):83–96, 1986.

[ALO15] Zeyuan Allen Zhu, Zhenyu Liao, and Lorenzo Orecchia. Spectral sparsification and regret minimization beyond matrix multiplicative updates. In Proceedings of the Forty-Seventh Annual ACM Symposium on Theory of Computing, STOC 2015, pages 237–245, 2015.

[AN12] Ittai Abraham and Ofer Neiman. Using petal-decompositions to build a low stretch spanning tree. In Proceedings of the 44th Symposium on Theory of Computing Conference, STOC 2012, pages 395–406, 2012.

[ARV09] Sanjeev Arora, Satish Rao, and Umesh V. Vazirani. Expander flows, geometric embeddings and graph partitioning. J. ACM, 56(2):5:1–5:37, 2009.

[BK02] András Benczúr and David R. Karger. Randomized approximation schemes for cuts and flows in capacitated graphs. 2002.

[BSS12] Joshua Batson, Daniel A. Spielman, and Nikhil Srivastava. Twice-Ramanujan sparsifiers. SIAM Journal on Computing, 41(6):1704–1721, 2012.

[CC13] Chandra Chekuri and Julia Chuzhoy. Large-treewidth graph decompositions and applications. In Symposium on Theory of Computing Conference, STOC 2013, pages 291–300, 2013.

[CC16] Chandra Chekuri and Julia Chuzhoy. Polynomial bounds for the grid-minor theorem. J. ACM, 63(5):40:1–40:65, 2016.

[CGP+18] Timothy Chu, Yu Gao, Richard Peng, Sushant Sachdeva, Saurabh Sawlani, and Junxing Wang. Graph sparsification, spectral sketches, and faster resistance computation, via short cycle decompositions.
In 59th IEEE Annual Symposium on Foundations of Computer Science, FOCS 2018, pages 361–372, 2018.

[CK19] Julia Chuzhoy and Sanjeev Khanna. A new algorithm for decremental single-source shortest paths with applications to vertex-capacitated flow and cut problems. In STOC, pages 389–400. ACM, 2019.

[CL16] Julia Chuzhoy and Shi Li. A polylogarithmic approximation algorithm for edge-disjoint paths with congestion 2. J. ACM, 63(5):45:1–45:51, 2016.

[CMSV17] Michael B. Cohen, Aleksander Madry, Piotr Sankowski, and Adrian Vladu. Negative-weight shortest paths and unit capacity minimum cost flow in Õ(m^{10/7} log W) time (extended abstract). In SODA, pages 752–771. SIAM, 2017.

[CS19] Yi-Jun Chang and Thatchaphol Saranurak. Improved distributed expander decomposition and nearly optimal triangle enumeration. In Proceedings of the 2019 ACM Symposium on Principles of Distributed Computing, PODC 2019, pages 66–73, 2019.

[dCSHS16] Marcel Kenji de Carli Silva, Nicholas J. A. Harvey, and Cristiane M. Sato. Sparse sums of positive semidefinite matrices. ACM Trans. Algorithms, 12(1):9:1–9:17, 2016.

[Din06] Yefim Dinitz. Dinitz' algorithm: The original version and Even's version. In Theoretical Computer Science, pages 218–240. Springer, 2006.

[DS08] Samuel I. Daitch and Daniel A. Spielman. Faster approximate lossy generalized flow via interior point algorithms. In Proceedings of the 40th Annual ACM Symposium on Theory of Computing, STOC 2008, pages 451–460, 2008. ACM. Available at http://arxiv.org/abs/0803.0988.

[EGIN97] David Eppstein, Zvi Galil, Giuseppe F. Italiano, and Amnon Nissenzweig. Sparsification - a technique for speeding up dynamic graph algorithms. J. ACM, 44(5):669–696, 1997.

[ES81] Shimon Even and Yossi Shiloach. An on-line edge-deletion problem. Journal of the ACM (JACM), 28(1):1–4, 1981.

[Fle00] Lisa Fleischer. Approximating fractional multicommodity flow independent of the number of commodities. SIAM J.
Discrete Math., 13(4):505–520, 2000. Announced at FOCS'99.

[Fre85] Greg N. Frederickson. Data structures for on-line updating of minimum spanning trees, with applications. SIAM J. Comput., 14(4):781–798, 1985. Announced at STOC'83.

[GG81] Ofer Gabber and Zvi Galil. Explicit constructions of linear-sized superconcentrators. J. Comput. Syst. Sci., 22(3):407–420, 1981. Announced at FOCS'79.

[GKKT15] David Gibb, Bruce M. Kapron, Valerie King, and Nolan Thorn. Dynamic graph connectivity with improved worst case update time and sublinear space. CoRR, abs/1509.06464, 2015.

[GLN+19] Yu Gao, Jason Li, Danupon Nanongkai, Richard Peng, Thatchaphol Saranurak, and Sorrachai Yingchareonthawornchai. Deterministic graph cuts in subquadratic time: Sparse, balanced, and k-vertex. arXiv preprint arXiv:1910.07950, 2019.

[GR98] Andrew V. Goldberg and Satish Rao. Beyond the flow decomposition barrier. J. ACM, 45(5):783–797, 1998.

[GR99] Oded Goldreich and Dana Ron. A sublinear bipartiteness tester for bounded degree graphs. Combinatorica, 19(3):335–373, 1999.

[HdLT01] Jacob Holm, Kristian de Lichtenberg, and Mikkel Thorup. Poly-logarithmic deterministic fully-dynamic algorithms for connectivity, minimum spanning tree, 2-edge, and biconnectivity. J. ACM, 48(4):723–760, 2001. Announced at STOC 1998.

[HHKP17] Shang-En Huang, Dawei Huang, Tsvi Kopelowitz, and Seth Pettie. Fully dynamic connectivity in O(log n (log log n)²) amortized expected time. In SODA, 2017.

[HK97] Monika Rauch Henzinger and Valerie King. Maintaining minimum spanning trees in dynamic graphs. In ICALP, volume 1256 of Lecture Notes in Computer Science, pages 594–604. Springer, 1997.

[HK99] Monika Rauch Henzinger and Valerie King. Randomized fully dynamic graph algorithms with polylogarithmic time per operation. J. ACM, 46(4):502–516, 1999. Announced at STOC 1995.

[HRW17] Monika Henzinger, Satish Rao, and Di Wang. Local flow partitioning for faster edge connectivity.
In Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2017, pages 1919–1938, 2017.

[HT97] Monika Rauch Henzinger and Mikkel Thorup. Sampling to provide or to bound: With applications to fully dynamic graph algorithms. Random Struct. Algorithms, 11(4):369–379, 1997.

[Kar08] George Karakostas. Faster approximation schemes for fractional multicommodity flow problems. ACM Trans. Algorithms, 4(1):13:1–13:17, 2008.

[Kin08] Valerie King. Fully dynamic connectivity. In Encyclopedia of Algorithms. Springer, 2008.

[Kin16] Valerie King. Fully dynamic connectivity. In Encyclopedia of Algorithms, pages 792–793. 2016.

[KKM13] Bruce M. Kapron, Valerie King, and Ben Mountjoy. Dynamic graph connectivity in polylogarithmic worst case time. In Proceedings of the Twenty-Fourth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2013, pages 1131–1142, 2013.

[KKOV07] Rohit Khandekar, Subhash Khot, Lorenzo Orecchia, and Nisheeth K. Vishnoi. On a cut-matching game for the sparsest cut problem. Univ. California, Berkeley, CA, USA, Tech. Rep. UCB/EECS-2007-177, 2007.

[KKPT16] Casper Kejlberg-Rasmussen, Tsvi Kopelowitz, Seth Pettie, and Mikkel Thorup. Faster worst case deterministic dynamic connectivity. In 24th Annual European Symposium on Algorithms, ESA 2016, pages 53:1–53:15, 2016.

[KLOS14] Jonathan A. Kelner, Yin Tat Lee, Lorenzo Orecchia, and Aaron Sidford. An almost-linear-time algorithm for approximate max flow in undirected graphs, and its multicommodity generalizations. In Proceedings of the Twenty-Fifth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2014, pages 217–226, 2014.

[KLP+16] Rasmus Kyng, Yin Tat Lee, Richard Peng, Sushant Sachdeva, and Daniel A. Spielman. Sparsified Cholesky and multigrid solvers for connection Laplacians.
In Proceedings of the 48th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2016, pages 842–850, 2016.

[KP15] Donggu Kang and James Payor. Flow rounding. CoRR, abs/1507.08139, 2015.

[KPSW19] Rasmus Kyng, Richard Peng, Sushant Sachdeva, and Di Wang. Flows in almost linear time via adaptive preconditioning. In Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing, STOC 2019, pages 902–913, 2019.

[KRV09] Rohit Khandekar, Satish Rao, and Umesh V. Vazirani. Graph partitioning using single commodity flows. J. ACM, 56(4):19:1–19:15, 2009.

[KS16] Rasmus Kyng and Sushant Sachdeva. Approximate Gaussian elimination for Laplacians - fast, sparse, and simple. In IEEE 57th Annual Symposium on Foundations of Computer Science, FOCS 2016, pages 573–582, 2016.

[KT19] Ken-ichi Kawarabayashi and Mikkel Thorup. Deterministic edge connectivity in near-linear time. J. ACM, 66(1):4:1–4:50, 2019.

[KVV04] Ravi Kannan, Santosh Vempala, and Adrian Vetta. On clusterings: Good, bad and spectral. J. ACM, 51(3):497–515, 2004.

[LR99] Frank Thomson Leighton and Satish Rao. Multicommodity max-flow min-cut theorems and their use in designing approximation algorithms. J. ACM, 46(6):787–832, 1999.

[LS17] Yin Tat Lee and He Sun. An SDP-based algorithm for linear-sized spectral sparsification. In Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2017, pages 678–687, 2017.

[LSY19] Yang P. Liu, Sushant Sachdeva, and Zejun Yu. Short cycles via low-diameter decompositions. In SODA, pages 2602–2615. SIAM, 2019.

[Mad10a] Aleksander Madry. Fast approximation algorithms for cut-based problems in undirected graphs. In FOCS, pages 245–254. IEEE Computer Society, 2010.

[Mad10b] Aleksander Madry.
Faster approximation schemes for fractional multicommodity flow problems via dynamic graph algorithms. In Proceedings of the 42nd ACM Symposium on Theory of Computing, STOC 2010, pages 121–130, 2010.

[Mad13] Aleksander Madry. Navigating central path with electrical flows: From flows to matchings, and back. In Foundations of Computer Science (FOCS), 2013 IEEE 54th Annual Symposium on, pages 253–262. IEEE, 2013. Available at http://arxiv.org/abs/1307.2205.

[Mad16] Aleksander Madry. Computing maximum flow with augmenting electrical flows. In FOCS, pages 593–602. IEEE Computer Society, 2016.

[Mar73] G. A. Margulis. Explicit construction of concentrators. Problemy Peredachi Informatsii, 9(4):71–80, 1973. (English translation in Problems Inform. Transmission (1975)).

[NS17] Danupon Nanongkai and Thatchaphol Saranurak. Dynamic spanning forest with worst-case update time: adaptive, Las Vegas, and O(n^{1/2−ǫ})-time. In Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2017, pages 1122–1129, 2017.

[NSW17] Danupon Nanongkai, Thatchaphol Saranurak, and Christian Wulff-Nilsen. Dynamic minimum spanning forest with subpolynomial worst-case update time. In FOCS, pages 950–961. IEEE Computer Society, 2017.

[OA14] Lorenzo Orecchia and Zeyuan Allen Zhu. Flow-based algorithms for local graph clustering. In Proceedings of the Twenty-Fifth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2014, pages 1267–1286, 2014.

[OSV12] Lorenzo Orecchia, Sushant Sachdeva, and Nisheeth K. Vishnoi. Approximating the exponential, the Lanczos method and an Õ(m)-time spectral algorithm for balanced separator.
In Proceedings of the 44th Symposium on Theory of Computing Conference, STOC 2012, pages 1141–1160, 2012.

[OV11] Lorenzo Orecchia and Nisheeth K. Vishnoi. Towards an SDP-based approach to spectral methods: A nearly-linear-time algorithm for graph partitioning and decomposition. In Proceedings of the Twenty-Second Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2011, pages 532–545, 2011.

[PD06] Mihai Patrascu and Erik D. Demaine. Logarithmic lower bounds in the cell-probe model. SIAM J. Comput., 35(4):932–963, 2006. Announced at SODA'04 and STOC'04.

[Pen16] Richard Peng. Approximate undirected maximum flows in O(m polylog(n)) time. In Proceedings of the Twenty-Seventh Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2016, pages 1862–1867, 2016.

[PT07] Mihai Patrascu and Mikkel Thorup. Planning for fast connectivity updates. In FOCS, pages 263–271. IEEE Computer Society, 2007.

[Räc02] Harald Räcke. Minimizing congestion in general networks. In 43rd Symposium on Foundations of Computer Science, FOCS 2002, pages 43–52, 2002.

[RST14] Harald Räcke, Chintan Shah, and Hanjo Täubig. Computing cut-based hierarchical decompositions in almost linear time. In Proceedings of the Twenty-Fifth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2014, pages 227–238, 2014.

[She09] Jonah Sherman. Breaking the multicommodity flow barrier for O(√log n)-approximations to sparsest cut. In 50th Annual IEEE Symposium on Foundations of Computer Science, FOCS 2009, pages 363–372, 2009.

[She13] Jonah Sherman. Nearly maximum flows in nearly linear time. In FOCS, pages 263–269. IEEE Computer Society, 2013.

[She17] Jonah Sherman. Area-convexity, l∞ regularization, and undirected multicommodity flow. In Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2017, pages 452–460, 2017.

[SS11] D. Spielman and N. Srivastava.
Graph sparsification by effective resistances. SIAM Journal on Computing, 40(6):1913–1926, 2011.

[ST83] Daniel Dominic Sleator and Robert Endre Tarjan. A data structure for dynamic trees. J. Comput. Syst. Sci., 26(3):362–391, 1983.

[ST03] Daniel A. Spielman and Shang-Hua Teng. Solving sparse, symmetric, diagonally-dominant linear systems in time O(m^{1.31}). In 44th Symposium on Foundations of Computer Science, FOCS 2003, pages 416–427, 2003.

[ST04] Daniel A. Spielman and Shang-Hua Teng. Nearly-linear time algorithms for graph partitioning, graph sparsification, and solving linear systems. In STOC, pages 81–90. ACM, 2004.

[ST11] Daniel A. Spielman and Shang-Hua Teng. Spectral sparsification of graphs. SIAM J. Comput., 40(4):981–1025, 2011.

[ST14] Daniel A. Spielman and Shang-Hua Teng. Nearly linear time algorithms for preconditioning and solving symmetric, diagonally dominant linear systems. SIAM J. Matrix Analysis Applications, 35(3):835–885, 2014.

[SW19] Thatchaphol Saranurak and Di Wang. Expander decomposition and pruning: Faster, stronger, and simpler. In SODA, pages 2616–2635. SIAM, 2019.

[Tho00] Mikkel Thorup. Near-optimal fully-dynamic graph connectivity. In F. Frances Yao and Eugene M. Luks, editors, Proceedings of the Thirty-Second Annual ACM Symposium on Theory of Computing, STOC 2000, pages 343–350. ACM, 2000.

[Tre05] Luca Trevisan. Approximation algorithms for unique games. In 46th Annual IEEE Symposium on Foundations of Computer Science, FOCS 2005, pages 197–205. IEEE, 2005.

[Wul13] Christian Wulff-Nilsen. Faster deterministic fully-dynamic graph connectivity. In Proceedings of the Twenty-Fourth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2013, pages 1757–1769, 2013.

[Wul17] Christian Wulff-Nilsen. Fully-dynamic minimum spanning forest with improved worst-case update time. In