Deterministic Decremental Reachability, SCC, and Shortest Paths via Directed Expanders and Congestion Balancing
Aaron Bernstein, Maximilian Probst Gutenberg, Thatchaphol Saranurak
Aaron Bernstein∗ (Rutgers University, [email protected])
Maximilian Probst Gutenberg† (University of Copenhagen, [email protected])
Thatchaphol Saranurak (Toyota Technological Institute at Chicago, [email protected])
Abstract
Let G = (V, E, w) be a weighted, directed graph subject to a sequence of adversarial edge deletions. In the decremental single-source reachability problem (SSR), we are given a fixed source s and the goal is to maintain a data structure that can answer path queries s ⇝ v for any v ∈ V. In the more general single-source shortest paths (SSSP) problem, the goal is to return an approximate shortest path to v, and in the SCC problem the goal is to maintain the strongly connected components of G and to answer path queries within each component. All of these problems have been very actively studied over the past two decades, but all the fast algorithms are randomized and, more significantly, they can only answer path queries if they assume a weaker model: an oblivious adversary which is not adaptive and must fix the update sequence in advance. This assumption significantly limits the use of these data structures, most notably preventing them from being used as subroutines in static algorithms.

All of the above problems are notoriously difficult in the adaptive setting. In fact, the state of the art is still the Even and Shiloach tree, which dates all the way back to 1981 [ES81] and achieves total update time O(mn). We present the first algorithms to break through this barrier:

• deterministic decremental SSR/SCC with total update time mn^{2/3+o(1)};
• deterministic decremental SSSP with total update time n^{2+2/3+o(1)}.

To achieve these results, we develop two general techniques for working with dynamic graphs. The first generalizes expander-based tools to dynamic directed graphs. While these tools have already proven very successful in undirected graphs, the underlying expander decomposition they rely on does not exist in directed graphs. We thus need to develop an efficient framework for using expanders in directed graphs, as well as overcome several technical challenges in processing directed expanders.
We establish several powerful primitives that we hope will pave the way for other expander-based algorithms in directed graphs.

The second technique, which we call congestion balancing, provides a new method for maintaining flow under adversarial deletions. The results above use this technique to maintain an embedding of an expander. The technique is quite general, and to highlight its power, we use it to achieve the following additional result:

• the first near-optimal algorithm for decremental bipartite matching.

∗ This work was done while funded by NSF Award 1942010 and the Simons Group for Algorithms & Geometry.
† The author is supported by Basic Algorithms Research Copenhagen (BARC), supported by Thorup's Investigator Grant from the Villum Foundation under Grant No. 16582, and by a start-up grant of Rasmus Kyng at ETH Zurich.

Contents
5.3 Subroutines Used by Algorithm Robust-Witness
5.4 Embedding a Witness that Obeys Edge Capacities
5.5 Analysis of Robust-Witness (Algorithm 3)
CutOrCertify in Directed Graphs
10 Conclusion
11 Acknowledgements
A Proofs Omitted From Main Body of Conference Submission
A.1 Analysis of Algorithm 1
A.2 Proof of Theorem 1.3
B Implementation of Flow Subroutines
B.1 Flow Notations
B.2 Bounded Height Push-Relabel and Blocking Flow
B.3 The Common Framework
B.4 Flow Subroutines
B.4.1 Local Flow
B.4.2 Global Flow
B.4.3 Flow for Matching
B.4.4 Flow for Vertex Cuts
C Proof of Proposition 4.1
D Proof of Theorem 4.4
E Short-path Oracles on Expanders
E.1 Embedding A Small Witness
E.2 A Recursive Scheme
Introduction
Let G = (V, E, w) be a weighted, directed graph that is subject to dynamic updates that change the edges of G. We consider three closely related problems. In single-source reachability (SSR), we are given a fixed source s, and the goal is to maintain a data structure that can answer path queries s ⇝ v for any v ∈ V. The single-source shortest path problem (SSSP) is a generalization of SSR where the goal is to return an approximate shortest path from s to v. Finally, in dynamic strongly connected components (SCC), the goal is to maintain a data structure such that, given any two vertices u, v ∈ V, it can determine whether they are in the same SCC, i.e. whether u and v are on a common cycle in G, and if so, can report a path between them in either direction.

All three of the above problems have received an enormous amount of attention in the dynamic setting. The most general model is the fully dynamic one, where each adversarial update can either insert or delete an edge of G. But in this model there are very strong conditional lower bounds for all the above problems [AW14, HKNS15]. For this reason, much of the work on these problems focuses on the weaker decremental model, where the algorithm is given some input graph G = (V, E, w), and the adversary deletes one edge at a time until the graph is empty. Here, results are typically expressed in terms of the total update time over the entire sequence of deletions. Let n be the number of vertices in the original input graph and m the number of edges. The first algorithm for these problems is the Even and Shiloach tree [ES81] from 1981, which achieves total update time O(mn) (amortized O(n)); see [HK95] for a simple extension to directed graphs. A long line of work has since led to near-optimal algorithms for these problems in undirected graphs, including some in the fully dynamic model [Fre85, HK99, HdLT01, Tho00, PD04, NS17, Wul17, NSW17, CGL+20]. In directed graphs, a sequence of results culminated in near-optimal total update time Õ(m) for decremental SSR/SCC [HKN14b, HKN15, CHI+
16, IKLS17, BPWN19] and moderate improvements for decremental SSSP: for example, total update time Õ(mn^{3/4}) in [GW20] and an extremely recent Õ(n^2) result [BGW20]. But all of the above o(mn) algorithms for directed graphs suffer from a crucial drawback: they are randomized, and more significantly, they are only able to return paths if they assume an oblivious adversary. Such an adversary cannot change its updates based on the algorithm's answers to path queries; put otherwise, the adversary must fix its entire update sequence in advance. Much of the recent work in the field of dynamic graphs as a whole has focused on developing so-called adaptive algorithms that do not assume an oblivious adversary. This is important for two reasons. Firstly, adaptive algorithms work in a less restrictive model. Secondly, several recent papers have used dynamic graph algorithms as subroutines within the multiplicative-weight-update method to speed up static algorithms; for example, decremental shortest paths to speed up various (static) flow algorithms [Mad10, CK19, CS20], or incremental min-cut to speed up a TSP algorithm [CQ17]. These applications to static algorithms all require adaptive dynamic algorithms. Despite all the progress for non-adaptive algorithms, the fastest adaptive algorithm for all the directed problems mentioned above remains the Even and Shiloach tree from 1981, which has total update time O(mn). In this paper, we present the first algorithms to break through this barrier.

Theorem 1.1.
Let G be a directed graph. There exists an algorithm for decremental single-source reachability and decremental strongly connected components (SCC) with total update time mn^{2/3+o(1)}. The SCC algorithm not only explicitly maintains SCCs, but can answer path queries within an SCC. The algorithms can, respectively, determine whether a vertex v is reachable from s, or whether two vertices are in the same SCC, in O(1) time. The time to answer a path query is |P| · n^{o(1)}, where |P| is the length of the (simple) output path.

Theorem 1.2. Let G be a directed graph with positive weights, and let W be the ratio of the largest to smallest weight. There exists an algorithm for decremental (1 + ǫ)-approximate single-source shortest paths with total update time n^{2+2/3+o(1)} log(W)/ǫ. (An update can delete an edge or increase an edge weight.) The query time is O(1) for returning an approximate distance and |P| · n^{o(1)} for an approximate path, where |P| is the length of the (simple) output path.

Related Work
Probst Gutenberg and Wulff-Nilsen considered a relaxed version of decremental SSSP that can only return distance estimates, not an actual path. They showed an adaptive (randomized) algorithm for this problem with total update time Õ(m^{3/4}n^{5/4}) = Õ(n^{11/4}) [GW20]. The adaptivity of this result crucially depends on the assumption that the adversary cannot see the paths used by the algorithm, so these results cannot be extended to the problems we are solving in this paper. Secondly, there are several results (both adaptive and oblivious) on dynamic SCC/SSSP in the incremental setting, where the algorithm starts with an empty graph and edges are inserted one at a time (see e.g. [HKM+
12, BFGT15, BC18, GWW20]). These incremental-only results use a very different set of techniques that do not transfer to the decremental setting.

Directed expanders, the key objects in this paper, are closely related to the notion of directed tree-width introduced in [Ree99, JRST01], which is a key concept in deep structural statements, including the directed grid-minor theorem [KK15, HKK19] and the directed Erdős–Pósa theorem [RRST96, AKKW16, MMP+]. The approximation algorithm for a variant of the disjoint-paths problem by [CE15] exploits the directed well-linked decomposition, which is related to the directed expander decomposition stated in this paper. However, their technique is static and not concerned with time-efficiency beyond polynomial time.
Our techniques are mostly very different from those of the earlier randomized algorithms, because those crucially relied on "hiding" their choices from an oblivious adversary. Our algorithms instead rely on expander-based tools. While these have previously been used to break long-standing barriers for adaptive algorithms in dynamic undirected graphs [NS17, Wul17, NSW17, CK19, CS20], our paper is the first to successfully apply them to dynamic algorithms for directed graphs. Our results require a large number of new techniques; we highlight the most significant ones below.
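For context, the O(mn) Even–Shiloach baseline mentioned above admits a compact implementation. The sketch below is a simplified unweighted single-source reachability variant (the class name and all details are illustrative, not taken verbatim from [ES81]): each vertex keeps a BFS level that only ever increases, and a deletion triggers a local repair; since a level increase of a vertex rescans only its incident edges and levels are bounded by n, the total time over any deletion sequence is O(mn).

```python
from collections import defaultdict, deque

class EvenShiloachTree:
    """Decremental single-source reachability via BFS levels (ES-tree style).

    level[v] only ever increases; a level increase of v rescans the edges
    incident to v, and levels are bounded by n, so the total time over any
    sequence of deletions is O(mn) (amortized O(n) per deletion).
    """

    def __init__(self, n, edges, s):
        self.n, self.INF = n, n          # level n means "unreachable"
        self.out, self.inn = defaultdict(set), defaultdict(set)
        for u, v in edges:
            self.out[u].add(v)
            self.inn[v].add(u)
        self.level = [self.INF] * n
        self.level[s] = 0
        q = deque([s])
        while q:                          # initial BFS from s
            u = q.popleft()
            for v in self.out[u]:
                if self.level[v] == self.INF:
                    self.level[v] = self.level[u] + 1
                    q.append(v)

    def delete_edge(self, u, v):
        self.out[u].discard(v)
        self.inn[v].discard(u)
        q = deque([v])
        while q:
            w = q.popleft()
            lw = self.level[w]
            if lw == 0 or lw == self.INF:
                continue
            # w is fine if some in-neighbor still sits at level lw - 1
            if any(self.level[p] == lw - 1 for p in self.inn[w]):
                continue
            self.level[w] = min(lw + 1, self.INF)
            q.append(w)                   # recheck w at its new level
            for x in self.out[w]:
                if self.level[x] == lw + 1:
                    q.append(x)           # x may have relied on w as parent

    def reachable(self, v):
        return self.level[v] < self.INF
```

A fully dynamic variant for weighted SSSP follows the same pattern with distance labels in place of BFS levels; the point of this paper is precisely to beat this O(mn) bound adaptively.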
An efficient framework for directed expanders (Section 4)
Expander-based algorithms in undirected graphs rely on the following basic decomposition: given any graph G = (V, E), it is possible to partition E into sets X and R, such that X is the union of disconnected expanders and |R| ≪ |E|. The idea is then to use expander tools on X and deal with the small set R separately. Unfortunately, such a guarantee is not possible for directed graphs: if G is a dense DAG, then R must contain all the edges of G.

This paper explicitly shows the following decomposition for directed graphs: E can be partitioned into three sets X, D, R such that X is the union of disconnected (directed) expanders, D is acyclic, and |R| ≪ |E|. (We actually use an analogous decomposition for vertex expanders.) We then use this decomposition as the crux of our new framework, which weaves together new fast algorithms for directed expanders with existing fast algorithms for DAGs. We hope that this framework will pave the way for future work that applies expander tools to directed graphs. In particular, directed expanders are graphs that contain a large well-linked set [CE15, CEP18], and the directed tree-width of a graph is approximated, up to a constant, by the maximum size over all well-linked sets [Ree99].

Congestion-balancing flow (Section 5) One of our main technical contributions is a new approach to maintaining a large flow in the presence of adversarial edge deletions (it is new to undirected graphs as well). Intuitively, a flow solution is more robust if it spreads out the congestion among all the edges of the graph. There are, however, two main challenges to formalizing this intuition. The first is that some edges may be more "crucial" than others, and so will necessarily have higher congestion. The second is that these crucial edges might change over time, whereupon the flow must be rebalanced. We introduce a general approach for efficiently computing the "right" congestion of each edge.
We then show that a potential function based on minimum-cost flow allows us to cleanly analyze the total amount of rebalancing necessary. In our decremental SSR/SCC/SSSP results, we use congestion-balancing flow to maintain an embedding of an expander. But the technique is quite general, and to highlight its power, we use it to achieve significantly improved bounds for the seemingly unrelated problem of decremental bipartite matching (see below).
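The congestion-balancing idea can be illustrated with a self-contained toy for bipartite flow (all parameter choices, function names, and the use of exact Edmonds–Karp max-flow are simplifications of this sketch; the paper's actual subroutine is far more careful about efficiency). Capacities start uniformly low, and whenever the flow value is below the target, only the edges crossing a minimum cut — exactly the currently "crucial" edges — have their capacity doubled:

```python
from collections import deque

def max_flow(n, cap, s, t):
    """Plain Edmonds-Karp on a capacity matrix.  Returns (value, flow,
    parent), where parent is the BFS tree of the final residual search:
    vertices with parent[v] != -1 form the source side of a minimum cut."""
    flow = [[0.0] * n for _ in range(n)]
    value = 0.0
    while True:
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] - flow[u][v] > 1e-12:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            return value, flow, parent
        b, v = float("inf"), t            # bottleneck along the BFS path
        while v != s:
            b = min(b, cap[parent[v]][v] - flow[parent[v]][v])
            v = parent[v]
        v = t
        while v != s:
            flow[parent[v]][v] += b
            flow[v][parent[v]] -= b
            v = parent[v]
        value += b

def rebalance(n, cap, edges, s, t, target):
    """Congestion balancing, toy version: while the flow is below the
    target, double the capacity of every low-capacity middle edge that
    crosses the current minimum cut, i.e. exactly the 'crucial' edges.
    Capacities only grow and are capped at 1, so over the whole deletion
    sequence each edge is doubled only logarithmically many times."""
    while True:
        value, _, parent = max_flow(n, cap, s, t)
        if value >= target - 1e-9:
            return value
        grew = False
        for (u, v) in edges:
            if cap[u][v] > 0 and parent[u] != -1 and parent[v] == -1 and cap[u][v] < 1.0:
                cap[u][v] = min(1.0, 2.0 * cap[u][v])
                grew = True
        if not grew:          # the min cut uses only unit edges: flow is maximum
            return value
```

An edge deletion simply zeroes its capacity, after which `rebalance` is called again with the new target; the monotone, selective capacity growth is what bounds the total rebalancing work in the analysis.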
New Primitives for Directed Expanders (Sections 6 and 7)
Our new framework requires generalizing the essential expander primitives to directed graphs. While some of the primitives transfer almost automatically (e.g. unit flow), others pose significant technical challenges. We highlight two in particular.

In expander pruning (Section 6) we are given an expander G = (V, E) subject to adversarial edge deletions. The goal of pruning is to dynamically maintain a set of pruned vertices P ⊆ V such that the induced graph G[V \ P] remains an expander. There are two known approaches to pruning in undirected graphs [NSW17, SW19], but both break down in directed graphs because a sparse cut in one direction may not be sparse in the other. Our approach takes inspiration from [NSW17], but requires a different key subroutine to work in directed graphs. In addition to generalizing the result of [NSW17], our approach also ends up being simpler and cleaner.

The cut-matching game (Section 7) is a well-known tool for certifying expansion of graphs and was first introduced in [KRV09]. There are two state-of-the-art variants: one is randomized but works in directed graphs [Lou10], while the second, more recent variant is deterministic but limited to undirected graphs [CGL+20]. We generalize the deterministic cut-matching game of [CGL+
20] to work in directed graphs. Both our pruning result and our new cut-matching game are stated as black-box results that can easily be plugged into other algorithms. Given how essential these tools have proven in undirected graphs, we think it is likely that our contributions will prove useful for future work on directed expanders.
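To make the one-directional difficulty concrete, the following brute-force check (purely illustrative; δ_out(S) and δ_in(S) count the edges leaving and entering S, and vol(S) counts edge endpoints inside S) builds a tiny digraph in which the cut S = {0, 1} has a single outgoing edge but four incoming ones — sparse in one direction, dense in the other:

```python
from itertools import combinations

def cut_weights(edges, S):
    """(delta_out(S), delta_in(S)): edges leaving / entering S in a digraph."""
    d_out = sum(1 for (u, v) in edges if u in S and v not in S)
    d_in = sum(1 for (u, v) in edges if u not in S and v in S)
    return d_out, d_in

def vol(edges, S):
    """vol(S): number of edge endpoints (in- plus out-degree) inside S."""
    return sum((u in S) + (v in S) for (u, v) in edges)

def brute_force_sparsity(n, edges):
    """min over cuts with vol(S) <= vol(complement) of
    min(delta_in, delta_out) / vol(S): the graph's directed edge expansion."""
    best = float("inf")
    for k in range(1, n):
        for comb in combinations(range(n), k):
            S = set(comb)
            vS = vol(edges, S)
            if vS == 0 or vS > vol(edges, set(range(n)) - S):
                continue
            d_out, d_in = cut_weights(edges, S)
            best = min(best, min(d_in, d_out) / vS)
    return best

# S = {0, 1} has one outgoing edge but four incoming ones: the cut is
# sparse in only one direction.
edges = [(0, 1), (1, 0), (2, 3), (3, 2),   # two strongly connected pairs
         (0, 2),                            # the lone edge leaving S
         (2, 0), (2, 1), (3, 0), (3, 1)]   # many edges entering S
d_out, d_in = cut_weights(edges, {0, 1})
```

Because sparsity takes the minimum of the two directions, cutting along S is cheap even though the reverse direction is dense — which is exactly why undirected pruning arguments, which charge deleted volume against a single symmetric cut value, break down here.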
Near-Optimal Decremental Bipartite Matching As mentioned above, along the way to our main results we develop improved algorithms for dynamic matching. Consider the problem of maintaining a (1 − ǫ)-approximate maximum matching in an unweighted dynamic graph. In the fully dynamic setting, although there is a wide literature on faster update times for larger approximations, the best known update time for a (1 − ǫ)-approximation is O(√m) [GP13], and there is evidence that O(√m) is a hard barrier to break through [HKNS15, KPP16]. For this reason, there has been a series of upper and lower bounds in the more relaxed incremental model, where the algorithm starts with an empty graph and edges are only inserted [Dah16, BLSZ14, Gup14, GLS+19]. This line of work includes a (1 − ǫ)-approximation with amortized O(log n) update time in bipartite graphs [Gup14], later improved to O(1) update time in general graphs [GLS+19]. In the decremental setting, however, O(√m) remained the best known. We show that a simple application of our congestion-balancing flow technique yields a near-optimal algorithm for (1 − ǫ)-approximate matching in decremental bipartite graphs; achieving a similar result for non-bipartite graphs remains an open problem. See Section 5.1 for details.

Theorem 1.3.
Let G be an unweighted bipartite graph. There exists a decremental algorithm with total update time O(m log(n)/ǫ) (amortized O(log(n)/ǫ)) that maintains an integral matching M of value at least µ(G)(1 − ǫ), where G always refers to the current version of the graph. The algorithm is randomized, but works against an adaptive adversary; if we allow the algorithm to return a fractional matching instead of an integral one, then it is deterministic.

Preliminaries

We usually refer to n as the number of vertices in a graph. We use Õ(·) and Ω̃(·) to hide polylog(n) factors in the big-oh notation. Similarly, we use Ô(·) and Ω̂(·) to hide n^{o(1)} factors. Graphs in this paper are directed. Given a graph G, the reverse graph G^{rev} of G is obtained from G by reversing the direction of every edge in G. For any subsets S, T ⊆ V, E(S, T) is the set of directed edges (u, v) where u ∈ S and v ∈ T. Let G[S] denote the induced subgraph on S. Let w : E → R be an edge weight function of G. Given F ⊆ E, let w(F) = Σ_{e∈F} w(e) be the total weight of F; more generally, for any function g on the edges, g(F) = Σ_{e∈F} g(e). The weighted in-degree and out-degree of a vertex u are deg_in(u) = w(E(V, u)) and deg_out(u) = w(E(u, V)), respectively. The weighted degree of u is deg(u) = deg_in(u) + deg_out(u). The volume of a set S is vol(S) = Σ_{u∈S} deg(u). (Several of our subroutines on expanders will use small fractional weights.) For any S with vol(S) ≤ vol(V \ S), we refer to (S, V \ S) as a cut in G. Let δ_out(S) = w(E(S, V \ S)) and δ_in(S) = w(E(V \ S, S)) denote the total weight of edges going out of and coming in to S, respectively. We say that the cut (S, V \ S) is ǫ-balanced if vol(S) ≥ ǫ · vol(V), and that it is φ-sparse if min{δ_in(S), δ_out(S)} < φ · vol(S). We say that (L, S, R) is a vertex-cut of G if L, S, and R partition the vertex set V, and either E(L, R) = ∅ or E(R, L) = ∅.
Assuming that |L| ≤ |R|, (L, S, R) is ǫ-vertex-balanced if |L| ≥ ǫ|V|, and it is φ-vertex-sparse if |S| < φ|L|. We add the subscript G to these notations whenever it is not clear which graph we are referring to.

We say that a data structure supports SCC path-queries in G if, given vertices u and v, it either correctly reports that u and v are not strongly connected in G in O(1) time, or returns a directed simple path P_{uv} from u to v and a directed simple path P_{vu} from v to u. We say that the data structure has almost path-length query time if, whenever a path P is returned, the data structure takes only Ô(|P|) time to output the path. We emphasize that the returned path must be simple: otherwise one could arbitrarily increase the length of the returned path through cycles, and it would then be trivial to achieve almost path-length query time.

A decremental graph G is a graph undergoing a sequence of deletions of edges and of isolated vertices. There is an easy reduction from decremental SSR with source s to decremental SCC: just add an edge from every v ∈ V to s.

We start with the definition of directed expanders, which are the central objects of this paper.

Definition 3.1 (Expanders). A directed graph G is a φ-vertex expander if it has no φ-vertex-sparse vertex-cut. Similarly, G is a φ-(edge) expander if it has no φ-sparse cut.

Intuitively, expanders are graphs that are "robustly connected" and, in particular, they are strongly connected. It is well known that many problems become much easier on expanders. So, given a problem on a general graph, we would like to reduce the problem to expanders. It turns out that every undirected graph admits the following expander decomposition: for any φ >
0, a Õ(φ)-fraction of vertices/edges can be removed so that the remaining graph is a set of vertex-disjoint φ-vertex/edge expanders. Unfortunately, this is impossible in directed graphs: consider, for example, a dense DAG. However, a DAG is the only obstacle; for any φ >
0, we can remove a Õ(φ)-fraction of vertices/edges so that the remaining part can be partitioned into a DAG and a set of vertex-disjoint φ-vertex/edge expanders. This observation can be made precise as follows.

Fact 3.2 (Directed Expander Decomposition). Let G = (V, E) be any directed n-vertex graph and φ > 0 be a parameter. There is a partition {R, X_1, ..., X_k} of V such that

1. |R| ≤ O(φn log n);
2. G[X_i] is a φ-vertex expander for each i;
3. letting D be obtained from G by deleting R and contracting each X_i, D is a DAG.

The edge version of Fact 3.2 can be stated as follows: for any unweighted m-edge graph G = (V, E), there is a partition {X_1, ..., X_k} of V and a set R ⊆ E with |R| ≤ O(φm log m) such that each G[X_i] is a φ-expander and D is a DAG (where D is defined as above). It can be generalized to weighted graphs as well.

This decomposition motivates the framework of our algorithm, although for the sake of efficiency we only maintain an approximate version (see Invariant 4.2 below). The decomposition suggests that we need four main ingredients:

1. a dynamic expander decomposition in directed graphs,
2. a fast algorithm on vertex expanders,
3. a fast algorithm on DAGs, and
4. a way to deal with the small remaining part R.

Our algorithm will run in time Ô(m|R|) = Ô(mn^{2/3}), as we choose φ = n^{−1/3}. Note that we do not work with edge expanders, because then R would have size |R| = Õ(φm), which is too big for us. See Section 4 for how all the components fit together.

Here, let us focus on fast algorithms on expanders. One of our main tasks is to certify that a given (sub)graph G is a vertex expander. This leads us to the notion of embedding:

Definition 3.3 (Embedding and Embedded Graph). Let G = (V, E) be a directed graph. An embedding P in G is a collection of simple directed paths in G where each path P ∈ P has an associated value val(P) >
0. We say that P has length len if every path P ∈ P contains at most len edges. We say that P has vertex-congestion cong if, for every vertex v ∈ V, Σ_{P∈P_v} val(P) ≤ cong, where P_v is the set of paths in P containing v. We say that P has edge-congestion cong if, for every edge e ∈ E, Σ_{P∈P_e} val(P) ≤ cong, where P_e is the set of paths in P containing e.

Given an embedding P, there is a corresponding weighted directed graph W where, for each path P ∈ P from u to v, there is a directed edge (u, v) with weight val(P). We call W the embedded graph corresponding to P, and say that P embeds W into G.

(Two footnotes from this part of the paper: first, an isolated vertex is an expander, in both the edge and vertex versions; second, although the decomposition of Fact 3.2 is easy to prove by simply recursively cutting a φ-sparse cut, to the best of our knowledge it was never explicitly stated before.)

The following fact shows that, to certify that G is a vertex expander, it is enough to embed an (edge) expander W into G with small congestion.

Fact 3.4.
Let G = (V, E) be a graph, and let W = (V, E′, w) be a φ-expander with minimum weighted degree 1. If W can be embedded into G with vertex congestion cong, then G is a (φ/cong)-vertex expander.

Proof. Consider a vertex cut (
L, S, R) in G where |L| ≤ |R|. Suppose that E(L, R) = ∅; otherwise E(R, L) = ∅ and the proof is symmetric. Observe that each edge e ∈ E_W(L, V \ L) in W corresponds to a path in G that goes out of L and hence must contain some vertex of S. So the total weight of these edges in W can be at most δ_W^{out}(L) ≤ |S| · cong. At the same time, δ_W^{out}(L) ≥ φ · vol_W(L) ≥ φ|L|, as W is a φ-expander with minimum weighted degree 1. So |S| ≥ (φ/cong)|L|, as desired.

In our actual algorithm, instead of certifying that G is a vertex expander (i.e. that G has no sparse vertex-cut), we relax the task to only certifying that G has no balanced sparse vertex-cut. This motivates the definition of a φ-witness, which is used throughout the paper:

Definition 3.5 (Witness). We say that W is a φ-witness of G if V(W) ⊆ V(G), W is an Ω̂(1)-(edge) expander in which every vertex has weighted degree between 9/10 and
2, and there is an embedding of W into G with vertex-congestion 1/φ. (Note that E(W) does not have to be a subset of E(G).) We say that W is a φ-short-witness if it is a φ-witness and the embedding has length Ô(1/φ). We say that W is a large witness if |V(W)| ≥ 9|V(G)|/10.

We sometimes informally refer to a graph that contains a large witness as an almost vertex-expander. This is because of the below fact, whose proof is similar to that of Fact 3.4.
Fact 3.6.
Let G = (V, E) be a graph that contains a large φ-witness W. Then G has no (1/4)-vertex-balanced (φ/n^{o(1)})-vertex-sparse vertex cut.

Now we have reduced the problem of certifying an almost vertex-expander to maintaining a large witness. Although finding a low-congestion embedding in vertex expanders can be done very efficiently in the static setting (using the well-known cut-matching game), there is one crucial obstacle in the dynamic setting.

Consider the following simple scenario. We start with a complete graph G and a parameter φ = Ω̂(1). A standard (static) construction of a large φ-witness runs in Ô(m) time and gives an unweighted Ω̂(1)-expander W in which all vertex degrees are Θ(log(n)). Let P be the embedding of W. Observe that each path in P has value 1 and |P| = O(n log n).

Unfortunately, once the adversary knows P, it can destroy each embedding path P ∈ P by deleting any single edge on P. In total, it needs to delete only O(n log n) edges of G to destroy the whole embedding of W. The algorithm would then have to construct a new witness, which the adversary could again destroy with O(n log n) deletions. This process continues until G has a balanced, sparse vertex-cut, which might not happen until Ω(n^2) deletions. That is, this standard approach requires the algorithm to re-embed a new witness Ω̃(n) times, which is not only slow, but requires too many changes to the witness.

(Footnote: the constant 9/10 is somewhat arbitrary.)
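The congestion bookkeeping that the witness machinery revolves around (Definition 3.3, Fact 3.4) is simple to state in code. The sketch below represents an embedding as a list of (vertex-sequence, value) pairs — an illustrative representation, not the paper's — and computes its vertex- and edge-congestion; by Fact 3.4, a small vertex congestion of an expander embedding immediately yields a bound |S| ≥ (φ/cong)|L| for every vertex cut.

```python
from collections import defaultdict

def congestions(embedding):
    """Vertex- and edge-congestion of an embedding, given as a list of
    (path, value) pairs where a path is a sequence of vertices."""
    at_vertex = defaultdict(float)
    at_edge = defaultdict(float)
    for path, val in embedding:
        for v in path:
            at_vertex[v] += val           # every path through v charges val(P)
        for e in zip(path, path[1:]):     # consecutive vertex pairs = edges
            at_edge[e] += val
    return max(at_vertex.values()), max(at_edge.values())

# Toy embedding of a 3-vertex witness: two unit-value paths and one
# half-value path.
embedding = [([0, 1, 2], 1.0), ([2, 1, 0], 1.0), ([0, 2], 0.5)]
```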
To overcome this obstacle, we use an idea called congestion balancing to maintain a witness W that only needs to be re-embedded Õ(1/φ) times throughout the entire sequence of deletions (formally stated in Theorem 4.3). As a warm-up to the proof of Theorem 4.3, we show in Section 5.1 how to apply this idea to the simpler bipartite matching problem.

In this section, we state all the algorithmic components formally and show how to combine them to prove Theorem 1.1. As we mentioned in Section 3, our framework needs (1) a dynamic expander decomposition, (2) a fast algorithm on vertex expanders, (3) a fast algorithm on DAGs, and (4) a way to deal with the small remaining part Ŝ.

It turns out that the existing algorithm of Łącki (unrelated to expanders) for separating out any small set of vertices [Lac11] is a handy tool for taking care of the DAG part and the small remaining part, and it allows us to focus on almost vertex-expanders. This algorithm has previously been used in a similar way in [CHI+16].

Proposition 4.1 (see [Lac11, CHI+16]). Let G = (V, E) be a decremental graph. Let A be a data structure that maintains a monotonically growing set S ⊆ V, and that after every adversarial update reports any additions made to S, maintains the SCCs of G \ S explicitly in total update time T(m, n), and supports SCC path queries in G \ S in almost-path-length query time. Then there exists a data structure B that maintains the SCCs of G explicitly and supports SCC path-queries in G (in almost-path-length query time). The total update time is O(T(m, n) + m|S| log n), where |S| refers to the final size of the set S.

We usually use G to denote the input graph of each subroutine; we denote the input to the top-level algorithm by G* = (V*, E*). Motivated by the directed expander decomposition from Fact 3.2 and Łącki's reduction above, we maintain the following invariant:

Invariant 4.2.
Our decremental SCC algorithm will maintain an incremental set Ŝ such that |Ŝ| = Ô(n^{2/3}) and, at the end of processing any update, if the (non-singleton) SCCs of G \ Ŝ are C_1, ..., C_k, then each C_i contains a large Ω̂(1/n^{1/3})-short-witness. To ensure that Ŝ remains small, the algorithm will only add a set S to Ŝ if S corresponds to some sparse vertex cut (L, S, R).

Robust Witness via Congestion-Balancing
Let G be some SCC of G* \ Ŝ at some point during the update sequence. To preserve Invariant 4.2, we need a subroutine that maintains a large φ-witness of G, where φ = Ω̂(1/n^{1/3}). If the subroutine fails to find such a witness, it returns an Ω(1/n^{o(1)})-balanced, φ-sparse vertex-cut (L, S, R); that is, it certifies that G is far from being a vertex expander and must be further decomposed. (In particular, the top-level algorithm will add S to the boundary set Ŝ and recurse on both L and R.) Our new technique, congestion-balancing flow, allows us to construct a robust witness that is suitable for the dynamic setting; see Section 5 for more details.

Theorem 4.3 (Robust Witness Maintenance). There is a deterministic algorithm
Robust-Witness(G, φ) that takes as input a directed decremental n-vertex graph G and a parameter φ ∈ (0, 1/log(n)]. The algorithm maintains a large (weighted) φ-short-witness W of G using Ô(m/φ) total update time, such that every edge weight in W is a positive multiple of 1/d for some number d ≤ d_avg, where d_avg is the initial average degree of G. The total edge weight in W is O(n log n). After every edge deletion, the algorithm either updates W or outputs a (φn^{o(1)})-vertex-sparse (1/n^{o(1)})-vertex-balanced vertex-cut and terminates.

Let W^{(i)} be W after the i-th update. There exists a set R of reset indices with |R| = Ô(φ^{−1}), such that for each i ∉ R, W^{(i)} ⊇ W^{(i+1)}. That is, the algorithm has Ô(φ^{−1}) phases such that, within each phase, W is a decremental graph. The algorithm reports when each phase begins. It explicitly maintains the embedding P of W into G and reports all changes made to W and P.

The reason that W only shrinks within each phase is as follows. Whenever the adversary deletes some edge e on an embedded path P that corresponds to an edge e′ of W, we delete e′ from W. To guarantee that W remains an expander after edge deletions, we run our new expander pruning algorithm for directed graphs (Theorem 6.1) on W, which further removes a small part from W and guarantees that the remainder is still an expander. Nevertheless, after too many deletions, W will be too small and we need to re-embed W.

To highlight the strength of this result, the above theorem shows that we only need to re-embed a witness Ô(φ^{−1}) times throughout the entire sequence of deletions, whereas the standard technique might require Ω̃(n) re-embeddings in the worst case, as mentioned in Section 3.

Maintaining Short Distances from a Witness
Consider some SCC G of G*[V* \ Ŝ] with a large φ-witness W. We build two separate data structures on G. The first, given any vertex u ∈ V(G) \ V(W), returns a path between u and some w ∈ V(W). The second can answer path queries for any w_1, w_2 ∈ V(W). It is easy to see that the two combined can answer SCC path-queries in G. The statement of the first data structure is a bit subtle; we give a formal theorem, followed by some intuition for what the theorem statement means. (See Section D for the proof.)

Theorem 4.4.
There is a data structure
Forest-From-Witness(G, W, φ) that takes as input an n-vertex m-edge graph G = (V, E), a set W ⊆ V with |W| ≥ |V|/2, and a parameter φ > 0. The algorithm must process two kinds of updates. The first deletes any edge e from E; the second removes a vertex from W (but the vertex remains in V), while always obeying the promise that |W| ≥ |V|/2. The data structure must maintain a forest of trees F_out such that every tree T ∈ F_out has the following properties: all edges of T are in E(G); T is rooted at a vertex of W; every edge in T is directed away from the root; and T has depth Ô(1/φ). The data structure also maintains a forest F_in with the same properties, except each edge in T is directed towards the root.
At any time, the data structure may perform the following operation: it finds an Ô(φ)-sparse vertex cut (L, S, R) with W ∩ (L ∪ S) = ∅ and replaces G with G[R]. (This operation is NOT an adversarial update, but is rather the responsibility of the data structure.) The data structure maintains the invariant that every v ∈ V is present in exactly one tree from F_out and exactly one from F_in; given any v, the data structure can report the roots of these trees in O(log(n)) time. (Note that as V may shrink over time, this property only needs to hold for vertices v in the current set V.) The total time spent processing updates and performing sparse-cut operations is Ô(m/φ).
Although the data structure works for any set W, in the higher-level algorithm W will always correspond to a φ-witness. The adversarial update that removes a vertex from W corresponds to the event that the witness shrinks in the higher-level algorithm. The forests F_in and F_out allow the algorithm to return paths of length Ô(1/φ) from any v ∈ V(G) to/from W: find the tree that contains v and follow the path to the root, which is always in W. The requirement that each tree has low depth will be necessary to reduce the update time.
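The forests can be pictured as truncated multi-source BFS trees grown from W. Below is a minimal static sketch of this picture (the function name `witness_forest` and its interface are our own illustration; the actual data structure maintains the trees dynamically and deterministically under updates):

```python
from collections import deque

def witness_forest(n, edges, W, depth_cap, reverse=False):
    """Multi-source BFS from the witness set W, truncated at depth_cap.

    Returns (parent, root, far): parent/root pointers realizing one tree per
    reached vertex, each tree rooted in W with edges directed away from the
    root (an F_out-style forest; pass reverse=True for F_in), plus the set of
    vertices not reached within depth_cap (candidates for the sparse-cut
    operation of the theorem above)."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        if reverse:
            u, v = v, u                 # F_in: follow edges backwards
        adj[u].append(v)
    parent = {w: None for w in W}
    root = {w: w for w in W}
    depth = {w: 0 for w in W}
    q = deque(W)
    while q:
        u = q.popleft()
        if depth[u] == depth_cap:
            continue                    # truncate: trees must stay shallow
        for v in adj[u]:
            if v not in depth:
                depth[v] = depth[u] + 1
                parent[v], root[v] = u, root[u]
                q.append(v)
    far = set(range(n)) - set(depth)
    return parent, root, far
```

Following parent pointers from any reached vertex gives the short path to/from its root in W; vertices left in `far` are exactly those the next paragraph must deal with.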
But once we add this requirement, we encounter the issue that some vertices may be very far from W, so we need to give the data structure a way to remove them from V(G). This is the role of the sparse-cut operation: we will show in the proof that if v is far from W, it is always possible to find a sparse vertex cut (L, S, R) such that v is in L and hence removed from G. (The higher-level algorithm will process this operation by adding S to Ŝ, so that L becomes part of a different SCC in G*[V* \ Ŝ].)
Maintaining Paths Inside the Witness
The second data structure shows how to maintain short paths between all pairs of vertices in an (edge) expander. The input W will always correspond to a large φ-witness, and will thus have expansion 1/n^{o(1)}. This data structure is not new to our paper, as it is essentially identical to an analogous structure for undirected graphs in [CS20]. The only major difference is that we need to plug in our new expander pruning algorithm for directed graphs (Theorem 6.1). Note that the theorem below only allows us to find paths in E(W), not E(G); we show later how to use the embedding of W to convert them to paths in E(G).
Theorem 4.5.
There is a deterministic data structure
Path-Inside-Expander(W) that takes as input an n-vertex m-edge 1/n^{o(1)}-expander W subject to decremental updates. Each update can delete an arbitrary batch of vertices and edges from W, but must obey the promise that the resulting graph remains a 1/n^{o(1)}-expander. Given any query u, v ∈ V(W), the algorithm returns in n^{o(1)} time a directed simple path P_{uv} from u to v and a directed simple path P_{vu} from v to u, both of length at most n^{o(1)}. The total update time of the data structure is Ô(m).
The Algorithm
The proof of Theorem 1.1 combines all the above ingredients. See Algorithm 1 for pseudocode.
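To illustrate how the pieces fit together at query time, here is a toy sketch of composing a path query from parent-pointer forests (F_in/F_out, as in Theorem 4.4) and a `witness_path` oracle standing in for Path-Inside-Expander plus the embedding of W into G. All names are our own illustration; note that the returned walk need not be simple:

```python
def tree_path_to_root(parent, v):
    # Walk up parent pointers; for F_in this is a directed path v -> root,
    # for F_out it is the root -> v path listed in reverse.
    path = [v]
    while parent[path[-1]] is not None:
        path.append(parent[path[-1]])
    return path

def query(u, v, parent_in, root_in, parent_out, root_out, witness_path):
    """Return a (not necessarily simple) directed u -> v walk:
    u -> w1 inside an F_in tree, w1 -> w2 inside the witness,
    then w2 -> v inside an F_out tree."""
    up = tree_path_to_root(parent_in, u)            # u -> w1 (toward root)
    w1, w2 = root_in[u], root_out[v]
    mid = witness_path(w1, w2)                      # w1 -> w2 inside W
    down = tree_path_to_root(parent_out, v)[::-1]   # w2 -> v (away from root)
    return up + mid[1:] + down[1:]
```

On a directed triangle 0→1→2→0 with W = {0}, `query(1, 2, ...)` returns the walk 1→2→0→1→2, which is a valid directed walk but visits 1 and 2 twice, matching the complication addressed in the analysis sketch.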
Analysis Sketch
The full details of the analysis are left for Section A.1. The argument has three main parts. The first is that each call SCC-Helper(G) re-initializes the data structures in Line 14 only Ô(1/φ*) times, since that is the number of phases in Robust-Witness (Theorem 4.3). The second is that every time a vertex v participates in a new call SCC-Helper(G), |V(G)| must have decreased by a (1 − 1/n^{o(1)}) factor, so v participates in Ô(1) calls. The third is that we always have |Ŝ| = Ô(nφ*) = Ô(n^{2/3}), because vertices added to Ŝ always correspond to a φ*-sparse cut.
The basic idea for the query is that given any u, v in some SCC C ∈ C with witness W, we use Forest-From-Witness to find paths from u and v to W and use Path-Inside-Expander to complete the path inside W. The complication is that the resulting path P might not be simple. We can always extract a simple path P′ ⊆ P, but the query time would be proportional to |P|, not |P′|. We thus use a more clever query procedure; see Section 8 for details.
Comparison to Previous Work
Our framework combines many old and new techniques, so we briefly categorize them. Proposition 4.1 and Theorem 4.4 follow from ideas in two earlier papers [Lac11, CHI+16] that are unrelated to expanders. Theorem 4.5 generalizes easily from an existing result for undirected graphs [CS20], but only once our new directed primitives are in place. Our primary new contributions are threefold: (1) a new framework which integrates dynamic expander decomposition with earlier tools for directed graphs in [Lac11, CHI+16]; (2) robust witness maintenance and congestion-balancing flow; and (3) new primitives for directed expanders – especially directed expander pruning (Theorem 6.1) and the cut-matching game (Theorem 7.1) – which are crucial for Theorems 4.3 and 4.5 in this section.
In this section, we present algorithm Robust-Witness from Theorem 4.3. The algorithm has several components, but the main innovation is a new approach we call congestion-balancing flow.

Algorithm 1: Maintaining an SCC-oracle for the main graph G* (Theorem 1.1)
1   Initialize Ŝ ← ∅, φ* ← n^{−1/3}, C ← {V*}   // C is the collection of SCCs in G* \ Ŝ
2   Initialize the framework of Proposition 4.1
3   Run SCC-Helper(G*)   // will always run SCC-Helper(C) for every SCC C ∈ C
4   Procedure Setup for SCC-Helper(G)
5       Initialize Robust-Witness(G, φ*); let W be the large φ*-witness maintained
6       Initialize Path-Inside-Expander(W)
7       Initialize Forest-From-Witness(G, W, φ*)
8   Procedure Updating the data structures in SCC-Helper(G)
9       All adversarial edge deletions are fed to Robust-Witness and Forest-From-Witness
10      if Robust-Witness in Line 5 terminates with a cut (L, S, R) then
11          Ŝ ← Ŝ ∪ S; remove V(G) from C; add L, R to C
12          Initialize SCC-Helper(G[L]) and SCC-Helper(G[R]); terminate call SCC-Helper(G)   // V(G) is decomposed into L and R
13      if Robust-Witness in Line 5 starts a new phase and hence creates a new W then
14          Initialize new data structures Path-Inside-Expander(W) and Forest-From-Witness(G, W, φ*), and terminate the existing ones from Lines 6 and 7
15      if Robust-Witness deletes vertices/edges from W within a phase then
16          Feed these deletions as a batch deletion to Path-Inside-Expander(W)
17          if a vertex v is deleted from W then feed to Forest-From-Witness(G, W, φ*) an update that removes v from W
18      if Forest-From-Witness returns an Ô(φ*)-sparse vertex cut (L, S, R) and replaces G with G[R] then
19          Ŝ ← Ŝ ∪ S; add L to C; replace G ∈ C with G[R]   // L is removed from SCC G
20          Initialize SCC-Helper(G[L])   // L is a new SCC in G*[V* \ Ŝ]

To highlight this approach, we first show how it can be used to yield new results for the simpler problem of decremental bipartite matching (Theorem 1.3).
Informal Overview:
We focus on the following problem: say that we are given a bipartite graph G = (L ∪ R, E) with |L| = n and |E| = m, and say that the graph has a perfect matching (i.e. μ(G) = n). We assume that n is a power of 2. Let ε < 1, and let G always refer to the current version of the graph. The algorithm must maintain a fractional matching in G of size ≥ (1 − ε)n OR certify that μ(G) ≤ (1 − ε)n, at which point it can terminate. In other words, the algorithm must maintain a matching until μ(G) decreases by a (1 − ε) factor. The total update time should be Õ(m). This algorithm gets us most of the way to proving Theorem 1.3. (The conversion from fractional to integral matching is done via the black-box of Wajc [Waj20].)
Consider the following lazy approach. Start by computing a matching M of size (1 − ε)n in O(m) time (using e.g. Hopcroft-Karp [HK73]). The adversary must now delete Ω(εn) edges before M has size < (1 − ε)n, at which point we compute a new matching. This algorithm is too slow: we spend O(m) time to compute a matching that survives for O(n) deletions, for a total update time of O(m²/n).
We would like to construct a robust matching that can survive for more than Ω(n) deletions. We will construct a fractional matching M that attempts to put low value on each edge; this way, the adversary must delete many edges to remove εn value from M. It may not be possible to put low value on all edges, as some edges may be "crucial" for any matching, but we present a technique for efficiently balancing the edge-congestion. We will then show that over the entire sequence of deletions there can only be a small number of crucial edges, so the adversary cannot profit too often from deleting them.
Our algorithm will run in phases. Each edge is given a capacity κ(e), which intuitively captures how crucial e is. The algorithm initially sets κ(e) = 1/n, but κ(e) can increase over time; these capacities transfer between phases.
At the beginning of each phase, we first run Hopcroft-Karp to ensure that μ(G) ≥ (1 − ε)n; if not, we can terminate. So we can assume that we always have μ(G) ≥ (1 − ε)n. We now try to compute a fractional matching M such that val(M) ≥ (1 − ε)n and val(e) ≤ κ(e) for all e ∈ E. If we find such an M, we use the lazy approach from before: we wait until the adversary deletes εn value from M, and then we initiate a new phase. If the algorithm fails to find such an M, it instead returns a cut C where the edge-capacities are too small. The algorithm then doubles κ(e) for all e ∈ C, and again tries to compute a matching M. This process will eventually terminate because we know that μ(G) ≥ (1 − ε)n; thus, once the edge-capacities are high enough, there will certainly be a matching M with val(M) ≥ (1 − ε)n. (Note that we never increase κ(e) beyond 1, because a matching already has vertex capacity 1, so edge-capacities above 1 are redundant.)
The crux of the analysis is to show that the total number of doubling steps, across all phases, is only O(log(n)). Assuming this fact, let K = Σ_{e∈G} κ(e). We will show that each doubling step only doubles κ along a low-capacity cut, so K only increases by O(n). Since the number of doubling steps is O(log(n)), we always have K = O(n log(n)). This upper bound on K in turn implies that there are only O(log(n)) phases, because each phase must delete Ω(n) value from the matching M, which clearly involves deleting at least Ω(n) edge-capacity.
To show that the number of doubling steps is O(log(n)), we introduce the following potential function Π(G, κ). Let the cost of each e be c(e) = log(nκ(e)). Now, let M be the set of all integral matchings M (ignoring edge capacities) of size at least (1 − ε)n; recall from above that we can assume M ≠ ∅. Define Π(G, κ) to be the minimum cost among all matchings from M. It is easy to see that Π(G, κ) is initially zero and is non-decreasing. Moreover, since every edge has κ(e) ≤ 1, we have c(e) ≤ log(n), so Π(G, κ) = O(n log(n)) at all times. We now argue (at a high level) that each doubling step increases Π(G, κ) by Ω(n). Let C be the cut that prevented the algorithm from finding a fractional matching M with val(M) ≥ (1 − ε)n. Any integral matching M ∈ M has val(M) ≥ (1 − ε)n, so it must have ≥ εn edges that cross C. Moreover, since the cut-capacity is small, Ω(εn) of these crossing edges must have capacity < 1. The doubling step then doubles κ(e) for each such edge, increasing each c(e) by 1, and thus increasing c(M) by Ω(εn) = Ω(n), as desired.
Formal Description and Analysis:
We now formally state our main subroutine for decremental matching; to avoid the assumption above that μ(G) = n, the input parameter μ controls the target matching-size. Our decremental matching result (Theorem 1.3) follows quite easily from the lemma below; see Section A.2 for details. (The conversion from fractional to integral matching is done via a black box of Wajc [Waj20].)
Lemma 5.1. Let G = (L ∪ R, E) be an unweighted bipartite graph subject to a sequence of adversarial edge deletions. Given any parameters μ ∈ [1, n], ε ∈ (0, 1), there exists an algorithm Robust-Matching(G, μ, ε) which processes the deletions in total update time O(m log²(n)/ε³) and has the following guarantees:
1. When the algorithm terminates, we have μ(G) ≤ μ(1 − ε).
2. Until the algorithm terminates, it maintains a fractional matching M with val(M) ≥ μ(1 − 3ε).
See Algorithm 2 for pseudocode of
Robust-Matching. The algorithm relies on the following static subroutine for finding a fractional matching of target size μ that obeys edge capacities κ(e) (see Appendix B.4.3 for the proof).
Lemma 5.2.
There exists an algorithm
Matching-Or-Cut(G, κ, μ, ε). The input is a graph G = (L ∪ R, E) with |E| = m and |L| = n, a positive edge-capacity function κ, and parameters μ ∈ [1, n] and ε ∈ (0, 1). In O(m log(n)/ε) time the algorithm returns one of the following:
1. A fractional matching M of size μ(1 − ε) such that val(e) ≤ κ(e) for all e ∈ E.
2. Sets S_L ⊆ L and S_R ⊆ R such that κ(S_L, R \ S_R) + |S_R| ≤ μ + |S_L| − n.
Observation 5.3.
Case 2 of the above lemma certifies the non-existence of a large matching. In particular, any matching with edge-capacities κ can achieve value at most κ(S_L, R \ S_R) + |S_R| from vertices in S_L, so the matching has value at most (κ(S_L, R \ S_R) + |S_R|) + (n − |S_L|) ≤ (μ + |S_L| − n) + (n − |S_L|) = μ.
Algorithm 2:
Algorithm Robust-Matching(G = (L ∪ R, E), μ, ε)
1   Assume that |L| = n is a power of 2   // otherwise replace n with n′ = 2^⌈log₂(n)⌉
2   Initialize G = (L ∪ R, E) ← G
3   Initialize κ(e) = 1/n for every edge e ∈ E
4   Procedure Begin New Phase   // execute before processing adversarial deletions
5   if Matching-Too-Small(G, μ, ε) then Terminate Algorithm
6   Repeat until Matching-Or-Cut(G, κ, μ(1 − ε), ε) returns a matching:
7       Let S_L, S_R be the cut-sets returned by Matching-Or-Cut
8       Let E* = {e ∈ E(S_L, R \ S_R) | κ(e) < 1}   // if e ∈ E* then κ(e) ≤ 1/2
9       κ(e) ← 2κ(e) for all e ∈ E*
10  Set M to be the matching returned by Matching-Or-Cut
11  Counter ← 0   // tracks value deleted from M due to deletions in G
12  Procedure Processing Deletion of edge (u, v)
13  Remove edge (u, v) from G; if (u, v) ∈ M then remove it from M
14  Counter ← Counter + val(u, v)
15  if Counter ≥ εμ then
16      RESET PHASE: go back to Line 5   // capacities κ are NOT reset between phases
17  Procedure Matching-Too-Small(G, μ, ε)
18  Compute a (1 − ε)-approximate matching M in G in O(m/ε) time (using e.g. Hopcroft-Karp)
19  if |M| < μ(1 − ε)² then return True; else return False

Now, we analyze Algorithm 2.
Observation 5.4. Throughout Algorithm 2, κ is non-decreasing, and in particular can only change via the doubling in Line 9. Moreover, if κ(e) < 1 then κ(e) ≤ 1/2, and we always have κ(e) ≤ 1 for all e ∈ E(G) (here we use the assumption that n is a power of 2; see Line 1).
We now introduce our potential Π(G = (V, E), κ), and state a few simple observations.
Definition 5.5 (Min-cost Matching). Recall that κ(e) ≥ 1/n for all e ∈ E. Let M contain all integral matchings M in G for which |M| ≥ (1 − ε)²μ. Define the cost of edge e to be c(e) = log(nκ(e)), and note that c(e) is always non-negative. For any fractional matching M, define c(M) = Σ_{e∈E} val(e)c(e). Define Π(G, κ) = min_{M∈M} c(M); we refer to the matching M that achieves this minimum as the min-cost matching. If M = ∅ then Π(G, κ) = ∞.
Observation 5.6. If κ(e) increases for some edge e, then Π(G, κ) cannot decrease as a result. Similarly, an edge deletion cannot decrease Π(G, κ).
Observation 5.7.
At the beginning of Algorithm 2 we have Π(G, κ) = 0 (because κ(e) = 1/n for all edges, so c(e) = 0). Moreover, Π(G, κ) only increases throughout the algorithm, and if at any point Π(G, κ) = ∞, then it will remain infinite forever (this follows from the observations above, as well as the fact that G is decremental).
Observation 5.8. If Π(G, κ) = ∞, then invoking Matching-Too-Small in Line 5 returns True and terminates the algorithm.
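For concreteness, the matching-or-cut/doubling loop of Algorithm 2 can be prototyped end to end. The sketch below is our own simplification: it substitutes an exact textbook max-flow for the O(m log(n)/ε)-time approximate subroutine of Lemma 5.2 (so it is slower than the real algorithm), but the dichotomy between a capacity-respecting fractional matching and a capacity-deficient cut, and the doubling step on E*, are the same in spirit:

```python
from collections import deque

class MaxFlow:
    """Textbook Dinic's algorithm; capacities here are dyadic, so floats are exact."""
    def __init__(self, n):
        self.g = [[] for _ in range(n)]
    def add_edge(self, u, v, cap):
        self.g[u].append([v, cap, len(self.g[v])])
        self.g[v].append([u, 0.0, len(self.g[u]) - 1])
        return (u, len(self.g[u]) - 1)          # handle for reading flow later
    def _bfs(self, s, t):
        self.lvl = [-1] * len(self.g)
        self.lvl[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v, cap, _ in self.g[u]:
                if cap > 1e-12 and self.lvl[v] < 0:
                    self.lvl[v] = self.lvl[u] + 1
                    q.append(v)
        return self.lvl[t] >= 0
    def _dfs(self, u, t, f):
        if u == t:
            return f
        while self.it[u] < len(self.g[u]):
            e = self.g[u][self.it[u]]
            if e[1] > 1e-12 and self.lvl[e[0]] == self.lvl[u] + 1:
                d = self._dfs(e[0], t, min(f, e[1]))
                if d > 1e-12:
                    e[1] -= d
                    self.g[e[0]][e[2]][1] += d
                    return d
            self.it[u] += 1
        return 0.0
    def max_flow(self, s, t):
        total = 0.0
        while self._bfs(s, t):
            self.it = [0] * len(self.g)
            f = self._dfs(s, t, float("inf"))
            while f > 1e-12:
                total += f
                f = self._dfs(s, t, float("inf"))
        return total
    def source_side(self, s):
        seen, q = {s}, deque([s])
        while q:
            u = q.popleft()
            for v, cap, _ in self.g[u]:
                if cap > 1e-12 and v not in seen:
                    seen.add(v)
                    q.append(v)
        return seen

def matching_or_cut(n, kappa, target):
    """Either a fractional matching of value >= target obeying vertex caps 1
    and edge caps kappa, or the doubling set E* from a capacity-deficient cut."""
    s, t = 0, 2 * n + 1
    mf = MaxFlow(2 * n + 2)
    for i in range(n):
        mf.add_edge(s, 1 + i, 1.0)              # vertex capacity on the left
        mf.add_edge(n + 1 + i, t, 1.0)          # vertex capacity on the right
    hd = {e: mf.add_edge(1 + e[0], n + 1 + e[1], c) for e, c in kappa.items()}
    if mf.max_flow(s, t) >= target - 1e-9:
        flow = lambda h: mf.g[mf.g[h[0]][h[1]][0]][mf.g[h[0]][h[1]][2]][1]
        return "matching", {e: flow(h) for e, h in hd.items() if flow(h) > 1e-12}
    reach = mf.source_side(s)                   # source side of a min cut
    SL = {u for u in range(n) if 1 + u in reach}
    SR = {v for v in range(n) if n + 1 + v in reach}
    return "cut", [e for e, c in kappa.items()
                   if e[0] in SL and e[1] not in SR and c < 1.0]

def congestion_balanced_matching(n, edges, mu, eps):
    kappa = {e: 1.0 / n for e in edges}         # start with uniformly low capacities
    while True:
        kind, out = matching_or_cut(n, kappa, mu * (1 - eps))
        if kind == "matching":
            return out, kappa
        assert out, "even full capacities fail: mu(G) is below the target"
        for e in out:                           # the doubling step
            kappa[e] = min(1.0, 2 * kappa[e])
```

On the 2-by-2 instance with edges (0,0), (1,0), (1,1), the edge (0,0) is "crucial" (it is the only edge of left-vertex 0), and the loop doubles its capacity until a full fractional matching fits.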
We have established that Π starts at 0 and only increases. We now show that, as long as the algorithm does not terminate, Π is never too large.
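The potential of Definition 5.5 can be computed by brute force on toy instances, which is a handy way to sanity-check the doubling argument (exponential time, illustration only; the name `potential` and the parameter `size_threshold`, playing the role of the size bound defining M, are our own):

```python
from itertools import combinations
from math import log2

def potential(edges, kappa, n, size_threshold):
    """Brute-force Pi(G, kappa): the minimum of sum(log2(n * kappa(e))) over
    all integral matchings with at least size_threshold edges (inf if none)."""
    best = float("inf")
    for k in range(size_threshold, len(edges) + 1):
        for M in combinations(edges, k):
            # an edge set is a matching iff no endpoint repeats on either side
            if len({u for u, _ in M}) == k and len({v for _, v in M}) == k:
                best = min(best, sum(log2(n * kappa[e]) for e in M))
    return best
```

With n = 2 and all capacities 1/2 the potential is 0; doubling the capacity of one edge of the unique large matching raises it by exactly 1, one unit per doubled matching edge, as in the argument below.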
Lemma 5.9. Consider any phase in which the algorithm did not terminate. Let G be the graph and κ the capacities at the end of the initialization of this phase (Line 10), but before any deletions have been processed. Then Π(G, κ) = O(μ log(n)).
Proof. Since Line 5 did not terminate, there must exist a matching M in G with |M| ≥ (1 − ε)²μ. Let M* be an arbitrary subset of M with ⌈(1 − ε)²μ⌉ edges. Note that M* is a matching with |M*| ≤ ⌈μ⌉ ≤ 2μ. Every edge e has κ(e) ≤ 1, so c(e) ≤ log(n), and hence Π(G, κ) ≤ c(M*) ≤ |M*| log(n) ≤ 2μ log(n).
Definition 5.10.
Let E be the edge set of the initial graph G. We define κ(E) = Σ_{e∈E} κ(e); if e ∈ E has been deleted by the adversary, then κ(e) is the capacity of e right before the deletion.
Lemma 5.11.
Consider some invocation of Matching-Or-Cut(G, κ, μ(1 − ε), ε) in Line 6 that returns cut-sets S_L, S_R. Let κ be the capacities before the doubling step in Line 9, and κ′ the capacities after doubling. We then have:
1. κ′(E) ≤ κ(E) + μ, AND
2. Π(G, κ′) ≥ Π(G, κ) + εμ.
Proof. The first property is simple. By Lemma 5.2, κ(E(S_L, R \ S_R)) ≤ μ(1 − ε) + |S_L| − n ≤ μ. Since E* ⊆ E(S_L, R \ S_R) (see Line 8) and the algorithm doubles all capacities in E*, we have κ′(E) − κ(E) = κ(E*) ≤ μ.
To prove the second property, note that since the algorithm did not terminate in Line 5, we must have μ(G) ≥ (1 − ε)²μ. Now, let M be any matching in G with |M| ≥ (1 − ε)²μ. We will show |M ∩ E*| ≥ εμ.
Define E_full = E(S_L, R \ S_R) \ E* = {e ∈ E(S_L, R \ S_R) | κ(e) = 1}. Define M* = M ∩ E*, M_full = M ∩ E_full, M_R = M ∩ E(S_L, S_R), and M_other = M ∩ E(L \ S_L, R). We know that |M*| + |M_full| + |M_R| + |M_other| = |M| ≥ (1 − ε)²μ. On the other hand, we have |M_other| ≤ n − |S_L| and |M_full| + |M_R| ≤ |E_full| + |S_R| ≤ μ(1 − ε) + |S_L| − n, where the last inequality follows from the guarantee of Lemma 5.2. Combining the two inequalities above yields |M*| ≥ εμ, as desired.
Now, let c and c′ be the corresponding cost functions c(e) = log(nκ(e)) and c′(e) = log(nκ′(e)), and let M′ be the min-cost matching that minimizes c′(M) for the potential Π(G, κ′). By the above argument, |M′ ∩ E*| ≥ εμ. For each edge e ∈ M′ ∩ E* we have κ′(e) = 2κ(e), so c′(e) = c(e) + 1. Thus, Π(G, κ′) = c′(M′) = c(M′) + |M′ ∩ E*| ≥ c(M′) + εμ ≥ Π(G, κ) + εμ, as desired.
Corollary 5.12.
In any execution of Algorithm 2, the total number of times that Matching-Or-Cut in Line 6 returns a cut is O(log(n)/ε). Moreover, we always have κ(E) = O(μ log(n)/ε).
Proof. First we argue that whenever Matching-Or-Cut(G, κ, ...) is called, Π(G, κ) is finite. Note that κ only affects the magnitude of Π(G, κ), not whether it is finite or infinite. Thus, if Π(G, κ) is finite the first time Matching-Or-Cut is called in a phase, it will be finite every time Matching-Or-Cut is called in that phase. We begin every phase with a call to Matching-Too-Small (Line 5), and by Observation 5.8, if Π(G, κ) were infinite, then the algorithm would terminate.
By Lemma 5.11, every time Matching-Or-Cut returns a cut, Π increases by at least εμ. This completes the proof of the first statement, when combined with the fact that the potential starts at 0 and never decreases (Observation 5.7), and that, if finite, the potential is always O(μ log(n)) (Lemma 5.9). The bound on κ(E) then follows from Property 1 of Lemma 5.11.
Lemma 5.13.
The total number of phases in any execution of Algorithm 2 is at most O(log(n)/ε²).
Proof. Let Φ_del = Σ_{e∈E_del} κ(e), where E_del contains all the edges deleted by the adversary so far (among all phases). Consider any phase that does not terminate the algorithm in Line 5. By Line 15, the phase can only end when the adversary deletes at least εμ value from the matching for that phase; since every edge obeys val(e) ≤ κ(e), this implies that over the course of the phase, Φ_del increases by at least εμ. By Corollary 5.12, we always have Φ_del = κ(E_del) ≤ κ(E) = O(μ log(n)/ε). Thus, the number of phases is O([μ log(n)/ε]/[εμ]) = O(log(n)/ε²).
Proof of Lemma 5.1.
We are now ready to prove that algorithm Robust-Matching (Algorithm 2) satisfies the requirements of Lemma 5.1. The algorithm can only terminate if the (1 − ε)-approximate matching M in Line 18 has size |M| < μ(1 − ε)². But this implies that μ(G) ≤ |M|/(1 − ε) < μ(1 − ε)²/(1 − ε) = μ(1 − ε), as needed in Case 1 of Lemma 5.1.
For Case 2, consider any phase of the algorithm. At the end of the initialization for that phase (Line 10), but before any deletions are processed, Lemma 5.2 guarantees that the matching M returned by Matching-Or-Cut(G, κ, μ(1 − ε), ε) has val(M) ≥ (1 − ε)(1 − ε)μ ≥ (1 − 2ε)μ. By Line 15, the phase ends after the adversary deletes more than εμ value from the matching. Thus, throughout the phase we have val(M) ≥ (1 − 3ε)μ, as desired.
We now bound the running time. Each phase is dominated by the run-time of Matching-Or-Cut (Line 6), which is O(m log(n)/ε). This subroutine might be run multiple times per phase, all but one of which return a cut. The total time is thus O((m log(n)/ε) · (#phases + #invocations of Matching-Or-Cut that return a cut)). By Lemma 5.13 and Corollary 5.12, the run-time is O((m log(n)/ε)(log(n)/ε² + log(n)/ε)) = O(m log²(n)/ε³).
Overview of Algorithm Robust-Witness
The algorithm for maintaining a witness follows the same congestion-balancing approach as the decremental matching algorithm, but the details are significantly more involved.
The algorithm will again run in phases. Just as algorithm Robust-Matching began each phase by checking that the graph contains a large matching, this algorithm now begins each phase by checking that the graph contains a very large φ-witness; if not, the algorithm is able to find a sparse, balanced cut and terminate. From now on we assume such a witness exists.
As described in Section 3, an arbitrary embedding P might not be robust to adversarial deletions, because a small number of edges might carry most of the flow. To balance the edge-congestion, we introduce a capacity κ(e) on each edge. Initially we set κ(e) = 1/d, where d is the average degree of the input graph. At each step, the algorithm uses approximate flows and the cut-matching game to try to find a witness with vertex congestion Õ(1/φ) that obeys the edge-capacities κ(e). If it fails, the subroutine finds a low-capacity cut C; it then doubles the capacities in C and tries again. Since we assume a witness does exist, the algorithm will eventually find a witness once the edge-capacities are high enough.
Once we have a witness W with embedding P, we use the lazy approach. Say the adversary deletes an edge (u, v). Because our embedding obeyed the capacity constraints, this can remove edges from W of total weight at most κ(u, v). To maintain expansion, we feed these deletions into our expander pruning algorithm (Theorem 6.1) to obtain a pruned set P, and shrink our witness to W[V(W) − P]. To guarantee that W remains a large witness, we end the phase once the pruned set P is too large. We will show that we end a phase only after the adversary deletes Ω̂(n) edge-capacity from the graph.
As with Robust-Matching, the crux of our analysis will be to show that the total number of doubling steps is Ô(1/φ). To do so, we again use costs c(e) = log(dκ(e)) and a potential function Π(G, κ) which measures the min-cost embedding in G among all very large φ-witnesses. As the vertex congestion is 1/φ, this potential Π(G, κ) is at most n/φ.
Also, we are able to show that each doubling step increases the potential by Ω̂(n), using an argument that is more involved than the one for matching. Therefore, there are at most Ô(1/φ) doubling steps, as desired. Given this bound, we can bound the total number of phases: each doubling step adds at most n to the total capacity κ, and the initial total capacity is at most (1/d) · m = n. So the final total capacity is at most Ô(n/φ). As each phase must delete Ω̂(n) capacity, there are at most Ô(1/φ) phases.
Algorithm Robust-Witness
The rest of this section is devoted to the formal proof of Theorem 4.3. For convenience, we restate the theorem below.
Theorem 4.3 (Robust Witness Maintenance). There is a deterministic algorithm Robust-Witness(G, φ) that takes as input a directed decremental n-vertex graph G and a parameter φ ∈ (0, 1/log(n)]. The algorithm maintains a large (weighted) φ-short-witness W of G using Ô(m/φ) total update time, such that every edge weight in W is a positive multiple of 1/d, for some number d ≤ d_avg, where d_avg is the initial average degree of G. The total edge weight in W is O(n log n). After every edge deletion, the algorithm either updates W or outputs a (φn^{o(1)})-vertex-sparse (1/n^{o(1)})-vertex-balanced vertex-cut and terminates.
Let W^{(i)} be W after the i-th update. There exists a set R of reset indices with |R| = Ô(φ^{-1}), such that for each i ∉ R, W^{(i)} ⊇ W^{(i+1)}. That is, the algorithm has Ô(φ^{-1}) phases such that, within each phase, W is a decremental graph. The algorithm reports when each phase begins. It explicitly maintains the embedding P of W into G and reports all changes made to W and P.
Recall that in Robust-Matching we began each phase by making sure that the matching was still large enough (Line 5), so in
Robust-Witness we begin each phase by running Certify-Witness (Line 7) to ensure that the graph is still close enough to a vertex expander. Formally, we certify that there exists a very large φ-witness W that can be embedded into G. Note that we will never actually use this witness; we only need to ensure that it exists, as this allows us to bound the running time of the algorithm. If such a witness does not exist, we return a balanced, sparse vertex-cut and terminate the entire algorithm.
We start with a subroutine Vertex-Congested-Matching that is given two vertex sets A, B and uses approximate flow to embed a single matching between them with small vertex-congestion, or returns a balanced, sparse vertex-cut. We then show how to use this subroutine as the matching player in the cut-matching game (Theorem 7.1) to embed a witness. In the algorithms below, φ controls the congestion of the embedding, while ε controls the size of the witness. Think of ε as 1/n^{o(1)} and of φ as n^{−1/3}.
Lemma 5.14.
There is a deterministic algorithm Vertex-Congested-Matching(G, A, B, φ, ε) that, given a directed n-vertex graph G = (V, E), two disjoint terminal sets A, B ⊂ V where n/4 ≤ |A| ≤ |B|, φ ∈ (0, 1), and ε ∈ (0, 1), in Õ(m/φ) time either
• returns an O(φ log n)-vertex-sparse Ω(ε)-vertex-balanced vertex cut (L, S, R), or
• returns a directed (integral) matching M of size at least (1 − ε)|A| from A to B such that there is an embedding P that embeds M into G with vertex congestion 1/φ.
The idea of the above algorithm is to perform Õ(1/φ) blocking-flow computations. We defer the proof to Appendix B.4.4.
The following algorithm finds either an Ω̂(ε)-vertex-balanced sparse cut, or a φ-witness W that is unweighted and has |V(W)| ≥ (1 − ε)n. As |V(W)| is very close to n, we say W is a very large witness.
Theorem 5.15.
There is a deterministic algorithm Certify-Witness(G, φ, ε) that takes as input a directed n-vertex graph G = (V, E), φ ∈ (0, 1/log(n)], and ε ∈ (0, 1), and in Ô(m/φ) time either
• finds an Õ(φ)-vertex-sparse Ω(ε/n^{o(1)})-vertex-balanced cut S, or
• certifies that there exists a φ-witness W of G such that |V(W)| ≥ (1 − ε)n and every edge in W has weight at least 1. Let α_ex = 1/n^{o(1)} be the precise expansion factor of W guaranteed by this lemma (we will use this parameter in other lemmas).
Proof. Although there are a lot of technical details involved, conceptually speaking the lemma follows quite easily from the cut-matching game (Theorem 7.1) and Vertex-Congested-Matching (Lemma 5.14). Define R = O(log(n)) to be the maximum number of rounds in the cut-matching game. Define φ′ = 4Rφ < 1 and ε′ = ε/β, where β = n^{o(1)} will be set later in the proof.
Now, we initiate the cut-matching game. The cut player from Theorem 7.1 provides the terminal sets A_i, B_i at every round i. The algorithm of this lemma then acts as the matching player: in round i, it either returns a sparse cut and terminates, or embeds matchings M⃗_i and M⃖_i. In particular, for each round i of the cut-matching game, the algorithm runs Vertex-Congested-Matching(G, A_i, B_i, φ′, ε′) as well as Vertex-Congested-Matching(G, B_i, A_i, φ′, ε′), which tries to embed a matching in the reverse direction. We focus on the first of these two invocations, as they are symmetrical.
If the subroutine Vertex-Congested-Matching returns a cut (L, S, R), then our algorithm returns the same cut and terminates. Lemma 5.14 guarantees that this cut is O(φ′ log(n)) = Õ(φ)-sparse and Ω(ε′) = Ω(ε/n^{o(1)})-vertex-balanced, as desired. So we assume from now on that it returns a path set P_i at every round.
Now let us say that Vertex-Congested-Matching returns a path set P_i that embeds a matching M*_i from A_i to B_i. We cannot use this exact matching in the cut-matching game, because Theorem 7.1 requires a matching of value |A_i| (a perfect matching), while Lemma 5.14 only guarantees a matching of value |A_i|(1 − ε′). We thus construct another matching F_i from A_i to B_i (F for fake) such that M*_i ∪ F_i is a matching of value |A_i|; it is easy to construct such an F_i by starting with M*_i and repeatedly adding edges from free vertices in A_i to free vertices in B_i. (Note that we do not embed these fake edges into G.)
Let M* be the union of all the M*_i, including the "reverse-direction" matchings from B_i to A_i. Let F be the union of all the F_i, including those in the reverse direction. Let W* = (V, M* ∪ F). By Theorem 7.1, W* is an α_cmg = 1/n^{o(1)} expander. Note, however, that we cannot return W* as our witness because there is no path set corresponding to the edges in F (we never embedded the edges in F). We also cannot simply remove F, as M* on its own might not be an expander.
Instead, we apply directed expander pruning from Theorem 6.1 to W*. We feed all the edges in F as adversarial deletions into the pruning algorithm; since the expansion of W* is at least α_cmg = 1/n^{o(1)}, we can use Corollary 6.2. Let P be the set returned by pruning, and set W = W*[V \ P].
We now show that W is a φ-witness of the desired size. Let the parameter L for pruning be chosen according to Corollary 6.2, and define γ = γ_L(α_cmg) as the parameter from Theorem 6.1; note that γ = n^{o(1)}. By Theorem 6.1, the expansion factor of W is at least 1/γ = 1/n^{o(1)}. We can thus set the parameter α_ex in the lemma statement to be α_ex = 1/γ. Now, recall that we set ε′ = ε/β; we now define β = γ log²(n). By Lemma 5.14, each set F_i has size at most ε′|A_i| ≤ ε′n = εn/(γ log²(n)), so F has size at most O(R · εn/(γ log²(n))) = O(εn/(γ log(n))), where the last step follows from R = O(log(n)). Thus, by Theorem 6.1, the pruned set P has volume in W* at most vol_{W*}(P) ≤ |F| · γ = O(εn/log(n)). As every vertex of W* has degree at least 1 (and at most 2R), we get |P| < εn and |V(W)| = |V| − |P| ≥ (1 − ε)n. Finally, every edge has weight 1 because Vertex-Congested-Matching returns integral matchings, and every vertex in W has weighted degree at least 1 (there are no isolated vertices because W is an expander).
We must now show that W can be embedded into G. We use the embedding P_W ⊆ P that is formed by taking all paths in P that start AND end in V \ P, where P is the pruned set from the previous paragraph (note that the middle of a path may still leave V \ P). It is easy to see that every edge in W has a corresponding path in P_W, and that the vertex congestion of P_W is at most that of P. By Lemma 5.14, each P_i has vertex congestion 1/φ′; since there are at most 2R such P_i (one in each direction per round of the cut-matching game, which has at most R rounds), P has vertex congestion at most 2R/φ′ = 1/(2φ) < 1/φ, as desired.
Finally, we analyze the running time of the algorithm. Each call to Vertex-Congested-Matching has a running time of Õ(m/φ′) = Ô(m/φ); the algorithm makes O(R) = O(log(n)) calls, for a total run-time of Ô(m/φ). The time to construct each F_i is only O(n). Finally, by Corollary 6.2, the time for pruning is Ô(n), as W* has O(nR) unweighted edges.
We now present an algorithm that tries to find a witness which also obeys the edge capacities κ(e). We start by presenting a subroutine that uses an approximate flow algorithm (Lemma B.8) to embed a single matching.
We then combine this with the cut-matching game to embed a whole witness. If the algorithm fails to find a witness, then one of the approximate-flow computations must have had insufficiently high capacity. We then return a cut (L, S, R) that certifies this failure. Note that (L, S, R) might not be sparse in the uncapacitated graph G; instead we refer to it as a bottleneck cut, because the capacities are too low. Note that the parameter d establishes a minimum edge-capacity of 1/d. We will end up setting d to be around the average degree of the input graph. Since the cut-matching game yields a witness with total weight Õ(n), the witness will have a total of Õ(nd) = Õ(m) edges, which will allow us to efficiently run our pruning algorithm on the witness.

Lemma 5.16.
There is an algorithm
Embed-Matching(G, κ, A, B, φ, ǫ, d) with the following inputs: an m-edge n-vertex graph G = (V, E); terminal sets A, B ⊂ V with n/4 ≤ |A| ≤ |B|; parameters φ ∈ (0, 1/2) and ǫ ∈ (0, 1); a number d = O(d_avg), where d_avg is the average degree in G; and an edge-capacity function κ where κ(E) = Ô(n/φ) and, for each e ∈ E, κ(e) ∈ [1/d, 1/φ] is a positive multiple of 1/d. In Ô(m/(ǫφ)) time the algorithm returns either

1. a partition L, S, R of V where ǫn ≤ |L| ≤ n/2 and κ(E(L, R)) + |S|/(2φ) ≤ |L| − ǫn, or

2. a collection P of directed paths from vertices in A to vertices in B such that
(a) each path P ∈ P has an associated value val(P) which is a positive multiple of 1/d,
(b) each path P ∈ P has length at most Õ(1/(φǫ)),
(c) the total value Σ_{P∈P} val(P) ∈ [(1 − ǫ)|A|, |A|],
(d) for each v ∈ V, Σ_{P∈P_v} val(P) ≤ 1/φ, where P_v consists of all paths in P that contain v,
(e) for each e ∈ E, Σ_{P∈P_e} val(P) ≤ κ(e), where P_e consists of all paths in P that contain e.

Proof. First, to allow for vertex capacities, we create a graph G′ where each v ∈ V is split into two vertices v_in and v_out. All edges entering v now enter v_in and all edges leaving v leave v_out; there is also a directed edge (v_in, v_out).

We invoke Global Flow (Lemma B.8) on G′ = (V′, E′) with the following input: ∆(v_in) = 1 for all v ∈ A and T(v_out) = 1 for all v ∈ B. Set z = 2ǫn and C_min = 1/d. The capacity c(e) of each edge e ∈ E is set to κ(e), and the capacity of every edge (v_in, v_out) is set to 1/φ. Note that c(E′) = n/φ + c(E) = n/φ + κ(E) = Ô(n/φ), where the last equality follows from the bound on κ(E) assumed in the lemma. Finally, set the parameter h in Lemma B.8 to be h = 10·c(E′)·log(c(E′))/(ǫn) = Ô(1/(ǫφ)). By Lemma B.8, the running time is then Ô(mh + ∆(V)h/C_min) = Ô(h(m + nd)) = Ô(mh) = Ô(m/(ǫφ)).

First consider the case that Lemma B.8 returns an (edge) cut S′ in G′. We transform this into a (vertex) cut (L, S, R) in G as follows: L = {v | v_in ∈ S′ ∧ v_out ∈ S′}, S = {v | v_in ∈ S′ ∧ v_out ∉ S′}, R = {v | v_in ∉ S′}. Lemma B.8 guarantees that

c(E(S′, V′ \ S′)) ≤ ∆(S′) − z + 10·c(E′)·log(c(E′))/h ≤ ∆(S′) − 2ǫn + ǫn = ∆(S′) − ǫn.

Now, since ∆ is only non-zero on vertices v_in, we have that ∆(S′) = |L| + |S|. By construction of (L, S, R), as well as the fact that every edge (v_in, v_out) has capacity 1/φ, we also know that

c(E(S′, V′ \ S′)) ≥ c(E(L, R)) + |S|/φ = κ(E(L, R)) + |S|/φ.

Combining the above (and using φ ≤ 1/2 for the first inequality) we have that

κ(E(L, R)) + |S|/(2φ) ≤ κ(E(L, R)) + |S|/φ − |S| ≤ c(E(S′, V′ \ S′)) − |S| = c(E(S′, V′ \ S′)) + |L| − ∆(S′) ≤ |L| − ǫn.

In particular |L| ≥ ǫn, as desired. We also have that |L| ≤ |S′|/2 ≤ (|V′|/2)/2 = n/2.

Otherwise, Lemma B.8 returns a flow f in G′, which corresponds to a set of paths P in G. Let us prove that P satisfies all the properties of the lemma being proven; Property 2a follows directly from the integrality parameter C_min = 1/d. For Property 2c, note that by Lemma B.8 we have that

Σ_{P∈P} val(P) = val(f) ≥ ∆(V′) − z ≥ |A| − ǫn ≥ |A|(1 − ǫ), (1)

where the last inequality follows from the assumption of the lemma that |A| ≥ n/4. Now define P_long = {P ∈ P : |P| ≥ h/ǫ}, and let P′ = P \ P_long. The algorithm returns P′ instead of P as the final path-set. Clearly, since P′ ⊆ P, Properties 2a, 2d and 2e continue to hold. Property 2b also holds by definition of P′ and the fact that h = Õ(1/(φǫ)). All we have left is to prove 2c for P′. By Equation (1) above, we have that

Σ_{P∈P′} val(P) ≥ |A|(1 − ǫ) − Σ_{P∈P_long} val(P).

We now complete the proof by showing that Σ_{P∈P_long} val(P) ≤ ǫn/4 ≤ ǫ|A|. To see this, note that Lemma B.8 guarantees that Σ_{P∈P_long} |P| · val(P) ≤ hn/4. But since |P| ≥ h/ǫ for all P ∈ P_long, we have Σ_{P∈P_long} val(P) ≤ (hn/4)/(h/ǫ) = ǫn/4, as desired.
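The vertex-splitting step at the start of the proof is a standard reduction from vertex capacities to edge capacities. A minimal sketch of that reduction (all identifiers are illustrative, not from the paper) is:

```python
def split_vertices(edges, kappa, vertices, phi):
    """Split each vertex v into (v,'in') and (v,'out'): the standard reduction
    of vertex capacities to edge capacities used in the proof. The internal
    edge ((v,'in'),(v,'out')) carries the per-vertex capacity 1/phi, while each
    original edge (u, v) keeps its edge capacity kappa[(u, v)]."""
    new_edges = {}
    for v in vertices:
        # internal edge enforcing the vertex-congestion bound 1/phi
        new_edges[((v, 'in'), (v, 'out'))] = 1.0 / phi
    for (u, v) in edges:
        # original edge re-routed between the split copies
        new_edges[((u, 'out'), (v, 'in'))] = kappa[(u, v)]
    return new_edges
```

Any path in the split graph alternates internal and original edges, so a flow respecting the internal-edge capacities automatically respects the per-vertex bound 1/φ in G.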
Lemma 5.17.
There is an algorithm
Embed-Witness(G, κ, φ, d) with the following inputs: an m-edge n-vertex graph G = (V, E); a parameter φ ∈ (0, 1/2); a number d = O(d_avg), where d_avg is the average degree in G; and an edge-capacity function κ where κ(E) = Ô(n/φ) and, for each e ∈ E, κ(e) ∈ [1/d, 1/φ] is a positive multiple of 1/d. In Ô(m/φ) time the algorithm returns either

1. a partition L, S, R of V where ǫ_wit·n ≤ |L| ≤ n/2 and κ(E(L, R)) + |S|/(2φ) ≤ |L|, where ǫ_wit = 1/n^o(1) is a parameter we will refer to in other parts of the paper, or

2. a (weighted) O(φ log(n))-short-witness W of G and a corresponding embedding P, with the following properties:
(a) for every edge e ∈ E, Σ_{P∈P_e} val(P) = O(κ(e) log(n)), where P_e is the set of paths in P containing e,
(b) |V(W)| = n − o(n),
(c) the total edge weight in W is O(n log(n)), and every edge weight is a multiple of 1/d,
(d) there are only o(n) vertices in V(W) with weighted degree ≤ 1/2.

Proof. Although there are a lot of technical details involved, conceptually the lemma follows quite easily from the Cut-Matching Game (Theorem 7.1) and
Embed-Matching (Lemma 5.16). Define R = O(log(n)) to be the maximum number of rounds in the cut-matching game. Recall that ǫ_wit = 1/n^o(1) is a parameter we set later. Now, we initiate the cut-matching game. The cut player from Theorem 7.1 provides the terminal sets A_i, B_i at every round i. In round i, our algorithm will either return a sparse cut and terminate, or embed matchings −→M_i and ←−M_i. In particular, for each round i of the cut-matching game, the algorithm runs Embed-Matching(G, κ, A_i, B_i, φ, ǫ_wit, d) as well as Embed-Matching(G, κ, B_i, A_i, φ, ǫ_wit, d). We focus on the first of these two invocations, as they are symmetrical. If the subroutine Embed-Matching returns a cut (
L, S, R ), then our algorithm returns thesame cut and terminates. Lemma 5.16 directly guarantees the properties of (
L, S, R ) that we needin the lemma being proven.Now let us say that
Embed-Matching returns a path set P_i. We turn this into a fractional matching M*_i from A_i to B_i in the natural way: for every path P ∈ P_i from a ∈ A_i to b ∈ B_i, we add an edge from a to b of weight val(P). By Property 2a, the resulting matching is 1/d-integral. The only issue is that Theorem 7.1 requires a matching of value |A_i| (a perfect matching), while Property 2c only guarantees a matching of value |A_i|(1 − ǫ_wit). We thus construct another 1/d-integral matching F_i (F for fake) such that M*_i ∪ F_i is a perfect matching. It is easy to construct such an F_i in O(nd) = O(m) time by starting with M*_i and repeatedly adding edges of weight 1/d from free vertices in A_i to free vertices in B_i until the matching is perfect. (Adding multiple copies of the same edge corresponds to increasing the weight of that edge.) Note that we do not embed these fake edges into G.

If in any round i the subroutine Embed-Matching returns a cut, then the algorithm terminates. Thus the only case left to consider is when it returns a path set P_i at every round. Let M* be the union of all the M*_i, including those in the reverse graph. Let F be the union of all the F_i, including those in the reverse graph. Let W* = (V, M* ∪ F). Theorem 7.1 guarantees that W* is an α_cmg = 1/n^o(1) expander. Note, however, that we cannot return W* as our witness because there is no path set corresponding to F (we never embedded the edges in F). We also cannot simply remove F, as M* on its own might not be an expander. Instead, we apply directed expander pruning from Theorem 6.1. We would like to apply pruning directly to W*, but Theorem 6.1 only applies to unweighted graphs. Since the cut-matching game (Theorem 7.1) guarantees that all edge weights in W* are multiples of 1/d, we can convert W* to an equivalent unweighted multigraph W*_u in the natural way: every edge e ∈ W* is replaced by w(e)·d copies of an unweighted edge.
Note that W* has total weight O(n log(n)), because it contains O(log(n)) matchings; thus W*_u contains O(nd log(n)) = O(m log(n)) edges. We now apply directed pruning to W*_u, where we feed in all the edges in F as adversarial deletions; since the expansion of W*_u is at least α_cmg = 1/n^o(1), we can use Corollary 6.2. Let P be the set returned by pruning, and set W_u = W*_u[V \ P] and W = W*[V \ P].

We now show that W is an O(φ log(n))-witness with the desired properties. Let the pruning parameter L be determined by Corollary 6.2 (with α_cmg as the input variable φ), and define γ = γ_L(α_cmg) = n^o(1), which is precisely the parameter from Theorem 6.1. By Theorem 6.1, the expansion factor of W_u, and hence of W, is at least 1/γ = 1/n^o(1), as desired. We now define ǫ_wit = 1/(γ log²(n)). We know that each set F_i has size at most 10·ǫ_wit·|A_i| ≤ 10·ǫ_wit·n = 10n/(γ log²(n)), so F has size at most O(Rn/(γ log²(n))) = O(n/(γ log(n))), where the last step follows from R = O(log(n)). By Theorem 6.1 the pruned set P satisfies

w(P, V) ≤ |F| · γ = O(n/log(n)) = o(n).

Recall that the cut-matching game (Theorem 7.1) guarantees that every vertex in W* has weighted degree at least 1; combined with the above bound on the volume w(P, V), this proves Properties 2b and 2d. Finally, Property 2c follows from the fact that W ⊆ W*, and W* is the union of O(log(n)) matchings.

We must now show that W can be embedded into G. We use the embedding P_W ⊂ P that is formed by taking all paths in P that start AND end in V \ P (note that the middle of a path may still leave V \ P). It is easy to check that every edge in W has a corresponding path in P_W, and that the vertex/edge-congestion of P_W is at most that of P.
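The weighted-to-unweighted conversion used above, where each weight (a multiple of 1/d) becomes w(e)·d parallel copies, is mechanical; a small sketch with an illustrative multiset representation:

```python
from fractions import Fraction

def to_multigraph(weighted_edges, d):
    """weighted_edges: dict (u, v) -> weight, each an exact Fraction that is a
    positive multiple of 1/d. Returns (u, v) -> number of parallel unweighted
    copies, i.e. w(u, v) * d, as in the conversion from W* to W*_u."""
    copies = {}
    for (u, v), w in weighted_edges.items():
        c = Fraction(w) * d
        assert c.denominator == 1 and c > 0, "weight must be a positive multiple of 1/d"
        copies[(u, v)] = int(c)
    return copies
```

The total number of parallel copies is d times the total weight, so a witness of total weight Õ(n) becomes a multigraph with Õ(nd) = Õ(m) edges, matching the count in the proof.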
By Lemma 5.16, each P_i has vertex-congestion 1/φ, so since there are at most 2R such P_i (one in each direction per round of the cut-matching game, which has at most R rounds), P has a vertex-congestion of 2R/φ = O(log(n)/φ). Similarly, the congestion on edge e is at most 2R·κ(e) = O(κ(e) log(n)), which proves Property 2a.

Finally, we analyze the running time of the algorithm. Each call to Embed-Matching has a running time of Ô(m/(ǫ_wit·φ)) = Ô(m/φ); the algorithm makes O(R) = O(log(n)) calls, for a total running time of Ô(m/φ). In each round of the cut-matching game, the cut player from Theorem 7.1 requires Ô(nd) = Ô(m) time to compute the terminal sets A_i, B_i. The time to construct each F_i is O(nd) = O(n·d_avg) = O(m). Finally, by Corollary 6.2, pruning requires Ô(m) time.

Robust-Witness (Algorithm 3)
Recall that φ ′ from Line 2 is the input to Algorithm Embed-Witness (Line 10).
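Lines 10–13 of Algorithm 3 implement the congestion-balancing pattern: repeatedly try to embed a witness, and double the capacities of the low-capacity edges of the returned bottleneck cut. A schematic sketch of that loop, where the `embed_witness` callback is a stand-in for Embed-Witness rather than the paper's implementation:

```python
def congestion_balancing_phase(kappa, phi_prime, embed_witness):
    """Schematic of Lines 10-13 of Algorithm 3. `kappa` maps each edge to its
    current capacity; `embed_witness(kappa)` is a stub standing in for
    Embed-Witness: it returns either ('witness', W) or ('cut', cut_edges),
    where cut_edges plays the role of E(L, R)."""
    while True:
        kind, result = embed_witness(kappa)
        if kind == 'witness':
            return result, kappa
        # E* = cut edges strictly below the capacity ceiling 1/phi'; double them
        for e in result:
            if e in kappa and kappa[e] < 1.0 / phi_prime:
                kappa[e] *= 2
```

Because each capacity starts at 1/d and only doubles up to the ceiling 1/φ′, every edge is doubled O(log(d/φ′)) times in total, which is the structural fact the potential-function analysis below exploits.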
Observation 5.18.
Throughout Algorithm 3, κ is non-decreasing; in particular, it only changes by doubling in Line 13. Moreover, if κ(e) < 1/φ′ then κ(e) ≤ 1/(2φ′), and we always have κ(e) ≤ 1/φ′ for all e ∈ E(G) (here we use the fact that in Line 3 we set d so that d/φ′ is a power of 2).
G, κ) as follows. Let d be the parameter from Line 3 of Algorithm 3, and recall that κ(e) ≥ 1/d for all e ∈ E(G). Let P be the collection of all path sets P such that P embeds a φ-witness W into G for which |V(W)| ≥ (1 − ǫ_wit/2)n and W is an α_ex-expander. Define the cost of an edge e to be c(e) = log(dκ(e)); note that since κ(e) ≥ 1/d, c(e) is always non-negative. For any path set P, define val(e) = Σ_{P∈P_e} val(P), where P_e is the set of paths going through e. Define c(P) = Σ_{e∈E} c(e)·val(e). Then we define Π(G, κ) = min_P c(P), taken over all P in the collection above, and we call the corresponding P the minimum-cost embedding into G. If the collection is empty then Π(G, κ) = ∞.

We now state a few simple observations.

Observation 5.20. If κ(e) increases for some edge e, then Π(G, κ) cannot decrease as a result.

Observation 5.21.
Let G = (V, E) and let G′ = (V, E′) be an edge-subgraph with E′ ⊂ E. Then, for any capacity function κ, Π(G, κ) ≤ Π(G′, κ) (they could both be infinite).

Proof. Let P′ be the minimum-cost embedding into G′. It is not hard to check that P′ is also a valid embedding into G.
At the beginning of Algorithm 3, Π(G, κ) = 0 (because for all e ∈ E, κ(e) = 1/d, so c(e) = 0). Moreover, Π(G, κ) only increases throughout the course of the algorithm, and if Π(G, κ) = ∞ then it will remain so forever (this follows from the observations above, as well as the fact that G is decremental, so edges are never inserted).

Observation 5.23. If Π(G, κ) = ∞, then Certify-Witness(G, φ, ǫ_wit/2) from Line 7 of Algorithm 3 returns a sparse cut and terminates. (Because Π(G, κ) = ∞ means that there is no valid witness.)

We have established that Π starts at 0 and only increases. We now show that, as long as the algorithm does not terminate, Π is never too large.
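For a fixed embedding, the cost in Definition 5.19 is straightforward to evaluate. The sketch below (illustrative names; log base 2 chosen so that doubling a capacity raises an edge's cost by exactly 1) mirrors c(e) = log(dκ(e)) and c(P) = Σ_e c(e)·val(e):

```python
import math

def embedding_cost(paths, kappa, d):
    """paths: list of (edge_list, value) pairs describing an embedding P;
    kappa: edge -> capacity. Computes c(P) = sum_e log2(d * kappa(e)) * val(e),
    where val(e) totals the value of the paths through e."""
    val = {}
    for edge_list, value in paths:
        for e in edge_list:
            val[e] = val.get(e, 0.0) + value
    return sum(math.log2(d * kappa[e]) * v for e, v in val.items())
```

With every κ(e) = 1/d each edge cost is log2(1) = 0, matching Observation 5.22; doubling κ on an edge carrying value v raises the cost of that fixed embedding by exactly v (though Π itself, a minimum over embeddings, may increase by less).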
Lemma 5.24.
Consider any phase in which the algorithm did not terminate. Let G be the graph at the beginning of that phase (before any deletions have been processed in that phase), and let κ be the capacities at the end of initialization for that phase (Line 14). Then Π(G, κ) = Õ(n/φ).

Algorithm 3: Algorithm Robust-Witness(G = (V, E), φ) (see Theorem 4.3)
1: Let n = |V|, m = |E|
2: Initialize φ′ = φα_ex/log(n) // α_ex = n^o(1) is the parameter from Lemma 5.15
3: Set d to be the smallest number ≥ d_avg such that d/φ′ is a power of 2 // Note that d ∈ [d_avg, 2d_avg]
4: Initialize G ← G
5: Initialize κ(e) = 1/d for all e ∈ E

6: Procedure Begin New Phase // execute before processing adversarial deletions
7: Certify-Witness(G, φ, ǫ_wit/2) // ǫ_wit is the parameter from Lemma 5.17
8: if existence of witness certified, then continue
9: else return cut given by Certify-Witness and Terminate
10: Repeat Until Embed-Witness(G, κ, φ′, d) returns a witness:
11:   Let (L, S, R) be the vertex-cut returned by Embed-Witness
12:   E* ← {e ∈ E(L, R) | κ(e) < 1/φ′} // will show: κ(e) ≤ 1/(2φ′) for all e ∈ E*
13:   κ(e) ← 2κ(e) for all e ∈ E*
14: Set W to be the witness returned by Embed-Witness(G, κ, φ′, d) and set P_W to be the corresponding embedding
15: Create unweighted multigraph W_u as follows: V(W_u) = V(W), and for every edge (u, v) ∈ W add d·w(u, v) copies of edge (u, v) to W_u. (Here we use the fact that all weights in W are multiples of 1/d; see Lemma 5.17.) // W_u is basically identical to W; we convert to an unweighted graph only so that we can apply pruning from Theorem 6.1
16: Initialize the pruning algorithm from Theorem 6.1 on W_u
17: Counter ← 0 // tracks the volume of vertices pruned from W

18: Procedure Processing Deletion of edge (u, v)
19: W ← W // W will always refer to the original witness returned in Line 14, before deletions are processed in this phase
20: Let P* contain all paths in P_W that go through (u, v)
21: Let E* ⊆ E(W) contain the edges in W corresponding to P*
22: P_W ← P_W \ P*; E(W) ← E(W) \ E*
23: Input all copies of edges in E* as adversarial deletions into the pruning algorithm on W_u from Line 16. Let X contain the vertices in W_u that were added to the pruned set as a result of these deletions
24: Counter ← Counter + vol_W(X) // tracks total volume pruned from W
25: if Counter ≥ n/50 then RESET PHASE: go back to Line 7 // Note: capacities κ are NOT reset between phases
26: W ← W[V(W) \ X]

Proof. Since the algorithm did not terminate in this phase, Certify-Witness in Line 7 must have certified the existence of some φ-witness W with embedding P. Note that this witness satisfies all the properties in the definition of Π(G, κ); thus, Π(G, κ) ≤ c(P). We complete the proof by showing that c(P) = Õ(n/φ). Firstly, note that because P has vertex-congestion 1/φ, we have Σ_{P∈P} |P| ≤ n/φ. Secondly, by Observation 5.18, for every edge e we always have c(e) = log(dκ(e)) ≤ log(d/φ′) = O(log(n)). We thus have c(P) = Σ_{P∈P} Σ_{e∈P} c(e) = O(log(n) · Σ_{P∈P} |P|) = O(n log(n)/φ).

Definition 5.25.
Let E be the edge set of the input graph to Algorithm Robust-Witness ( G , φ ),before any adversarial deletions. Note that even if e ∈ E is later deleted by the adversary, κ ( e ) isstill well-defined: κ ( e ) cannot increase after e is deleted, so it is equal to the capacity right before e is deleted. We can thus define κ ( E ) = P e ∈ E κ ( e ). Lemma 5.26.
Consider some invocation of
Embed-Witness(G, κ, φ′, d) in Line 10 of Algorithm 3 that returns a cut (L, S, R). Let κ be the capacity function before the doubling step in Line 13, and κ′ the capacity function after the doubling step. Then the following holds:

1. κ′(E) ≤ κ(E) + n.
2. Π(G, κ′) ≥ Π(G, κ) + n^(1−o(1)).

Proof. The first property is simple. By Lemma 5.17, we have κ(E(L, R)) ≤ |L| ≤ n. Since E* ⊆ E(L, R) (see Line 12), and the algorithm doubles all capacities in E*, we have that κ′(E) − κ(E) = κ(E*) ≤ κ(E(L, R)) ≤ n, as desired.

To prove the second property, note that since the algorithm did not terminate in Line 7, there must exist some embedding P of a φ-witness W = (V_W, E_W) as in Lemma 5.15. In particular, W has expansion α_ex and

|V_W| ≥ |V| − ǫ_wit·n/2. (2)

To complete the proof, we now establish the following claim:

Claim 5.27.
Let W = (V_W, E_W) be any witness satisfying the properties of the witness certified by Certify-Witness(G, φ, ǫ_wit/2) (Lemma 5.15), and let P be the corresponding embedding. Let L_W = L ∩ V_W. Recall the set E* from Line 12 and let P_crit be the set of all paths in P that contain at least one edge in E*. Then the following holds:

1. |E_W(L_W, R ∪ S)| ≥ |L|·α_ex/2.
2. |P_crit| = Ω(|L|·α_ex).

Proof of First Claim Property:
Lemma 5.17 guarantees that |L| ≥ ǫ_wit·n. Combined with Equation (2) we have

|L_W| ≥ |L| − ǫ_wit·n/2 ≥ |L|/2.

Since W is an expander, it contains no isolated vertices, so we clearly have |E_W(L_W, V_W)| ≥ |L_W|. Thus, by the expansion of W,

|E_W(L_W, R ∪ S)| ≥ α_ex·|E_W(L_W, V_W)| ≥ α_ex·|L_W| ≥ α_ex·|L|/2. (3)

Proof of Second Claim Property: Let P be the embedding of W into G. Let E_full = E(L, R) \ E* = {e ∈ E(L, R) | κ(e) = 1/φ′}. Note that E(L, V \ L) is the disjoint union of E*, E_full and E(L, S). Consider any path in P that corresponds to an edge in E_W(L_W, V_W \ L_W). We will categorize these by the first edge on the path that goes from L to V \ L: if that edge is in E* then we put P in P*; if that edge is in E_full then we put P in P_full; and if that edge is in E(L, S) then we put P in P_S. By the first property of this claim we have

|P*| + |P_full| + |P_S| ≥ α_ex·|L|/2.

Now, by Lemma 5.15, P has vertex-congestion 1/φ (and hence edge-congestion 1/φ), so |P_full| ≤ |E_full|/φ and |P_S| ≤ |S|/φ. But now, recall from Lemma 5.17 that κ(E(L, R)) + |S|/(2φ′) ≤ |L|. By the definition of φ′ in Line 2 of Algorithm 3 this implies

|P_S| ≤ |S|/φ = (|S|/φ′)·(α_ex/log(n)) ≤ 2|L|·α_ex/log(n) = o(|L|·α_ex).

Similarly, note that |E_full|/φ′ = κ(E_full) ≤ κ(E(L, R)) ≤ |L|, so doing out the same algebra as above we have

|P_full| ≤ |E_full|/φ ≤ |L|·α_ex/log(n) = o(|L|·α_ex).

Combining the equations above we have

|P*| ≥ α_ex·|L|/2 − |P_full| − |P_S| = Ω(|L|·α_ex) − o(|L|·α_ex) − o(|L|·α_ex) = Ω(|L|·α_ex).

This completes the proof, as P* ⊆ P_crit, where P_crit is the path set in the claim statement.

Back to Proof of Property 2 of Lemma 5.26
Let c be the cost function corresponding to κ and c′ the one corresponding to κ′: so c(e) = log(dκ(e)) and c′(e) = log(dκ′(e)). Let P′ be the min-cost embedding such that c′(P′) = Π(G, κ′). Note that since P′ is a valid embedding into G, we have that Π(G, κ) ≤ c(P′). Now, observe that c(e) = c′(e) − 1 for e ∈ E* (taking logs base 2, since these capacities were doubled), and c(e) = c′(e) for all other edges. By the second property of Claim 5.27, we know that P′ contains Ω(|L|·α_ex) paths that go through E*. We also know from Lemma 5.17 that |L| ≥ n·ǫ_wit. We thus have the desired

Π(G, κ′) = c′(P′) ≥ c(P′) + Ω(n·α_ex·ǫ_wit) ≥ Π(G, κ) + Ω(n·α_ex·ǫ_wit) ≥ Π(G, κ) + n^(1−o(1)).

Corollary 5.28.
In any execution of Algorithm
Robust-Witness , the total number of times that
Embed-Witness in Line 10 returns a cut is b O (1 /φ ) .Proof. First we argue that whenever
Embed-Witness ( G, κ, ... ) is called, Π(
G, κ ) is finite. Firstly,note that κ only affects the magnitude of Π( G, κ ), not whether it is finite or infinite. Thus, ifΠ(
G, κ ) is finite the first time
Embed-Witness is called in a phase, it will be finite every time
Embed-Witness is called in that phase. Now, before running
Embed-Witness for the first timein a phase we always call
Certify-Witness in Line 7, and by Observation 5.23, if Π(
G, κ ) wereinfinite, then
Certify-Witness would return a sparse cut and terminate the entire algorithm.Thus, every time
Embed-Witness ( G, κ, ... ) is called, Π(
G, κ) is finite, and by Lemma 5.26 it increases by at least n^(1−o(1)). This completes the proof when combined with the fact that the potential starts at 0 and never decreases (Observation 5.22), and that, whenever finite, the potential is always Ô(n/φ) (Lemma 5.24).

Corollary 5.29. Throughout the execution of Algorithm
Robust-Witness we have κ ( E ) = b O ( n/φ ) . Note that Lemma 5.17 requires this of the input capacity function κ , so this corollaryensures this input assumption is always valid. (Recall from Definition 5.25 that the upper boundcounts κ ( e ) for all edges e ∈ E , including those that were deleted from G .)Proof. κ only changes in Line 13, so the corollary follows directly from Property 1 of Lemma 5.26and Corollary 5.28. Lemma 5.30.
The total number of phases is at most Ô(1/φ).

Proof. Recall that E is the original edge set of the graph. At any given time during the execution of the algorithm, let E_del contain all edges that were deleted from E by the adversary. Note that if e ∈ E_del, then the algorithm will never increase its capacity, so κ(e) is the capacity of the edge right before it was deleted. Consider the potential function Φ_del = Σ_{e∈E_del} κ(e). Clearly Φ_del starts at 0 and can only increase. By Corollary 5.29, Φ_del is always Ô(n/φ). We now complete the proof by showing that every phase that does not terminate the algorithm increases Φ_del by Ω̂(n).

Consider any phase, and let W be the witness returned by Embed-Witness in that phase (Line 14), before any deletions have been processed in this phase, and let w be the edge-weight function for W. The witness W is then pruned as edges in G are deleted. (Although pruning is technically done through the intermediary of the unweighted graph W_u, we will conceive of it as applying directly to the weighted version, as the two are equivalent.) Let K be the total capacity of all edges deleted from G by the adversary in this phase. By Property 2a of Lemma 5.17, the total weight of edges in E* that are deleted from W (Line 21) is at most O(K log(n)). All these edges are then inputted as adversarial deletions to the pruning algorithm. Let P be the final set of vertices pruned from W before the phase ends. By Theorem 6.1, we have

vol_W(P) = w(E_W(P, V)) ≤ K · log(n) · n^o(1) = Ô(K).

Since P was the pruned set when the phase ended, we must have vol_W(P) ≥ n/50 (see Line 25). Combining this with the above equation we get n = Ô(K), so the increase in Φ_del during the phase is K = Ω̂(n), as desired.

Correctness Analysis of Algorithm Robust-Witness
We now prove that the algorithm satisfies all the properties of Theorem 4.3. Recall that the algorithm maintains a witness until at some point it terminates and returns a cut. A cut is only returned by Certify-Witness (Line 7), and by Lemma 5.15 this cut is φn^o(1)-vertex-sparse and (1/n^o(1))-vertex-balanced, as desired.

The algorithm only returns a witness via the subroutine Embed-Witness (Line 10). Let W be the witness returned, before deletions are processed in this phase. By Lemma 5.17, W clearly satisfies all the properties of Theorem 4.3. W then undergoes pruning (Theorem 6.1) in Lines 20-27. Let W denote the pruned witness. All the relevant properties of W remain the same under pruning except the expansion factor, the size of V(W), and the weighted degrees in W. Corollary 6.2 guarantees that the expansion factor of W remains 1/n^o(1). Letting P be the pruned set before termination, we know that vol_W(P) ≤ n/50 (Line 25). We know that W had n − o(n) vertices of weighted degree ≥ 1/2. Since only n/50 volume is pruned away, there are still at least 9n/10 vertices in W, and at most 2·n/50 ≤ |V(W)|/10 of them have degree ≤ 1/2, so W is a large φ-witness, as desired. Theorem 6.1 also requires that the witness is decremental within each phase, which is clearly true because within a phase the witness changes only via pruning. Finally, Lemma 5.30 shows that the total number of phases is Ô(1/φ), as desired.

Running Time Analysis of Algorithm Robust-Witness We now show that Algorithm 3 has running time Ô(m/φ²), as required by Theorem 4.3. Since φ′ = φ/n^o(1), the subroutines Embed-Witness and Certify-Witness both require Ô(m/φ) time. Since Lemma 5.16 guarantees that the witness returned in Line 14 has expansion 1/n^o(1), Corollary 6.2 guarantees that the total run-time of pruning within a single phase is Ô(m). Each phase thus requires Ô(m/φ) time, plus another Ô(m/φ) time for every call to Embed-Witness that returns a cut (since this can happen multiple times within a single phase). The total running time is thus Ô(m/φ) · ([number of phases] + [number of calls to Embed-Witness that return a cut]). By Lemma 5.30 and Corollary 5.28, both of those terms are Ô(1/φ), so the total running time is Ô((m/φ) · (1/φ)) = Ô(m/φ²), as desired.

In this section, we present the implementation and analysis of a pruning procedure for directed graphs. Our main result of the section is summarized in the theorem below.
Theorem 6.1 (Directed Expander Pruning). There is a deterministic algorithm with the following input: a directed unweighted decremental multi-graph W = (V, E) with n vertices and m edges that is initially a φ-expander, and a parameter L ≥ 1. The algorithm maintains an incremental set P ⊆ V(W) using Õ(m·n^(1/L)/γ_L(φ)) total update time such that, for P̄ = V \ P, we have that W[P̄] is a γ_L(φ)-expander and vol_W(P) ≤ O(t·n^(1/L)/γ_L(φ)) after t updates, where γ_L(φ) = φ^O(L). To ease working with the theorem above, let us introduce the following corollary.
Corollary 6.2.
Say that the graph given in Theorem 6.1 is initially a φ-expander for 1/φ = n^o(1). Then there exists a setting for L such that L = ω(1) and 1/γ_L(φ) = 1/φ^O(L) = n^o(1). Note that the running time of Theorem 6.1 is then Ô(m).

Proof of Corollary. We start by specifying the constant inside the big-O notation: say that 1/γ_L(φ) ≤ 1/φ^(cL) for some constant c. Note that since 1/φ = n^o(1) we have log_{1/φ}(n) = ω(1). Now, set L so that cL = √(log_{1/φ}(n)); note that L = ω(1). Then 1/φ^(cL) = (1/φ)^√(log_{1/φ}(n)) = n^(1/√(log_{1/φ}(n))), which is n^o(1) because log_{1/φ}(n) = ω(1).

The proof strategy for Theorem 6.1 follows on a high level previous approaches (see for example [NS17, NSW17]): we first provide a simple pruning procedure that is given an expander W and a batch B of edges that were deleted from W, and finds either a sparse cut in W \ B of size roughly |B|, or certifies that W \ B′ is still an expander for some B′ with |B′| ≪ |B|; this can then be applied recursively. We call this kind of procedure one-shot pruning, and the algorithm and analysis of such a procedure is the main result of Section 6.1. Using this subroutine, we then show how to obtain a dynamic pruning procedure. This reduction is described in Section 6.2, where we also prove Theorem 6.1.

Let us begin the description of one-shot pruning by defining the concepts of a near out-expander and a near expander, both natural generalizations of the definition of an expander.

Definition 6.3 (Near Out-Expander). Let G = (V, E) be a directed weighted graph. We say that A ⊆ V is a near φ-out-expander in G if, for all S ⊆ A with vol_G(S) ≤ vol_G(A)/2, we have δ^out_G(S) ≥ φ·vol_G(S).

Definition 6.4 (Near Expander). Let G = (V, E) be a directed weighted graph. We say that A ⊆ V is a near φ-expander in G if A is a near φ-out-expander in both G and G^(rev).

Given Definition 6.4, we can now state the guarantees of our one-shot pruning procedure.

Lemma 6.5 (Large Sparse Cut or Almost Expander).
Given an unweighted multi-graph W = (V, E), a boundary P ⊆ V, and a core P̄ = V \ P, where we let the boundary edges be the edges between boundary and core, denoted by B = E_W(P, P̄) ∪ E_W(P̄, P), and have that E = E(W[P̄]) ∪ B, i.e. the graph W consists of edges between vertices in the core plus boundary edges. Further, we are given a conductance parameter φ ∈ (1/n, 1) such that P̄ is a near φ-expander in W and the set of boundary edges B has size at most φm/4. Then there exists a deterministic algorithm that takes an integer z and returns either

1. a set B′ ⊆ B of size at most 2z such that P̄ is a near φ-expander in the graph W \ (B \ B′), or
2. a set P′ ⊆ P̄ where φz/4 < vol_W(P′) ≤ vol_W(P̄)/2 and min{δ^out_{W[P̄]}(P′), δ^in_{W[P̄]}(P′)} ≤ φ·vol_W(P′).

The algorithm has running time O(|B| log(n)/φ).

Let us give such a deterministic algorithm that satisfies the guarantees stated above. We start by setting up a flow problem Π_out = (∆_out, T_out, c_out) such that, if the flow is feasible, we have that P̄ is a near φ-out-expander in W as defined in Definition 6.3, and otherwise we obtain a cut P′ as described in Item 2.

Before we set up Π_out, let us define a slightly modified graph W_out = (V_out, E_out) of W that is more convenient to work with. Of utmost importance in our flow problem are the edges B_out = E_W(P̄, P), that is, the edges leaving P̄. The graph W_out differs from W in the B_out edges, which are mapped to distinct endpoints in the boundary and then reversed, so that flow can be injected along these edges. More formally, we let P_out be a set of vertices with a vertex associated with each edge in B_out, and let π be the bijective mapping from edges in B_out to P_out. We let R_out be the set containing, for every edge (u, v) ∈ B_out, the reversed edge after the head v was mapped to π(u, v), i.e. the vertex in P_out associated with the edge (u, v). That is, (u, v) ∈ B_out if and only if (π(u, v), u) ∈ R_out. Finally, we define the graph W_out = (V_out = V ∪ P_out, E_out = (E \ B_out) ∪ R_out).

We can then set up the flow problem Π_out = (∆_out, T_out, c_out) on the graph W_out by setting

∆_out(u) = 4/φ if u ∈ P_out, and ∆_out(u) = 0 otherwise,

so that all sources u of the flow problem are in the boundary set P_out, each contributing 4/φ units of flow; in particular ∆(V_out) = 4·δ^out_W(P̄)/φ. We let the sink function be defined by T_out(u) = deg_{W_out}(u) = deg_W(u) for all u ∈ P̄ and 0 otherwise, and define the capacity c_out(e) = 24/φ for each edge e ∈ E_out.

We then invoke Lemma B.7 on the problem Π_out with z as given, ∆ = 4/φ and h = 40·log(n)/φ. We have that the constraint on the parameters in Lemma B.7 is satisfied since ∆(V_out) = 4·δ^out_W(P̄)/φ, as seen earlier, and by our assumption that δ^out_W(P̄) + δ^in_W(P̄) ≤ φ·vol_W(P̄). Thus, in time O(|B_out| log(n)/φ), we obtain either

1. a pre-flow f with total excess at most z, or
2. a cut S such that φz/4 < vol_{W_out}(S) ≤ |E(W_out)|/2 and c(E_{W_out}(S, V \ S)) ≤ ∆_out(S) − T(S) − z + c_out(E_{W_out}(S, V) ∪ E_{W_out}(V, S))·40·log(n)/h, where we use that the total capacity is bounded by Σ_{e∈E_out} c_out(e) < n.

We now state two claims and show how they establish the lemma; we then prove these two claims.

Claim 6.6.
If the algorithm ends with scenario 1, then we can find a set of edges B″ ⊆ B^out of size at most z, such that P is a near (φ/24)-out-expander in W \ (B^out \ B″).
Claim 6.7. If the algorithm ends with scenario 2, then we can find a set P′ ⊆ P where φz/4 < vol_W(P′) ≤ vol_W(P)/2 and δ^out_{W[P]}(P′) ≤ φ · vol_W(P′).
Given the two claims, we obtain Lemma 6.5 almost as a corollary.
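Schematically, the lemma is obtained by running the one-sided flow procedure on W and on the reversed graph and combining the outputs. The following sketch illustrates this; `one_sided_prune` is a hypothetical stand-in for the flow procedure above, returning ('cut', P′) in scenario 2 and ('edges', B″) in scenario 1:

```python
def prune(W, W_rev, z, one_sided_prune):
    """Illustrative driver: one_sided_prune(G, z) is a hypothetical
    stand-in for the flow-based procedure, returning ('cut', P_prime)
    in scenario 2 or ('edges', certified_edge_set) in scenario 1."""
    kind, res = one_sided_prune(W, z)
    if kind == 'cut':                      # scenario 2: return the sparse cut
        return ('cut', res)
    kind_rev, res_rev = one_sided_prune(W_rev, z)
    if kind_rev == 'cut':
        return ('cut', res_rev)
    # both runs ended in scenario 1: the certificate holds in both
    # directions, and the combined edge set has at most 2z edges
    return ('edges', set(res) | set(res_rev))
```

The design point is simply that a "near expander" certificate must hold for out-cuts and in-cuts, so one run per edge direction suffices.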
Proof of Lemma 6.5.
It is then not hard to see that, if we run the above algorithm on W and on W^(rev), we either have scenario 2 for at least one of the two problems, and therefore by Claim 6.7 can return a cut P′ that satisfies the guarantees.
Otherwise, both algorithms end in scenario 1, in which case, by Claim 6.6, we have that P is a near (φ/24)-out-expander in W \ (B^out \ B″) for some set B″, and a near (φ/24)-out-expander in the reverse graph of W \ (B^in \ B‴) for some set B‴, where B^in = E_W(P̄, P). It is straightforward to verify that this implies that P is a near (φ/24)-expander in W \ (B \ B′), where B′ = B″ ∪ B‴, and that B′ has size at most 2z, so we can return B′. This establishes the lemma.
It remains to prove the two claims. Without further ado, let us give their proofs.
Claim 6.6. If the algorithm ends with scenario 1, then we can find a set of edges B″ ⊆ B^out of size at most z, such that P is a near (φ/24)-out-expander in W \ (B^out \ B″).
Proof. The key ingredient of this claim is a simple insight: if the at most z boundary edges Z which induced the excess flow had not existed, then f would be a feasible flow, certifying that P is a near (φ/24)-out-expander in the graph W \ (B^out \ Z).
Let us now prove this more formally. We have from Remark B.1 that the excess flow of the flow problem Π^out remains at the sources. Let S be the set of (source) vertices that have excess flow in Π^out, and observe that S ⊆ P^out by definition.
Then, let us create a new flow problem Π′ = (∆′, T^out, c^out), where we set ∆′(s) to 0 for every vertex s ∈ S but leave everything else as in Π^out. Clearly, the flow f is now a feasible flow for Π′ by construction. We construct B″ = π^{−1}(S).
Finally, we prove by contraposition that P is a near (φ/24)-out-expander in W \ (B^out \ B″) whenever f is feasible for Π′. Let us therefore assume that P is not a near (φ/24)-out-expander in W \ (B^out \ B″) for some set B″ ⊆ B^out. By Definition 6.3, there exists a cut P′ ⊆ P such that
vol_{W\(B^out\B″)}(P′) ≤ vol_{W\(B^out\B″)}(P)/2 and δ^out_{W\(B^out\B″)}(P′) < (φ/24) · vol_{W\(B^out\B″)}(P′). (4)
However, we have by the assumption of the lemma that P is a near φ-expander in W, and therefore by Definition 6.4 that
δ^out_W(P′) ≥ φ · vol_W(P′). (5)
But clearly, we have
|E(P′, (V ∪ P^out) \ P′) ∩ (B^out \ B″)| ≥ δ^out_W(P′) − δ^out_{W\(B^out\B″)}(P′),
and by the inequalities (4) and (5), we obtain
δ^out_W(P′) − δ^out_{W\(B^out\B″)}(P′) > φ · vol_W(P′) − (φ/24) · vol_{W\(B^out\B″)}(P′) ≥ (1 − 1/24) · φ · vol_W(P′) > φ · vol_W(P′)/2.
But since, for each edge e ∈ E(P′, (V ∪ P^out) \ P′) ∩ (B^out \ B″), there is a vertex π(e) ∈ P^out that injects 4/φ units of flow into P′ in the flow problem Π′, the total amount of flow that enters P′ is more than (φ · vol_W(P′)/2) · (4/φ) = 2 · vol_W(P′). However, the total sink capacity in P′ is vol_W(P′), and the amount of flow that can be routed out of P′ in Π′ is bounded by
Σ_{e ∈ E_{W^out}(P′, V^out \ P′)} c^out(e) = δ^out_{W\(B^out\B″)}(P′) · 24/φ < vol_W(P′),
where we use inequality (4) in the last step. Thus, we have derived a contradiction: the flow f cannot route all of the flow entering P′ to sinks in the flow problem Π′, and hence f cannot be feasible.
It remains to prove the second claim.
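To summarize the counting in the proof above in one line (with the constants as reconstructed here — each surviving boundary edge injects 4/φ units, and edge capacities are 24/φ):

```latex
\text{flow into } P' \;>\; \frac{\varphi\,\mathrm{vol}_W(P')}{2}\cdot\frac{4}{\varphi}
\;=\; 2\,\mathrm{vol}_W(P')
\;>\; \underbrace{\mathrm{vol}_W(P')}_{\text{sink capacity in } P'}
\;+\; \underbrace{\tfrac{24}{\varphi}\,\delta^{\mathrm{out}}_{W\setminus(B^{\mathrm{out}}\setminus B'')}(P')}_{\text{capacity out of } P'},
```

where the last term is less than vol_W(P′) by inequality (4); hence f cannot be feasible.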
If the algorithm ends with scenario 2, then we can find a set P′ ⊆ P where φz/4 < vol_W(P′) ≤ vol_W(P)/2 and δ^out_{W[P]}(P′) ≤ φ · vol_W(P′).
Proof. Recall that the flow algorithm returns a cut S with
φz/2 < vol_{W^out}(S) ≤ |E(W^out)|/2 and c(E_{W^out}(S, V^out \ S)) ≤ ∆^out(S) − z + c^out(E_{W^out}(S, V^out) ∪ E_{W^out}(V^out, S)) · (40 log n)/h. (6)
We let P′ = S ∩ P. Then
δ^out_{W[P]}(P′) = |E_{W[P]}(P′, P \ P′)| ≤ |E_{W^out}(S, V^out \ S)|,
where the inequality follows since P′ ⊆ S and P \ P′ ⊆ V^out \ S, and from the fact that W and W^out only differ in the boundary edges. Further, by the setup of the flow problem Π^out and equation (6),
|E_{W^out}(S, V^out \ S)| = (φ/24) · c(E_{W^out}(S, V^out \ S)) ≤ (φ/24) · (∆^out(S) + c^out(E_{W^out}(S, V^out) ∪ E_{W^out}(V^out, S)) · (40 log n)/h). (7)
Further,
∆^out(S) = Σ_{s ∈ S ∩ P^out} 4/φ ≤ (4/φ) · vol_{W^out}(S) (8)
and we have
c^out(E_{W^out}(S, V^out) ∪ E_{W^out}(V^out, S)) ≤ (24/φ) · vol_{W^out}(S). (9)
Using (8) and (9) in equation (7), we obtain that
|E_{W^out}(S, V^out \ S)| ≤ φ · vol_{W^out}(S)/2.
This implies that a (1 − φ)-fraction of the edges incident to S are not in the cut (S, V^out \ S); each such edge has both endpoints in S and at least one endpoint in P, and therefore in P′. Hence, for P′ = S ∩ P, we have
vol_{W^out}(P′) ≥ (1 − φ) · vol_{W^out}(S) ≥ vol_{W^out}(S)/2.
On closer inspection, it is not hard to verify that vol_W(P′) ≥ vol_{W^out}(P′), since the edges in the core are not changed, and no edges are added to the boundary in W^out; only some edges are reversed. Combined, we obtain the desired inequality
δ^out_{W[P]}(P′) ≤ φ · vol_W(P′).
Since we have by the guarantees of the flow algorithm that φz/2 ≤ vol_{W^out}(S), we further have that vol_W(P′) ≥ vol_{W^out}(P′) > φz/4.
Using the one-shot pruning subroutine above, we can now give a straightforward proof of Theorem 6.1, which is restated for convenience.
Theorem 6.1 (Directed Expander Pruning). There is a deterministic algorithm with the following input: a directed unweighted decremental multi-graph W = (V, E) with n vertices and m edges that is initially a φ-expander, and a parameter L ≥ 1. The algorithm maintains an incremental set P ⊆ V(W) using Õ(m · n^{1/L} / γ_L(φ)) total update time such that, for P̄ = V \ P, we have that W[P̄] is a γ_L(φ)-expander and vol_W(P) ≤ O(t · n^{1/L} / γ_L(φ)) after t updates, where γ_L(φ) = φ^{O(L)}.
To prove the above theorem, let us start by giving an algorithm. In our algorithm, we have 2L + 3 levels, and for each level ℓ = 0, 1, 2, ..., 2L + 2 = L_max, we maintain a set P_ℓ ⊆ V and sets B_ℓ, D_ℓ ⊆ E (where E is the set of edges of W at stage 0). Each of these sets is initially empty. We also have a conductance parameter φ_ℓ associated with each level ℓ, which we define as φ_ℓ = (φ/96)^{L_max − ℓ}. For convenience, let us denote by X_{≥ℓ} the union ∪_{j ≥ ℓ} X_j, where X can be P, B or D, and similarly for >, ≤ and <. We further assume for the rest of the section that n^{1/L} is an integer.
Algorithm.
Now, let us give a formal description. At every stage t where an edge (u, v) is deleted from W, we invoke the procedure DeletePruning(e = (u, v), t) given in Algorithm 4. In the algorithm, we first add the edge (u, v) to the set D_ℓ for every ℓ ≥ 0. We then find j, the largest index such that t is divisible by n^{(j−1)/L}. We then add, for all ℓ < j, P_ℓ to P_j and B_ℓ to B_j, and then set every P_ℓ, B_ℓ and D_ℓ to ∅. We then want to run one-shot pruning to reduce the number of edges in B_ℓ ∪ D_ℓ significantly. However, Lemma 6.5 requires that, in the graph that one-shot pruning is executed upon, all edges that are due for removal are boundary edges. We therefore use a simple trick: we add a special vertex s to the graph and split every edge (u, v) in B_ℓ ∪ D_ℓ into two edges (u, s) and (s, v). We use the functions π^out and π^in to denote this transform on a set of edges, i.e., π^out(s, E′) = {(u, s) | (u, v) ∈ E′} and analogously π^in(s, E′) = {(s, v) | (u, v) ∈ E′}. This gives us the special graph W_ℓ of interest, defined by
W_ℓ = ((V \ P_{≥ℓ}) ∪ {s}, E(W[V \ P_{≥ℓ}]) ∪ π^out(s, B_ℓ ∪ D_ℓ) ∪ π^in(s, B_ℓ ∪ D_ℓ)).
We then invoke the algorithm in Lemma 6.5 on W_ℓ with boundary {s}, conductance parameter φ_ℓ and z = max{1, n^{(ℓ−1)/L}/2}. The algorithm either returns a cut P′, in which case we add P′ to P_ℓ, add the cut edges of W[V \ P_{≥ℓ}] in the sparse direction to B_ℓ, update the graph W_ℓ accordingly and rerun the pruning algorithm; or it returns a set of edges B′, in which case we set B_{ℓ−1} to (the preimage of) B′ and continue with the next level. Throughout the algorithm, we maintain P = P_{≥0}.

Algorithm 4: DeletePruning(e, t)
Input: The t-th update to W, i.e., e is the edge that was deleted from W_{t−1} to derive W_t.
Output: Recomputes the sets P_ℓ so that, when P = P_{≥0} is pruned, the remaining graph is an expander.
1: for ℓ ≥ 0 do Add e to D_ℓ.
2: Let j be the largest integer such that t is divisible by n^{(j−1)/L}.
3: for ℓ < j do
4:     P_j ← P_j ∪ P_ℓ
5:     B_j ← B_j ∪ B_ℓ
6:     P_ℓ ← ∅; B_ℓ ← ∅; D_ℓ ← ∅
7: for ℓ = j down to 0 do
8:     repeat
9:         W_ℓ ← ((V \ P_{≥ℓ}) ∪ {s}, E(W[V \ P_{≥ℓ}]) ∪ π^out(s, B_ℓ ∪ D_ℓ) ∪ π^in(s, B_ℓ ∪ D_ℓ))
10:        Run the algorithm from Lemma 6.5 on W_ℓ with boundary {s}, conductance φ_ℓ and z = max{1, n^{(ℓ−1)/L}/2}.
11:        if the algorithm returns a cut P′ then
12:            if δ^out_{W[V\P_{≥ℓ}]}(P′) ≤ φ_ℓ · vol_{W[V\P_{≥ℓ}]}(P′) then   // P′ is out-sparse.
13:                B_ℓ ← B_ℓ ∪ E_{W[V\P_{≥ℓ}]}(P′, (V \ P_{≥ℓ}) \ P′)
14:            else   // P′ is in-sparse.
15:                B_ℓ ← B_ℓ ∪ E_{W[V\P_{≥ℓ}]}((V \ P_{≥ℓ}) \ P′, P′)
16:            P_ℓ ← P_ℓ ∪ P′
17:    until the algorithm returns a set B′ of edges
18:    Set B_{ℓ−1} to the set of edges in B′ after the edges with tail s are mapped back by (π^in)^{−1} and the edges with head s are mapped back by (π^out)^{−1}.

Analysis.
We start the analysis by proving the following claim, which establishes the correctness of our algorithm.
Claim 6.8.
For every ℓ ≥ 0, at any stage t, after the for-loop starting in Line 7 finishes iteration ℓ + 1, the set V \ P_{>ℓ} is a near φ_{ℓ+1}-expander in W[V \ P_{>ℓ}] ∪ B_ℓ ∪ D_ℓ and remains so for the rest of the stage. Further, every invocation of the algorithm described in Lemma 6.5 in Line 10 occurs with valid parameters.
Proof. Initially, W is a φ-expander, and since every set P_ℓ is empty, the invariant is certainly satisfied after the initial stage.
Let us now take the inductive step. We first observe that, letting W_t be the graph at the current stage t and W_{t−1} the graph from the previous stage, it is clear that, since we added e to every D_ℓ, the invariant is still true after Line 1.
Let j be as chosen in Line 2. Then we have, for all levels ℓ > j, that the sets P_ℓ remain unaffected by the algorithm. Additionally, for every ℓ ≥ j, B_ℓ is monotonically increasing during the stage (in fact, for ℓ > j it remains unchanged). It is not hard to see that the invariant thus remains true for every level ℓ ≥ j. We also observe that, in the first iteration of the for-loop in Line 7, we always invoke the algorithm described in Lemma 6.5 with valid parameters, since the invariant remains true for level j.
For levels 0 ≤ ℓ < j, observe that the relevant sets P_ℓ, B_ℓ and D_ℓ are set to the empty set in the for-loop starting in Line 3. Then, for each such level ℓ, there is a loop iteration ℓ + 1 whose repeat-loop only terminates after certifying that V \ P_{≥ℓ+1} is a near φ_ℓ-expander in W[V \ P_{≥ℓ+1}] ∪ B′. The algorithm then executes Line 18 and sets B_ℓ = B′; thus the above invariant is certainly satisfied for level ℓ. The for-loop iteration for ℓ again only adds edges to B_ℓ, so the claim remains true for the rest of the stage, and in particular every time Line 10 is entered; thus the algorithm described in Lemma 6.5 is invoked with valid parameters.
In order to establish an efficient running time, it is crucial to show that the sets P_ℓ form sparse cuts at every level ℓ. We therefore first prove the following invariant, which roughly establishes that no vertex in P_ℓ is strongly connected to a vertex outside the set.
Invariant 6.9.
For any i ≥ 0, at the end of any stage t and after any for- and repeat-loop iteration of Algorithm 4, we have that
1. P_i ⊆ (V \ P_{>i}), and
2. B_i is a subset of the edges incident to at least one vertex in P_i, and
3. there exists a partition of P_i into sets P_i^out and P_i^in such that E_{W[V\P_{>i}]\B_i}(P_i^out, V \ (P_{>i} ∪ P_i^out)) = E_{W[V\P_{>i}]\B_i}(V \ (P_{>i} ∪ P_i^in), P_i^in) = ∅.
Additionally, after the for-loop starting in Line 3 terminates, the sets P_i and B_i are empty for all indices i < j.
Proof. Properties 1 and 2 are straightforward to verify from the algorithm. Let us therefore focus on Property 3, which we prove by induction on the for- and repeat-loop iterations.
In the base case, i.e., before the first execution of DeletePruning(e, t), the sets P_ℓ are initialized to the empty set, so Invariant 6.9 is vacuously true after stage 0.
Let us now take the inductive step, and start by analyzing the for-loop starting in Line 3. Let us focus on the loop iteration for ℓ = i. Here, since Invariant 6.9 was satisfied at the start of the loop, we can partition P_j into P_j^out and P_j^in with the properties described above. Similarly, we can partition P_i into P_i^out and P_i^in. Now, let us prove that Property 3 holds for P^out = P_j^out ∪ P_i^out and P^in = P_j^in ∪ P_i^in in the graph W[V \ P_{>j}] \ (B_j ∪ B_i).
For the sake of contradiction, let us assume that there is some edge leaving P^out in this graph. We certainly have that the edge cannot leave a vertex in P_j^out, since P_j^out has no outgoing edges in the graph W[V \ P_{>j}] \ B_j ⊇ W[V \ P_{>j}] \ (B_j ∪ B_i). Thus, the vertex with a leaving edge has to be in P_i^out. But there are no edges leaving P_i^out in W[V \ P_{>i}] \ B_i ⊇ W[V \ P_{>i}] \ (B_j ∪ B_i). And since the sets P_{i′} are empty for i < i′ < j, the edge must enter a vertex in P_j, and, in order to be in the cut, it can only enter P_j^in. But we have that E_{W[V\P_{>j}]\B_j}(V \ (P_{>j} ∪ P_j^in), P_j^in) = ∅, and thus we derive a contradiction. A similar argument establishes the claim for P^in. Thus, at the end of the for-loop, P_j and B_j satisfy the invariant.
To prove the second statement, we simply observe that the sets P_{i′} and B_{i′} were not touched for indices i < i′ < j, and the sets for ℓ = i are explicitly set to the empty set in the loop iteration.
For the for-loop starting in Line 7, let us consider an iteration ℓ and take the inductive step. We have by our claim that Invariant 6.9 holds after every repeat-loop iteration ends. Further, we know from the last statement of the invariant that B_{ℓ−1} is empty before the algorithm reaches Line 18, and setting B_{ℓ−1} in Line 18 cannot violate the invariant.
For the repeat-loop starting in Line 8, we have that, every time the algorithm of Lemma 6.5 computes a cut P′, we either add all out-edges or all in-edges of P′ in W[V \ P_{≥ℓ}] to B_ℓ; reusing an almost identical argument as for the for-loop starting in Line 3, we again obtain that the invariant remains satisfied, even though we add P′ to P_ℓ. This completes the proof.
Next, let us prove a simple claim that has a useful corollary.
Claim 6.10.
Whenever the algorithm enters Line 13, we have E_{W[V\P_{≥ℓ}]}(P′, (V \ P_{≥ℓ}) \ P′) ⊆ B_ℓ ∪ E_{W_ℓ[V\P_{≥ℓ}]}(P′, (V \ P_{≥ℓ}) \ P′). An analogous claim holds for Line 15.
Proof. The cut E_{W_ℓ[V\P_{≥ℓ}]}(P′, (V \ P_{≥ℓ}) \ P′) clearly contains all edges of the cut E_{W[V\P_{≥ℓ}]}(P′, (V \ P_{≥ℓ}) \ P′) except for those edges whose endpoints were mapped to s by the functions π^out and π^in, since the vertex s is excluded from the induced graphs considered above. But this implies that
E_{W[V\P_{≥ℓ}]}(P′, (V \ P_{≥ℓ}) \ P′) \ E_{W_ℓ[V\P_{≥ℓ}]}(P′, (V \ P_{≥ℓ}) \ P′) ⊆ B_ℓ ∪ D_ℓ.
Since W refers to the current graph, and D_ℓ is a subset of the edge deletions to the graph W up to the current stage, we in fact have
E_{W[V\P_{≥ℓ}]}(P′, (V \ P_{≥ℓ}) \ P′) \ E_{W_ℓ[V\P_{≥ℓ}]}(P′, (V \ P_{≥ℓ}) \ P′) ⊆ B_ℓ.
The claim follows.
Corollary 6.11.
We augment the set B_ℓ in Line 13 and Line 15 by at most φ_ℓ · vol_{W[V\P_{≥ℓ}]}(P′) edges.
Proof. This follows straightforwardly from the guarantee of the algorithm of Lemma 6.5, combined with the insight that the selected cut in the graph W is even smaller than in the graph W_ℓ that the algorithm was invoked upon, by Claim 6.10.
Next, let us argue about the size of the sets B_i and D_i. We establish the following invariant.
Invariant 6.12.
At the end of any stage t, for any i ≥ 0, letting t′ = t mod n^{i/L} and t″ = ⌊t′/n^{(i−1)/L}⌋, we have that
|D_i| ≤ t′ and |B_i| ≤ 6^i · (n^{i/L} − t″ · n^{(i−1)/L}) + φ_i · vol_{W[V\P_{>i}]∪D_i}(P_i).
In particular, we have |D_i| < n^{i/L} and |B_i| < 3 · 6^i · n^{i/L} + φ_i · vol_{W[V\P_{>i}]∪D_i}(P_i).
Proof. Let us prove the invariant by induction on the stage t.
• Base case t = 0: Observe that the invariant is initially satisfied, since all sets D_i and B_i are initialized to the empty set.
• Inductive step t − 1 → t, t > 0: Let us conduct a case analysis for the sets D_i and B_i at a level i. We distinguish the following cases:
– t is not divisible by n^{(i−1)/L}: Then we have j < i in Line 2. The algorithm therefore simply increases the set D_i by a single edge in Line 1 and does not affect the sets at level i any further. Observe that, when t is not divisible by n^{(i−1)/L}, the value t″ did not change since the last stage, and therefore all remaining bounds still hold.
– t is divisible by n^{(i−1)/L} but not by n^{i/L}: In this case, we have that j = i is chosen in Line 2. Observe that in this case t′ increases by one and, as before, we add a single edge to D_i and leave D_i untouched for the rest of the stage. However, t″ has increased by one since the last stage, so at the beginning of the stage we have
|B_i| ≤ 6^i · (n^{i/L} − (t″ − 1) · n^{(i−1)/L}) + φ_i · vol_{W[V\P_{>i}]∪D_i}(P_i)
by the induction hypothesis.
Next, observe that in the for-loop starting in Line 3, we add all B_ℓ for ℓ < j = i to B_i. However, by the induction hypothesis applied to the last stage, and the insight that the sets B_ℓ remain unchanged until this point in the algorithm, we conclude that B_i is increased by at most
Σ_{ℓ<i} (3 · 6^ℓ · n^{ℓ/L} + φ_ℓ · vol_{W[V\P_{>ℓ}]∪D_ℓ}(P_ℓ)) < 6^i · n^{(i−1)/L} + Σ_{ℓ<i} φ_ℓ · vol_{W[V\P_{>ℓ}]∪D_ℓ}(P_ℓ),
and since all P_ℓ are disjoint from P_i and pairwise disjoint (and are merged into P_i in the same loop), we have after the for-loop terminates that
|B_i| ≤ 6^i · (n^{i/L} − t″ · n^{(i−1)/L}) + φ_i · vol_{W[V\P_{>i}]∪D_i}(P_i),
i.e., the invariant is satisfied.
Finally, for the rest of the stage, B_i is only changed in the first iteration of the for-loop starting in Line 7, where, whenever some edges are added to B_i, by Corollary 6.11 the volume of P_i increases proportionally, so that the right-hand side of the bound remains larger throughout.
– t is divisible by n^{i/L}: In this case, since t is divisible by n^{i/L}, we choose j > i in Line 2. Thus, we enter the for-loop starting in Line 3 with ℓ = i, and set D_i, B_i and P_i to the empty set. Since the algorithm does not add to D_i afterwards during this stage and t′ = 0, the invariant follows for D_i. For the remaining two sets, two iterations of the for-loop starting in Line 7 are relevant: the iteration ℓ = i + 1 and the iteration ℓ = i. In the former iteration, the algorithm repeatedly invokes the algorithm from Lemma 6.5 and only leaves the repeat-loop once it finds a set B′ of size at most 2z, where z = max{1, n^{i/L}/2}, which is then used to set B_i in Line 18. It is not hard to verify that the invariant is thus satisfied at this point. The for-loop iteration with ℓ = i then ensures, by Corollary 6.11, that the invariant remains enforced.
This exhausts all cases, and thereby concludes the proof.
Using this invariant, we can further derive a straightforward upper bound on the size of the sets P_i.
Claim 6.13.
Throughout the algorithm, for any level i, we have
vol_{W[V\P_{>i}]∪D_i}(P_i) ≤ 6^{i+2} · n^{i/L} / φ_i.
Proof. Let us assume, for the sake of contradiction, that at some point of the algorithm, during some stage t and for some i ≥ 0, we have
vol_{W[V\P_{>i}]∪D_i}(P_i) > 6^{i+2} · n^{i/L} / φ_i.
We observe first that P_i is increased in size only in Line 16, and, after the violation has occurred, the set P_i is only further increased while the sets B_i and D_i remain bounded as in Invariant 6.12.
By Invariant 6.9, at the end of the stage, we can thus find P_i^out and P_i^in that form a partition of P_i such that
E_{W[V\P_{>i}]\B_i}(P_i^out, V \ (P_{>i} ∪ P_i^out)) = ∅. (10)
Now, let us assume that vol_{W[V\P_{>i}]∪D_i}(P_i^out) ≥ vol_{W[V\P_{>i}]∪D_i}(P_i^in); in particular, vol_{W[V\P_{>i}]∪D_i}(P_i^out) ≥ vol_{W[V\P_{>i}]∪D_i}(P_i)/2. Further, let us observe that, at the end of each stage, by Claim 6.8, the set V \ P_{>i} is a near φ_{i+1}-expander in W[V\P_{>i}] ∪ B_i ∪ D_i. Thus, by Definition 6.4, we have that
|E_{W[V\P_{>i}]∪B_i∪D_i}(P_i^out, V \ (P_{>i} ∪ P_i^out))| ≥ φ_{i+1} · vol_{W[V\P_{>i}]∪D_i}(P_i^out). (11)
But equations (10) and (11) imply that B_i ∪ D_i has size at least φ_{i+1} · vol_{W[V\P_{>i}]∪D_i}(P_i^out). However, by Invariant 6.12, we have for i = 0 that B_i ∪ D_i has size 0, which gives a contradiction, and for i > 0 that, at the end of the stage, the size is bounded by
|B_i ∪ D_i| ≤ 4 · 6^i · n^{i/L} + φ_i · vol_{W[V\P_{>i}]∪D_i}(P_i) ≤ 2 · φ_i · vol_{W[V\P_{>i}]∪D_i}(P_i),
where we use in the last inequality that vol_{W[V\P_{>i}]∪D_i}(P_i) ≥ 4 · 6^i · n^{i/L} / φ_i. But, since φ_i < φ_{i+1}/96, we have that
|B_i ∪ D_i| ≤ 2 · φ_i · vol_{W[V\P_{>i}]∪D_i}(P_i) < φ_{i+1} · vol_{W[V\P_{>i}]∪D_i}(P_i)/2 ≤ φ_{i+1} · vol_{W[V\P_{>i}]∪D_i}(P_i^out).
Thus, we have derived a contradiction on the size of the set B_i ∪ D_i. The case where vol_{W[V\P_{>i}]∪D_i}(P_i^out) < vol_{W[V\P_{>i}]∪D_i}(P_i^in) can be established analogously.
Finally, we can prove Theorem 6.1, which is restated below for convenience.
Theorem 6.1 (Directed Expander Pruning). There is a deterministic algorithm with the following input: a directed unweighted decremental multi-graph W = (V, E) with n vertices and m edges that is initially a φ-expander, and a parameter L ≥ 1. The algorithm maintains an incremental set P ⊆ V(W) using Õ(m · n^{1/L} / γ_L(φ)) total update time such that, for P̄ = V \ P, we have that W[P̄] is a γ_L(φ)-expander and vol_W(P) ≤ O(t · n^{1/L} / γ_L(φ)) after t updates, where γ_L(φ) = φ^{O(L)}.
Proof. Correctness of the algorithm follows from Claim 6.8 and Invariant 6.12: the former states that, after each stage, V \ P_{>ℓ} is a near φ_{ℓ+1}-expander in W[V\P_{>ℓ}] ∪ B_ℓ ∪ D_ℓ; in particular, for ℓ = 0, we have that V \ P_{>0} is a near φ_1-expander in W[V\P_{>0}] ∪ B_0 ∪ D_0, while the latter states that B_0 and D_0 are empty sets. Thus, W[V\P_{>0}] is a φ_1-expander and therefore certainly a γ_L(φ)-expander.
For the running time, we observe that the invocations of the algorithm from Lemma 6.5 dominate the cost of the for-loop starting in Line 7. This follows since we can construct W_ℓ straightforwardly from W within the same running time as the algorithm from Lemma 6.5, and, afterwards, updating the sets P_ℓ, B_ℓ and B_{ℓ−1} can easily be done in the time that the algorithm requires to output these sets. The running time outside of the for-loop can be at most a factor L_max larger than the time spent in the loop (plus O(m)), since we move every item in a set P_ℓ or B_ℓ eventually to a higher level.
But there are at most L_max levels.
Therefore, let us bound the running time of the for-loop iterations. Let us fix a level ℓ and focus on the total time spent in the for-loop on iterations for ℓ. We observe that the set P_ℓ is monotonically increasing between stages that are divisible by n^{ℓ/L}, but bounded in volume by Claim 6.13. But every time the algorithm from Lemma 6.5 runs and finds a cut P′, we add Ω(φ_ℓ · n^{(ℓ−1)/L}) to the volume of P_ℓ. Thus, there can be at most
(m / n^{ℓ/L}) · (6^{ℓ+2} · n^{ℓ/L} / φ_ℓ) / Ω(φ_ℓ · n^{(ℓ−1)/L}) = O(6^ℓ · m / (φ_ℓ² · n^{(ℓ−1)/L}))
invocations of the algorithm where a cut P′ is reported. On the other hand, since we enter the for-loop for ℓ only every n^{(ℓ−1)/L} stages, there can also only be a total of O(m / n^{(ℓ−1)/L}) invocations ending in a set of edges B′, since we leave the repeat-loop once such a set is obtained. We further observe that every invocation of the algorithm runs in time O(|B_ℓ ∪ D_ℓ| · log n / φ_ℓ), since there are at most two boundary edges in W_ℓ for every edge in B_ℓ ∪ D_ℓ. But, by Invariant 6.12 and Claim 6.13, the set B_ℓ ∪ D_ℓ never exceeds size O(6^{ℓ+2} · n^{ℓ/L}). Thus, each invocation runs in time Õ(6^ℓ · n^{ℓ/L} / φ_ℓ). The total running time now follows straightforwardly by summing over the levels, multiplying by the factor L_max, and plugging in the values φ_ℓ.
Finally, to prove the claim on the volume of P, let j′ be the smallest index such that t < n^{(j′−1)/L}. Then, we observe that in Line 2 of Algorithm 4 we have never chosen j ≥ j′ in the current or any previous stage. But this implies that every set P_{j″} for j″ ≥ j′ has not been changed since the initialization of the algorithm, and is therefore empty. Thus, the total volume of P = P_{≥0} can be upper bounded, by this insight and Claim 6.13, by
Σ_{j″<j′} 6^{j″+2} · n^{j″/L} / φ_{j″} = O(6^{j′+2} · n^{(j′−1)/L} / φ_0) = O(t · n^{1/L} / γ_L(φ)),
where we use that t ≥ n^{(j′−2)/L} by the minimality of j′. This completes the proof.
We now turn to the cut-matching game. In each round i of the game, the cut player chooses two disjoint subsets A_i and B_i of V with |A_i| = |B_i|; then, the matching player chooses two directed (fractional) perfect matchings −→M_i and ←−M_i that match vertices from A_i to B_i and back. Then, we set W ← W ∪ −→M_i ∪ ←−M_i and proceed with round i + 1.
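The round structure of the game just described can be sketched as a small driver loop. Both players are passed in as callables, and all names are illustrative rather than the paper's implementation:

```python
def cut_matching_game(cut_player, matching_player):
    """Skeleton of the cut-matching game loop.  cut_player(W) returns
    ('cut', (A_i, B_i)) for a normal round, or ('last', (A_i, B_i)) for
    the final round; matching_player(A_i, B_i) returns two directed
    perfect matchings as sets of arcs.  All names are hypothetical."""
    W = set()                                   # union of all matchings so far
    while True:
        kind, (A_i, B_i) = cut_player(W)
        M_fwd, M_rev = matching_player(A_i, B_i)
        W |= set(M_fwd) | set(M_rev)            # W <- W ∪ M_i(fwd) ∪ M_i(rev)
        if kind == 'last':                      # after the final round, W is
            return W                            # returned as the union graph
```

The point of the abstraction is that the expansion guarantee is forced on the adversarial matching player purely through the cut player's choice of (A_i, B_i).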
We call this process a cut-matching game.
For any number d ≥ 1, we say that an edge is 1/d-integral if its weight is a non-negative multiple of 1/d. A fractional matching or a graph is 1/d-integral if it consists of only 1/d-integral edges.
Theorem 7.1 (Deterministic Cut-Matching Game for Directed Graphs). Suppose that, for every i, −→M_i and ←−M_i are 1/d-integral for some integer d ≥ 1. There is a deterministic algorithm for the cut player that takes Ô(nd) time to output each (A_i, B_i) in the cut-matching game, such that after R = O(log n) rounds, W = (−→M_1 ∪ ←−M_1) ∪ ··· ∪ (−→M_R ∪ ←−M_R) must be an α_cmg-expander, where α_cmg = 1/n^{o(1)} is a parameter that we will refer to in other parts of the paper. Moreover, the weighted in-degree and out-degree of each vertex in W is at least 1.
Theorem 7.1 is proved by showing that the fast deterministic cut-matching game in undirected graphs by Chuzhoy et al. [CGL+20] can be generalized to directed graphs. The only crucial new ingredient is in the analysis based on the entropy function.
We review the previous work on the cut-matching game below. The framework was first introduced by Khandekar, Rao and Vazirani [KRV09] and has been used in numerous algorithms for computing sparse cuts [KRV09, NS17, SW19, GLN+19] and beyond (e.g. [CC13, RST14, CC16, CL16]). There is also a line of work which focuses on the quality of the cut-matching game itself (i.e., the guarantee of the cut player); we review it below and describe our contribution. For simplicity, we assume that d = 1.
• (Undirected Matching Player, Randomized Cut Player): The first work is by Khandekar, Rao and Vazirani [KRV09]. They require the matching player to choose an undirected perfect matching M_i at each round i. Then, they show a randomized algorithm for the cut player that takes O(n log n) time in each round i to output (A_i, B_i) and guarantees that, after R = O(log² n) rounds, Ψ(W) ≥ Ω(1) and so Φ(W) ≥ Ω(1/log² n). Then, Orecchia et al. [OSVV08] show a slower randomized algorithm which takes Õ(n²) time per round, but after R = O(log² n) rounds they improve the sparsity guarantee to Ψ(W) ≥ Ω(log n).
• (Directed Matching Player, Randomized Cut Player): Louis [Lou10] generalizes the result by [KRV09] and shows that, even when the matching player gives two directed perfect matchings −→M_i and ←−M_i, there is a randomized algorithm for the cut player with the same guarantee as in [KRV09]. As every undirected matching M_i can be thought of as two directed matchings −→M_i and ←−M_i such that (u, v) ∈ −→M_i iff (v, u) ∈ ←−M_i, this setting of directed matchings is a strict generalization.
• (Undirected Matching Player, Deterministic Cut Player): In an attempt to reduce the number of rounds to O(log n), Khandekar et al. [KKOV07] show that, when the matching player chooses an undirected perfect matching M_i at each round i, there is a deterministic exponential-time algorithm for the cut player (by simply finding a sparsest cut in W_{i−1}). Then, after R = O(log n) rounds, they guarantee Ψ(W) ≥ Ω(1). The novel component of this work is the potential analysis based on entropy. Later, it is observed in [GLN+19] that finding approximate sparsest cuts also works: they show a deterministic Õ(n)-time algorithm for the cut player where Ψ(W) ≥ 1/log^{O(1)} n after R = O(log n) rounds. Finally, Chuzhoy et al. [CGL+20] give a deterministic Ô(n)-time algorithm for the cut player where Ψ(W) ≥ 1/n^{o(1)} after R = O(log n) rounds. This in turn implies a wide range of applications in undirected graphs. We note that both [GLN+19, CGL+20] use the same potential analysis based on entropy.
We can see that, in contrast to Theorem 7.1, all previous cut-player algorithms either are randomized, or require undirected matchings, or both.
We describe our cut-player algorithm in Section 7.1. The idea for proving Theorem 7.1 is to generalize two components of the previous works, and then to combine the two.
First, we generalize the deterministic Ô(n)-time implementation of the cut player by Chuzhoy et al. [CGL+20]. Although [CGL+20] was stated for undirected graphs, most of the tools from [CGL+20] readily generalize to directed graphs. We sketch how to do this in Section 7.3.
Second, we generalize the potential analysis based on entropy by Khandekar et al. [KKOV07] to work with directed matchings. Although the idea is similar, our analysis is more involved. At a very high level, the reason is that, while each undirected matching M_i can be viewed as a collection of directed cycles of length 2 (and hence a direct calculation by hand is possible), the union −→M_i ∪ ←−M_i of two directed matchings can be a collection of directed cycles of arbitrary length. The details of our analysis are shown in Section 7.2.
Preliminaries about Sparsity of Cuts. In this section, it is more convenient to work with the notion of sparsity instead of conductance. Sparsity measures the expansion of a cut like conductance, but, for sparsity, we compare the cut size to the number of vertices in the cut.
Definition 7.2 (Sparsity). A directed weighted graph G = (V, E) has sparsity Ψ(G) ≥ ψ if, for any set S ⊂ V where |S| ≤ |V \ S|, min{δ^in(S), δ^out(S)} ≥ ψ · |S|. The sparsity of a cut (S, V \ S) is Ψ_G(S) = min{δ^in(S), δ^out(S)} / min{|S|, |V \ S|}.
Note that, in a graph with maximum weighted degree d, we have Φ(G) ≤ Ψ(G) ≤ d · Φ(G). Also, Ψ(H) ≤ Ψ(G) for any subgraph H of G.
To describe the algorithm of the cut player for Theorem 7.1, we need the following subroutine:
Theorem 7.3.
There is a deterministic algorithm, which we call CutOrCertify, that, given a directed n-vertex 1/d-integral graph G = (V, E) with maximum weighted degree O(log n), returns one of the following:
• either a cut (A, B) in G such that |A|, |B| ≥ n/10 and w(E_G(A, B)) ≤ n/100; or
• a subset S ⊂ V of at least n/2 vertices with Ψ(G[S]) ≥ 1/γ.
The running time of the algorithm is O(ndγ), where γ = n^{o(1)}.
As this subroutine is the generalization of Theorem 1.5 of [CGL+20] to directed weighted graphs, and almost all tools are readily generalized, we only sketch the proof, for completeness, in Section 7.3. Now, we describe the algorithm of the cut player for Theorem 7.1, which is a generalization of the algorithm in [KKOV07] to directed graphs. Initialize W_0 = ∅ as an n-vertex empty graph, starting from i = 1. While the algorithm CutOrCertify running on W_{i−1} returns a cut (A, B) where w(E_{W_{i−1}}(A, B)) ≤ n/100 and |A|, |B| ≥ n/10, we do the following. Let A_i and B_i be arbitrary subsets where |A_i| = |B_i| ≥ n/10 such that (A_i, B_i) does not cross (A, B) (i.e., either A ⊆ A_i or B ⊆ B_i). Then, the matching player gives us two directed 1/d-integral perfect matchings −→M_i and ←−M_i that match vertices from A_i to B_i and back. Then, W_i ← W_{i−1} ∪ −→M_i ∪ ←−M_i. This finishes round i. Then, we set i ← i + 1.
Otherwise, CutOrCertify returns a subset S ⊆ V of at least n/2 vertices where Ψ(W_{i−1}[S]) ≥ 1/γ. Now, we perform the last round. Let T ⊆ S be an arbitrary set where |T| = |V \ S|. The cut player chooses A_i and B_i by setting (A_i, B_i) ← (V \ S, T). Then, the matching player again gives us the perfect matchings −→M_i and ←−M_i. Finally, we set W_i ← W_{i−1} ∪ −→M_i ∪ ←−M_i and terminate. Let W = W_i denote the graph after the last iteration.
Now, we are ready to prove Theorem 7.1. First, we bound the number of rounds:
Lemma 7.4. There are at most O(log n) rounds in the above process.
The proof of Lemma 7.4 is the main contribution of this section and is shown later in Section 7.2. Next, we claim that after the process has terminated, Ψ(W) ≥ Ω(1/γ). This follows because Ψ(W) ≥ Ψ(W_{i−1}[S] ∪ →M_i ∪ ←M_i) ≥ Ω(1/γ), where the last inequality is by the following observation (which is a generalization of Observation 2.3 in [CGL+20]):

Proposition 7.5. Let G = (V, E) be an n-vertex (weighted) graph where Ψ(G) ≥ ψ, and let G′ be another graph that is obtained from G by adding to it a new set V′ of at most n vertices, and two perfect (fractional) matchings →M and ←M, matching vertices from V′ to another set V′′ ⊆ V and vice versa, where |V′′| = |V′|. Then Ψ(G′) = Ω(ψ).

As the weighted degree of each vertex in W is at most O(log n), we have that Φ(W) ≥ Ψ(W)/O(log n) ≥ Ω(1/(γ log n)) = 1/n^{o(1)}. Observe further that the weighted in-degree and out-degree of each vertex in W is at least 1. To see this, consider W_{i−1} before the last round. Observe that the weighted in-degree and out-degree of each vertex is integral, because W_{i−1} is a union of perfect matchings. However, if a vertex u has either zero in-degree or zero out-degree in W_{i−1}, then u cannot be in the set S where Ψ(W_{i−1}[S]) ≥ 1/γ. But the perfect matchings →M_i and ←M_i in the last round must contribute exactly 1 to both the weighted in-degree and out-degree of u. Therefore, we conclude that, in each round, the cut player takes O(ndγ) = Ô(nd) time. After O(log n) rounds, W is a 1/n^{o(1)}-expander and each vertex in W has weighted in-degree and out-degree at least 1. This completes the proof of Theorem 7.1.

7.2 Bounding the Number of Rounds

We prove Lemma 7.4 in this section. Consider the following process. Initially, each vertex u has a unit of mass placed at u itself. At round i, we are given the 1/d-integral perfect matchings →M_i and ←M_i. Observe that →M_i is the average of exactly d integral perfect matchings →M_{i,1}, . . .
, →M_{i,d}. Similarly, ←M_i is the average of ←M_{i,1}, . . . , ←M_{i,d}. Let D_i be a uniformly random number from {1, . . . , d}. The mass on each vertex is distributed as follows:

• For each u ∈ A_i ∪ B_i, 1/2 of the mass at u stays at u and 1/2 of the mass at u is sent to v, where (u, v) is the unique outgoing edge of u in →M_{i,D_i} ∪ ←M_{i,D_i}.

• For each u ∈ V \ (A_i ∪ B_i), all of the mass at u stays at u.

Observe that, at round i, the mass is moved only between A_i and B_i, and there is exactly 1 unit of mass on every vertex after each round. Let p_i(u, v) denote the expected mass that starts from u and ends at v after the i-th round. From the above process, we have that p_0(u, u) = 1 for all u ∈ V and p_0(u, v) = 0 for all u ≠ v. Observe that 0 ≤ p_i(u, v) ≤ 1 for all u, v, i, and that Σ_{v∈V} p_i(u, v) = 1 and Σ_{u∈V} p_i(u, v) = 1. Let →P_i(u) denote the random variable where Pr[→P_i(u) = v] = p_i(u, v) for all v ∈ V, i.e., the distribution of →P_i(u) is the distribution of the mass starting from u after the i-th round. Similarly, let ←P_i(v) denote the random variable where Pr[←P_i(v) = u] = p_i(u, v) for all u ∈ V. That is, the distribution of ←P_i(v) is the distribution, over starting vertices, of the mass that ends at v after the i-th round. For any distribution X = (x_1, . . . , x_n) where p(x) = Pr[X = x], the entropy of X is H(X) = Σ_x p(x) log(1/p(x)). The potential after round i is defined as

Φ_i = Σ_{u∈V} [ H(→P_i(u)) + H(←P_i(u)) ].

From the definition of entropy, observe the following simple fact:

Proposition 7.6. Φ_0 = 0 and Φ_i ≤ O(n log n) for all i.

Our main goal is to show that after each round i, we have Φ_i ≥ Φ_{i−1} + Ω(n), so there can be only O(log n) rounds. We will show that this is true even if D_i is fixed. We formalize this below. Let Z be a random variable. The entropy of X conditioned on the value Z = z is defined as H(X | Z = z) = Σ_x p(x | z) log(1/p(x | z)).
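The mass-spreading step and the potential above can be sketched in a few lines, for a fixed D_i (so the matchings are integral and the step acts cycle by cycle); `entropy` and `spread_round` are our own helper names, not from the paper.

```python
import math

def entropy(dist):
    """H(X) = sum_x p(x) log2(1/p(x)), skipping zero-probability outcomes."""
    return sum(p * math.log2(1 / p) for p in dist.values() if p > 0)

def spread_round(p, cycle):
    """One round on a single directed cycle (c_1, ..., c_|C|):
    p'(u, c_j) = (p(u, c_j) + p(u, c_{j-1})) / 2, with c_0 = c_|C|.
    `p` maps each vertex to the mass of a fixed origin u sitting there;
    vertices off the cycle keep their mass."""
    q = dict(p)
    for j, c in enumerate(cycle):
        q[c] = (p.get(c, 0.0) + p.get(cycle[j - 1], 0.0)) / 2
    return q
```

Starting from all of u's mass on one vertex of a 4-cycle, each round preserves the total mass while the entropy H(→P_i(u)) never decreases, which is what the analysis below establishes in general (Lemma 7.9).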
It is well-known that fixing some random variable never increases the entropy:

Fact 7.7. H(X | Z = z) ≤ H(X).

Let Φ_{i,z} = Σ_{u∈V} [ H(→P_i(u) | D_i = z) + H(←P_i(u) | D_i = z) ]. As Φ_{i,z} ≤ Φ_i by the above fact, we can bound the number of rounds by O(log n), proving Lemma 7.4, once we prove the following:

Lemma 7.8. Φ_{i,z} ≥ Φ_{i−1} + Ω(n) for any z ∈ [d].

As our goal is to lower bound Φ_{i,z} for every z, from now on we will assume that D_i = z is fixed for some z. For notational convenience, below we will assume →M_i = →M_{i,z} and ←M_i = ←M_{i,z} and avoid writing "given D_i = z" in the expressions. As i will be fixed below, we also write p_{i−1}, →P_{i−1}, ←P_{i−1} as p, →P, ←P respectively, and write p_i, →P_i, ←P_i as p′, →P′, ←P′ respectively. For any sets S, T ⊆ V, we define p(S, T) = Σ_{u∈S, v∈T} p(u, v), and p′(S, T) is defined similarly. As →M_i and ←M_i are now assumed to be integral, →M_i ∪ ←M_i forms a collection C of disjoint directed cycles that partition A_i ∪ B_i. The indices of the vertices in each cycle C = (c_1, . . . , c_{|C|}) ∈ C are such that c_1, c_3, c_5, . . . , c_{|C|−1} ∈ A_i and c_2, c_4, c_6, . . . , c_{|C|} ∈ B_i. In particular, |C| is even. How the mass moves in round i can be described as follows: for every C = (c_1, . . . , c_{|C|}) ∈ C, u ∈ V, and 1 ≤ j ≤ |C|,

p′(u, c_j) = (p(u, c_j) + p(u, c_{j−1})) / 2,

where we define c_0 = c_{|C|}. Observe that p′(u, C) = p(u, C). First, we show that the entropy never decreases.

Lemma 7.9. For all u ∈ V, H(→P′(u)) ≥ H(→P(u)) and H(←P′(u)) ≥ H(←P(u)).

Proof. We will prove that H(→P′(u)) ≥ H(→P(u)) for all u. The proof for H(←P′(u)) ≥ H(←P(u)) is symmetric. Fix u from now on. For each cycle C ∈ C, let H_C(→P(u)) = Σ_{v∈C} p(u, v) log(1/p(u, v)) be the sum of the terms in H(→P(u)) restricted to only the vertices in C.
Similarly, we let H_C(→P′(u)) = Σ_{v∈C} p′(u, v) log(1/p′(u, v)). It suffices to show that H_C(→P′(u)) ≥ H_C(→P(u)) for each C ∈ C. Fix C from now on. Recall the binary entropy function h(x) = x log(1/x) + (1 − x) log(1/(1 − x)), where h : [0, 1] → [0, 1].

Claim 7.10. H_C(→P′(u)) + Σ_{j=1}^{|C|} p′(u, c_j) · h( p(u, c_j) / (p(u, c_j) + p(u, c_{j−1})) ) = H_C(→P(u)) + p(u, C).

Proof. We have

H_C(→P′(u)) + Σ_{j=1}^{|C|} ((p(u, c_j) + p(u, c_{j−1}))/2) · h( p(u, c_j) / (p(u, c_j) + p(u, c_{j−1})) )
= H_C(→P′(u)) + Σ_{j=1}^{|C|} [ (p(u, c_j)/2) · log( (p(u, c_j) + p(u, c_{j−1})) / p(u, c_j) ) + (p(u, c_{j−1})/2) · log( (p(u, c_j) + p(u, c_{j−1})) / p(u, c_{j−1}) ) ]
= Σ_{j=1}^{|C|} (p(u, c_j)/2) · log( 2 / p(u, c_j) ) + Σ_{j=1}^{|C|} (p(u, c_{j−1})/2) · log( 2 / p(u, c_{j−1}) )
= Σ_{j=1}^{|C|} p(u, c_j) · ( log(1/p(u, c_j)) + 1 )
= H_C(→P(u)) + p(u, C).

So, it remains to show that Σ_{j=1}^{|C|} p′(u, c_j) · h( p(u, c_j) / (p(u, c_j) + p(u, c_{j−1})) ) ≤ p(u, C). To show this, let Y be the random variable where

Pr[ Y = p(u, c_j) / (p(u, c_j) + p(u, c_{j−1})) ] = p′(u, c_j) / p′(u, C).

Observe that E(h(Y)) = Σ_{j=1}^{|C|} (p′(u, c_j)/p′(u, C)) · h( p(u, c_j) / (p(u, c_j) + p(u, c_{j−1})) ) and E(Y) = Σ_{j=1}^{|C|} (p′(u, c_j)/p′(u, C)) · p(u, c_j) / (p(u, c_j) + p(u, c_{j−1})) = 1/2. By Jensen's inequality, we have E(h(Y)) ≤ h(E(Y)) = h(1/2) = 1. So Σ_{j=1}^{|C|} p′(u, c_j) · h( p(u, c_j) / (p(u, c_j) + p(u, c_{j−1})) ) ≤ p′(u, C) = p(u, C), as desired. This completes the proof of Lemma 7.9.

Lemma 7.9 already implies that Φ_{i,z} ≥ Φ_{i−1}. Next, to show that the potential increase is Ω(n), we need to exploit the fact that the cut (A, B) is a sparse cut. More precisely, let (A, B) be a cut of W_{i−1} returned by Theorem 7.3, where w(E_{W_{i−1}}(A, B)) ≤ n/100 and |A|, |B| ≥ n/10.
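As a numerical sanity check of Claim 7.10 and the Jensen step above (log base 2 throughout; the helper names are ours):

```python
import math

def h(x):
    """Binary entropy h : [0,1] -> [0,1], with h(0) = h(1) = 0."""
    if x in (0.0, 1.0):
        return 0.0
    return x * math.log2(1 / x) + (1 - x) * math.log2(1 / (1 - x))

def claim_7_10_sides(p):
    """p[j-1] = p(u, c_j) around a cycle c_1..c_m; return (LHS, RHS) of Claim 7.10."""
    m = len(p)
    pp = [(p[j] + p[j - 1]) / 2 for j in range(m)]            # p'(u, c_j); p[-1] wraps, c_0 = c_m
    H_new = sum(q * math.log2(1 / q) for q in pp if q > 0)    # H_C(P'(u))
    jensen = sum(pp[j] * h(p[j] / (p[j] + p[j - 1]))
                 for j in range(m) if p[j] + p[j - 1] > 0)
    H_old = sum(q * math.log2(1 / q) for q in p if q > 0)     # H_C(P(u))
    return H_new + jensen, H_old + sum(p)                     # RHS adds p(u, C)
```

Both sides agree for arbitrary non-negative mass values (the identity is purely algebraic and does not need Σp = 1), and the correction term is at most p(u, C), matching the bound E(h(Y)) ≤ h(1/2) = 1.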
Recall that we choose A_i and B_i where |A_i| = |B_i| = n/2 such that (A_i, B_i) does not cross (A, B). Suppose that |A| ≤ |B|. We will show that Σ_{u∈V} H(→P_i(u)) ≥ Σ_{u∈V} H(→P_{i−1}(u)) + Ω(n). If |A| ≥ |B|, we can show that Σ_{u∈V} H(←P_i(u)) ≥ Σ_{u∈V} H(←P_{i−1}(u)) + Ω(n) by symmetry. So we will assume |A| ≤ |B| from now on. As |A| ≤ |B|, we can choose (A_i, B_i) such that A ⊆ A_i and B_i ⊆ B. Observe that each 1/d-integral edge e ∈ W_{i−1} has mass going through it exactly once, with amount 1/(2d) = w(e)/2. As w(E_{W_{i−1}}(A, B)) ≤ n/100, we have p(A, B) ≤ n/200. As B_i ⊆ B, we have p(A, B_i) ≤ n/200 ≤ |A|/20. By an averaging argument, there are at least |A|/2 ≥ n/20 vertices u ∈ A such that p(u, B_i) ≤ 1/10 (otherwise, p(A, B_i) > (|A|/2) · (1/10) = |A|/20, which is a contradiction). We call these vertices in A interesting vertices. Note that, for each interesting u ∈ A, we have p(u, A_i) = p(u, V) − p(u, B_i) ≥ 9/10. Consider the collection C of cycles formed by →M_i ∪ ←M_i. We say that a cycle C ∈ C is good (w.r.t. u) if p(u, A_i ∩ C) ≥ 2 · p(u, B_i ∩ C), and bad otherwise. Observe the following:

Proposition 7.11. For every interesting vertex u ∈ A, Σ_{C good} p(u, A_i ∩ C) ≥ 1/2.

Proof. For each v ∈ A_i, there is a unique cycle from C containing v. So Σ_{C∈C} p(u, A_i ∩ C) = p(u, A_i) ≥ 9/10. Assume for contradiction that Σ_{C good} p(u, A_i ∩ C) < 1/2. Then, we have

p(u, B_i) ≥ Σ_{C bad} p(u, B_i ∩ C) > Σ_{C bad} p(u, A_i ∩ C)/2 > (9/10 − 1/2)/2 = 1/5 > 1/10.

But u is interesting, so p(u, B_i) ≤ 1/10, which is a contradiction.

Lemma 7.12. For every interesting vertex u ∈ A and good cycle C w.r.t. u, H_C(→P′(u)) ≥ H_C(→P(u)) + Ω(p(u, C)).

Proof. The proof is an extension of that of Lemma 7.9. Let C = (c_1, . . . , c_{|C|}). Recall that H_C(→P′(u)) + Σ_{j=1}^{|C|} p′(u, c_j) · h( p(u, c_j) / (p(u, c_j) + p(u, c_{j−1})) ) = H_C(→P(u)) + p(u, C).
It suffices to prove that Σ_{j=1}^{|C|} p′(u, c_j) · h( p(u, c_j) / (p(u, c_j) + p(u, c_{j−1})) ) ≤ (1 − Ω(1)) · p(u, C). Let Z be a random variable defined similarly to the random variable Y from Lemma 7.9. For odd 1 ≤ j ≤ |C|, we set

Pr[ Z = p(u, c_j) / (p(u, c_j) + p(u, c_{j−1})) ] = p′(u, c_j) / p′(u, C),

and, for even 1 ≤ j ≤ |C|, we set

Pr[ Z = p(u, c_{j−1}) / (p(u, c_j) + p(u, c_{j−1})) ] = p′(u, c_j) / p′(u, C).

Observe that E(Z) = Σ_{j odd} p(u, c_j) / p′(u, C) = p(u, A_i ∩ C) / p(u, C) ≥ 2/3, as C is good. Recall from Lemma 7.9 that (1/p(u, C)) Σ_{j=1}^{|C|} p′(u, c_j) · h( p(u, c_j) / (p(u, c_j) + p(u, c_{j−1})) ) = E(h(Y)). However, as h( p(u, c_{j−1}) / (p(u, c_j) + p(u, c_{j−1})) ) = h( p(u, c_j) / (p(u, c_j) + p(u, c_{j−1})) ) for any j (since h(x) = h(1 − x)), we have that E(h(Y)) = E(h(Z)). By Jensen's inequality, we have E(h(Z)) ≤ h(E(Z)) ≤ h(2/3) ≤ 1 − Ω(1). Therefore, we conclude that

(1/p(u, C)) Σ_{j=1}^{|C|} p′(u, c_j) · h( p(u, c_j) / (p(u, c_j) + p(u, c_{j−1})) ) ≤ 1 − Ω(1).

This completes the proof of Lemma 7.12.

Finally, we summarize the argument above and prove Lemma 7.8. Recall that we assume that the cut (A, B) on W_{i−1} found by Theorem 7.3 is such that |A| ≤ |B|. Then, we have shown that there are at least n/20 interesting vertices. For each interesting vertex u ∈ A, combining Proposition 7.11 and Lemma 7.12, we have

H(→P′(u)) ≥ H(→P(u)) + Σ_{C good} Ω(p(u, C)) = H(→P(u)) + Ω(1).

As H(→P′(u)) ≥ H(→P(u)) and H(←P′(u)) ≥ H(←P(u)) for all u ∈ V by Lemma 7.9, we have Φ_{i,z} ≥ Φ_{i−1} + n · Ω(1). If |A| ≥ |B|, the proof is symmetric. We choose (A_i, B_i) such that A_i ⊆ A, and so p(A_i, B) ≤ n/200 ≤ |B|/20. We say that a vertex u ∈ B is interesting if p(A_i, u) ≤ 1/10. There must be at least |B|/2 ≥ n/20 interesting vertices by the same argument. We say that a cycle C ∈ C is good (w.r.t.
u) if p(B_i ∩ C, u) ≥ 2 · p(A_i ∩ C, u), and we can prove that Σ_{C good} p(B_i ∩ C, u) ≥ 1/2 for every interesting u. We also have H_C(←P′(u)) ≥ H_C(←P(u)) + Ω(p(C, u)) for every good cycle C. All of this implies that Φ_{i,z} ≥ Φ_{i−1} + n · Ω(1) as well. This completes the proof of Lemma 7.8, which in turn proves Lemma 7.4.

7.3 Proof Sketch of Theorem 7.3

In this section, we sketch the proof of Theorem 7.3. First, we state the version of Theorem 7.3 for unweighted graphs only.

Theorem 7.13. There is a deterministic algorithm that, given a directed n-vertex unweighted graph G = (V, E) with maximum degree O(log n), returns one of the following:

• either a cut (A, B) in G such that |A|, |B| ≥ n/10 and |E_G(A, B)| ≤ n/100; or

• a subset S ⊂ V of at least n/2 vertices such that Ψ(G[S]) ≥ 1/γ.

The running time of the algorithm is O(nγ) where γ = n^{o(1)}.

Theorem 7.3 follows from Theorem 7.13. Given Theorem 7.13 above, the proof of Theorem 7.3 is quite straightforward. There are two steps: (1) making the graph unweighted, and (2) reducing the maximum degree. For the first step, as the input graph G of Theorem 7.3 is 1/d-integral, we can scale up all 1/d-integral edges to unweighted edges. Let G′ denote the resulting graph. As the weighted minimum and maximum in-degree/out-degree in G are 1 and O(log n) respectively, G′ has O(nd log n) unweighted edges and has minimum and maximum in-degree/out-degree d and O(d log n), respectively. For the second step, we apply the standard "degree reduction" technique (see Section 5.2 of [CGL+20]) to G′ and obtain G′′. The idea for obtaining G′′ is to replace each vertex of G′ by a constant-degree expander with O(d log n) vertices. It is easy to show that, when we call Theorem 7.13 on G′′, we can obtain a corresponding cut in G′ as an output of Theorem 7.3, with the same balance and sparsity, in linear time. This argument is formally shown in Lemma 5.4 of [CGL+20].

Proof of Theorem 7.13.
Theorem 7.13 is exactly the directed-graph version of Theorem 1.5 from [CGL+20]. The proof of Theorem 1.5 in [CGL+20] only needs the techniques from Sections 3 and 4 of [CGL+20], so we describe how to modify the arguments in those two sections. The modification is as follows:

• Section 3 of [CGL+20] describes algorithms that, given sets of vertices A_1, . . . , A_k and B_1, . . . , B_k where |A_i| = |B_i|, either compute an embedding of matchings between A_i and B_i for all i with some small number of fake edges, or return a balanced sparse cut. Their first algorithm is based on the Even-Shiloach tree and their second algorithm is based on a push-relabel flow algorithm. As both algorithms readily work on directed graphs, the statements of their results in Section 3 can be generalized to directed graphs without technical modification.

• Section 4 of [CGL+20] describes a recursive algorithm for the undirected version of Theorem 7.13. We need three simple modifications. First, they employ the undirected expander pruning from [SW19] to identify the large vertex set S where Ψ(G[S]) ≥ 1/n^{o(1)}. We can replace this subroutine in a black-box manner with our directed expander pruning from Theorem 6.1 (where all the edge deletions are given in one batch). As the quality and running time of Theorem 6.1 for directed graphs are only an n^{o(1)} factor worse than those of the algorithm of [SW19] for undirected graphs, this only affects our final guarantee in Theorem 7.13 by an n^{o(1)} factor. The second modification is the following. The algorithm in Section 4 of [CGL+20] uses the simple observation that a union of sparse cuts is also sparse. While this is true for undirected graphs, it is not true in directed graphs, because a directed cut can be sparse either because of few out-going edges or because of few in-coming edges. Fortunately, we can show that there is a large subset of the union whose sparsity is larger by at most a constant factor. This is formally stated and proved below in Proposition 7.14. Lastly, the recursive algorithm in Section 4 of [CGL+20] needs the cut-matching game of Khandekar et
al. [KKOV07], which works only for undirected graphs (i.e., the matching player inserts undirected matchings). But we have generalized the analysis of this cut-matching game to work even when the matching player inserts directed matchings, in Section 7.2. With these three technical modifications, we can prove Theorem 7.13 by following the same steps as the algorithm shown in Section 4 of [CGL+20].

Proposition 7.14. Let G_1, G_2, . . . , G_k be a sequence of weighted directed graphs obtained by the following process. For each i, there is a set S_i ⊂ V(G_i) such that G_{i+1} = G_i[V(G_i) \ S_i], |S_i| ≤ |V(G_i)|/2, and Ψ_{G_i}(S_i) ≤ ψ. Suppose |∪_i S_i| ≤ |V(G_1)|/2. Then, there is a set S ⊆ ∪_i S_i where |S| ≥ |∪_i S_i|/2 such that Ψ_{G_1}(S) ≤ 3ψ.

Proof. For each i, we say that S_i is out-sparse if w(E(S_i, V(G_i) \ S_i)) ≤ ψ|S_i|, and that S_i is in-sparse if w(E(V(G_i) \ S_i, S_i)) ≤ ψ|S_i|. Let S^out and S^in be the union of the out-sparse sets S_i and the union of the in-sparse sets S_i, respectively. We assume w.l.o.g. that |S^out| ≥ |S^in|; otherwise, the proof is symmetric. Note that |S^out| ≥ |∪_i S_i|/2 and S^out ⊆ ∪_i S_i. First, we claim that w(E(S^out, S^in)) ≤ ψ(|S^out| + |S^in|). To see this, suppose that S_1 is out-sparse. Then, we have

w(E(S^out, S^in)) ≤ w(E(S_1, S^in)) + w(E(S^out \ S_1, S^in))
≤ w(E(S_1, V(G_1) \ S_1)) + w(E(S^out \ S_1, S^in))
≤ ψ|S_1| + ψ(|S^out \ S_1| + |S^in|),

where the last inequality holds because S_1 is out-sparse and because we can continue the same argument on S_2 and w(E(S^out \ S_1, S^in)). If S_1 is in-sparse, the argument is symmetric. Next, observe that w(E(S^out, V(G_1) \ (S^out ∪ S^in))) ≤ ψ|S^out|.
To see this, we write S^out = S_{j_1} ∪ · · · ∪ S_{j_{k′}} where, for each i, S_{j_i} is an out-sparse cut and j_i < j_{i+1}. Then, we have

w(E(S^out, V(G_1) \ (S^out ∪ S^in))) ≤ Σ_i w(E(S_{j_i}, V(G_{j_i}) \ S_{j_i})) ≤ ψ Σ_i |S_{j_i}| = ψ|S^out|.

Therefore, we have

w(E(S^out, V(G_1) \ S^out)) ≤ w(E(S^out, V(G_1) \ (S^out ∪ S^in))) + w(E(S^out, S^in)) ≤ ψ|S^out| + ψ(|S^out| + |S^in|) ≤ 3ψ|S^out|.

As |S^out| ≤ |∪_i S_i| ≤ |V(G_1)|/2, we conclude that Ψ_{G_1}(S^out) ≤ 3ψ.

8 Answering Queries

In this section, we show how our decremental SCC algorithm (Algorithm 1) responds to queries. By Proposition 4.1, we only need to show how to answer SCC path-queries in G*[V* \ Ŝ]. Since the algorithm explicitly maintains the connected components of G*[V* \ Ŝ] (these are precisely the sets in C), the query can easily determine in O(1) time whether two vertices belong to the same SCC in G*[V* \ Ŝ]. All that remains is to show that if u and v belong to the same SCC G in G*[V* \ Ŝ], then the algorithm can efficiently return a simple path from u to v in G. (A path in the other direction can be returned using an analogous argument.) Since G = (V, E) is an SCC in G*[V* \ Ŝ], we know that the algorithm makes some call SCC-Helper(G). Let W be the large witness maintained in Line 5. Since the algorithm also maintains the data structure Forest-From-Witness(G, W, φ*) (Line 7), we can in O(log n) time find vertices w, w′ in W such that u is contained in an in-directed tree T rooted at w and v is contained in an out-directed tree T′ rooted at w′ (see the guarantees of Theorem 4.4). Finally, we can use Path-Inside-Expander(W), maintained in Line 6, to find a path P_W from w to w′ in E(W), where |P_W| = n^{o(1)}. Note that the path P_W uses the edges of the witness W, NOT the edges of G.
The total time spent up to this point is only n^{o(1)}. For convenience, we relabel vertices a bit. Let u = v_1, let v_2, . . . , v_{k−1} be the vertices of W on P_W, so that v_2 = w and v_{k−1} = w′, and let v_k = v. Since |P_W| = n^{o(1)}, we also have k = n^{o(1)}. We first consider a naive query procedure, and show that while it successfully returns a path, it is not efficient enough. We can use T and T′ to find paths P_u and P_v in G, which are respectively from u to w and from w′ to v. Now, let P be the embedding of W into G, which is explicitly maintained by the call to Robust-Witness in Line 5 of the algorithm. We can use P to convert the path P_W = (v_2, . . . , v_{k−1}) into a path in G. Each edge (v_i, v_{i+1}) ∈ P_W ⊆ E(W) corresponds to some path v_i ⇝ v_{i+1} in P, so concatenating these yields a path P_G ⊆ E(G) from v_2 to v_{k−1}. We then return the u-v path P = P_u ∘ P_G ∘ P_v. Note that P_u and P_v, as well as the paths in P, can be as long as Ô(1/φ*) = Ô(n^{2/3}), so P can be quite long. At first glance this does not seem to be a problem, because it is not hard to check that the time spent to find P is Ô(|P|). The issue is that the path P might not be simple. Say, for example, that the first edge of P_u is (u, z) and the second-to-last edge of P_v is (y, z). Then almost all of P consists of a long cycle from z to z. Of course, we can always extract a simple path P′ ⊆ P, but in the example above P′ will be the path (u, z) ∘ (z, v). We thus spent as much as Ô(n^{2/3}) time returning a path of length 2. In order to achieve almost-path-length query time, we thus need a more clever query procedure. We start with some notation. Let P_1 and P_{k−1} be the paths described above from v_1 to v_2 and from v_{k−1} to v_k; these paths are both contained in acyclic trees, so they are simple.
Similarly, for 2 ≤ i ≤ k−2, let P_i be the path in P from v_i to v_{i+1}; these are all simple because they correspond to paths in P, which form the path decomposition of a flow (see Remark B.1). In this terminology, the naive query procedure is to look at all of the edges in all of the P_i and concatenate them. We now show a different method that allows us to effectively throw away long cycles without having to look at all the edges on the cycle.

Minor additions to the data structures used by Algorithm 1. Recall that the paths P_1, . . . , P_{k−1} all come from Forest-From-Witness and Robust-Witness. Our query procedure requires these two algorithms to construct slightly more powerful data structures. Both additions are light-weight, and will only increase the total update time of these algorithms by an O(log n) factor, which is subsumed in the Ô-notation. Recall that Robust-Witness (Theorem 4.3) explicitly constructs all the flow paths in the embedding P. These paths can be stored as doubly linked lists. For the query procedure to work, we also have Robust-Witness build a simple data structure on each path P: build a balanced binary search tree on the vertices in P, and let each node in the tree have a pointer to the corresponding node in the list P. This can clearly be done in O(|P| log n) time. Note that we do not need to maintain these data structures dynamically, because within each phase of Robust-Witness, individual paths in the embedding never change; the embedding changes only via deleting entire paths. Every time Robust-Witness enters a new phase, it computes a new embedding from scratch, at which point we can again construct our data structure on each path P with only O(log n) overhead. Recall that Forest-From-Witness (Theorem 4.4) maintains a forest of trees. Firstly, for each vertex x ∈ V, we maintain a pointer to the corresponding node in the tree that contains x, and vice versa; these pointers never change and only incur O(1) overhead.
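The per-path structure can be sketched as follows, with a hash index standing in for the balanced binary search tree (same interface, O(1) expected time instead of O(log n); the class and method names are ours):

```python
class IndexedPath:
    """A simple path stored as a vertex sequence plus an index, supporting the
    two operations the query procedure needs: membership (as in Vertex-In-Path)
    and 'next vertex after x' (following the pointer into the linked list)."""

    def __init__(self, vertices):
        self.vertices = list(vertices)
        self.pos = {v: i for i, v in enumerate(self.vertices)}  # vertex -> position

    def contains(self, x):
        return x in self.pos

    def successor(self, x):
        i = self.pos[x]
        return self.vertices[i + 1] if i + 1 < len(self.vertices) else None
```

Building the index is linear in |P|, and it can simply be rebuilt from scratch whenever Robust-Witness starts a new phase, mirroring the construction described above.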
We also maintain a top tree on each tree in the forest; see e.g. the paper by Alstrup et al. for a nice overview [AHdLT05]. These trees can perform link and cut operations in O(log n) time, and maintaining them incurs at most an O(log n) multiplicative overhead in the update time. (In fact, the proof of Theorem 4.4 in Section D already uses link-cut trees, so in our case using top trees incurs no additional overhead.) The key operations we need from top trees are the following. Given any vertices x, y ∈ V, we can:

• determine whether they are in the same tree. This is done by using the pointers to the respective nodes of x and y in the forest and checking if they have the same root.

• check if y is on the path between x and the root. Letting r be the root vertex, this is done by checking if dist(x, y) + dist(y, r) = dist(x, r); see Lemma 5 of [AHdLT05] for details on how the top trees can be used to return distances in the tree.

The above data structures lead to the following claim.

Claim 8.1. Let T_1 and T_k be the trees maintained for v_1 and v_k by Forest-From-Witness, say that the paths P_2, . . . , P_{k−2} are stored as doubly linked lists, and say that we also have the augmented data structures described above. Then, given any vertex x ∈ V and any index i with 1 ≤ i ≤ k−1, it is possible to answer the following query Vertex-In-Path(x, i) in O(log n) time:

1. If x ∉ P_i, return False.
2. If x ∈ P_i, return True and also return a pointer to the node corresponding to x in the path P_i: for P_2, . . . , P_{k−2} this means the node in the doubly linked list P_i, and for P_1, P_{k−1} this means the node in the corresponding tree T_1, T_k.

Proof. The claim follows directly from the augmented data structures. If 2 ≤ i ≤ k−2, then the binary search tree on P_i allows us to search for x in O(log |P_i|) = O(log n) time; if the node is found, then we follow the pointer from the binary search tree to the path. Say i = 1; the case i = k−1 is analogous.
As mentioned above, the top tree on T_1 allows us to check if x is in T_1, and if yes, to determine whether x is on the path P_1 by checking if it is an ancestor of v_1. The pointer to the node x in the tree comes from the fact that we store pointers to and from every vertex in G and the corresponding nodes in the forest.

The Algorithm: Say that the first edge on P_1 is (v_1, z). To avoid exploring a long cycle through z (see the example above), before continuing from z the algorithm checks if z is on one of the other paths P_i. If not, it can safely continue. If yes, let P_j be the path that contains z with maximum j. Then, instead of continuing the search from P_1, the algorithm continues from P_j. This guarantees that there can be no cycle through z, because P_j is simple, and no later path contains z. The pseudocode in Algorithm 5 formalizes the intuition above.

Algorithm 5: Finding a path from v_1 to v_k. Recall the paths P_1, . . . , P_{k−1} defined above.
1  Initialize CurVertex ← v_1
2  Initialize CurPath ← 1
3  Initialize CurPointer to point to v_1 in P_1   // always points to CurVertex in P_CurPath
4  Initialize P* ← ∅   // P* is returned at the end, and will always be simple
5  Repeat Until CurVertex = v_k
6      Do Vertex-In-Path(CurVertex, i) for all i > CurPath
7      if none of the Vertex-In-Path calls return True then   // no cycle through CurVertex
8          Let z be the vertex after CurVertex on path P_CurPath (we can find z by following CurPointer and then taking the next edge in the path/tree)
9          Add edge (CurVertex, z) to P*
10         CurVertex ← z; adjust CurPointer to point to z
11     else
12         Let i be the largest index such that Vertex-In-Path(CurVertex, i) returns True
13         CurPath ← i
14         Set CurPointer to be the pointer returned by Vertex-In-Path
15 Return P*

Analysis. Firstly, note that when we execute the main loop in Line 5, we cannot land in the else statement twice in a row, since the else statement always switches to the highest-indexed path that contains CurVertex.
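A runnable rendering of Algorithm 5, with plain Python lists standing in for the linked lists, trees and pointers (so Vertex-In-Path degenerates to a linear scan and the stated time bounds do not apply; the function name is ours):

```python
def find_simple_path(paths):
    """paths = [P_1, ..., P_{k-1}], each a list of vertices, where consecutive
    paths share an endpoint (paths[i][-1] == paths[i+1][0]). Returns the edge
    list of a simple path from paths[0][0] to paths[-1][-1], following
    Algorithm 5: always jump to the highest-indexed path containing CurVertex."""
    target = paths[-1][-1]                    # v_k
    cur_vertex, cur_path, cur_pos = paths[0][0], 0, 0
    result = []                               # P*
    while cur_vertex != target:
        # Vertex-In-Path(CurVertex, i) for all i > CurPath; keep the largest hit.
        jump = None
        for i in range(len(paths) - 1, cur_path, -1):
            if cur_vertex in paths[i]:
                jump = (i, paths[i].index(cur_vertex))
                break
        if jump is None:                      # no cycle through cur_vertex
            z = paths[cur_path][cur_pos + 1]
            result.append((cur_vertex, z))
            cur_vertex, cur_pos = z, cur_pos + 1
        else:                                 # switch paths without emitting edges
            cur_path, cur_pos = jump
    return result
```

For example, with P_1 = (u, a, b, w), P_2 = (w, c, a, x), P_3 = (x, b, v), the naive concatenation visits a and b twice, while this procedure returns the simple path (u, a), (a, x), (x, b), (b, v); the else branch fires exactly when the walk meets a vertex that reappears on a later path, which is the situation the analysis bounds.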
So for every two iterations of the loop, we execute the if statement at least once, and hence add an edge to P*. Consider the (possibly non-simple) path P = P_1 ∘ P_2 ∘ · · · ∘ P_{k−1}. It is easy to see that in every iteration of the main loop, the algorithm jumps forward in P: it either goes forward one vertex in some path P_i (the if statement), or it jumps from CurVertex to another copy of CurVertex on a later path (the else statement). In other words, the vertices of P* form a subsequence of the vertices in P. Thus, the algorithm eventually reaches v_k and terminates. We now argue that the returned path P* is simple. Consider any vertex x ∈ P* and consider the first time we added x to P*; say that at this time CurPath = i. We argue that x will never be reached again. The first case is that Vertex-In-Path(x, j) returns False for all j > i. In this case P* will never again reach x, because, as argued in the above paragraph, P* only moves forward along P; it cannot reach x a second time in P_i because each P_i is simple, and x is not contained in any of the later P_j. The second case is that x ∈ P_j for some j > i. Let j be the largest such index. Then the else statement of the main loop switches to P_j without adding any vertices to P*, and in the next iteration we are in the first case, so there is no cycle through x. For the running time analysis, note that each iteration of the main loop executes k = n^{o(1)} instances of Vertex-In-Path, each of which takes O(log n) time. We thus have a running time of Ô(|P*|), as desired.

9 Decremental SSSP

In this section, we prove one of our main results: Theorem 1.2. Recall that our decremental SSR/SCC result combines our new expander-based framework with earlier techniques for decremental SCC in [Lac11, CHI+16].

Proposition 9.1. Let G = (V, E, w) be a weighted decremental graph, and s ∈ V a fixed source.
Let A be a data structure, given some integer d > 0, that processes edge deletions to E and, after every edge deletion, ensures that G is strongly connected and has diameter at most d, supports path queries between any two vertices in G that return a path of length Ô(d) in almost-path-length query time, and runs in total update time T(m, n, d) (here we assume T(m_1, n_1, d_1) + T(m_2, n_2, d_2) ≤ T(m, n, d) for all choices of m, n, d and m_1, m_2, n_1, n_2, d_1, d_2 such that m = m_1 + m_2, n = n_1 + n_2 and d = d_1 + d_2). At any time, the data structure may perform the following operation: it finds and outputs an Ô(1/d)-sparse cut (L, S, R) where |L| ≤ |R|, and replaces G with G[R]; here we only require the algorithm to output L and S explicitly. (This sparse-cut operation is not an adversarial update, but is rather something the data structure can do of its own accord at any time.) Then, there exists a deterministic data structure B that can report (1 + ε)-approximate distance estimates and corresponding paths from s to any vertex v ∈ V in the graph G in almost-path-length query time and has total update time Ô((T(m, n, δ) + n³/δ + n²δ + mn^{2/3}) log W / ε) for any choice of δ, ε > 0. (Note that the data structure can cause V(G) to shrink over time via sparse-cut operations, so it only has to answer queries for vertices u, v in the current graph.)

It is straightforward to obtain Theorem 1.2 from the proposition and Theorem 4.3.

Proof of Theorem 1.2. We now show how to implement the data structure A required by the setup of Proposition 9.1, with T(m, n, δ) = Ô(mδ), as follows. Given the graph G, we can invoke the algorithm described in Theorem 4.3 with parameter φ = Θ̂(1/δ), such that the algorithm maintains a φ-short witness W that restarts up to Ô(1/φ) = Ô(δ) times.
Whenever W starts a new phase, we use the data structures from Theorem 4.4 and Theorem 4.5 on G and W until the phase ends. We forward the sparse cuts (L, S, R) found by the algorithms from Theorem 4.4 and Theorem 4.3 and update G accordingly. Thus, after the algorithm from Theorem 4.3 terminates, the graph G contains only a constant fraction of the vertices that the algorithm in Theorem 4.3 was initialized upon. We then repeat the above construction and note that after at most O(log n) repetitions, the graph G is the empty graph. We note that to obtain a path between any two vertices in the current graph G, we can query the data structures from Theorem 4.4 and Theorem 4.5, as described in Section 4, to obtain such a path of length Ô(δ) in almost-path-length time. We further observe that if we set φ to 1/δ times a large enough subpolynomial factor n^{o(1)}, then we can ensure that vertices in G \ W are at all times at distance at most δ/3 from W by Theorem 4.4, that any two vertices in W are at distance at most δ/3 in G by Theorem 4.5 and Theorem 4.3, and, again, that there exists a path from every vertex in G \ W to a vertex in W of length at most δ/3. But this implies that any two vertices in G are at all times at distance at most δ, and therefore the diameter of G is upper bounded by δ, as required. The total update time of the data structure A is at most Ô(m/φ) = Ô(mδ), obtained by adding the running time of Theorem 4.3 to the running time induced by the algorithms in Theorem 4.4 and Theorem 4.5, which are restarted in Ô(δ) phases. We thus derive an algorithm B as specified in Proposition 9.1, where we use the above data structure A and where we set δ = n^{2/3}, which gives total update time

Ô((T(m, n, δ) + n³/δ + n²δ + mn^{2/3}) log W / ε) = n^{8/3+o(1)} log W / ε.

The rest of this section is dedicated to proving Proposition 9.1.
We therefore introduce the necessary notation in the next subsection, then introduce the abstraction of an approximate topological order, to which we reduce the problem, and finally prove that an approximate topological order can be maintained efficiently.

Given two partitions A and B of a universe V, we say A is a refinement of B if and only if for every set A ∈ A there exists a set B ∈ B such that A ⊆ B.

Throughout the section, we let u →_G v denote that u reaches v in G, and u ⇄_G v that u and v are strongly connected, i.e. that u reaches v and v reaches u. We call the tuple (V, τ) the generalized topological order of G if V is the set of SCCs in G and τ : V → [1, n] is a function that maps each SCC X ∈ V to a number τ(X) such that no other Y ∈ V has τ(Y) ∈ [τ(X), τ(X) + |X| − 1]. Thus, τ establishes a one-to-one correspondence between SCCs in V and intervals of size |X| in [1, n]. In a decremental graph G, a generalized topological order has the property that each version of V is a refinement of its earlier versions, since SCCs decompose over time.

We say that (V, τ) has the nesting property if, for any set X ∈ V and any set Y ⊇ X that was in V at an earlier stage, we have τ(X) ∈ [τ(Y), τ(Y) + |Y| − |X|]. Thus, the interval [τ(X), τ(X) + |X| − 1] associated with X is contained in the interval [τ(Y), τ(Y) + |Y| − 1] associated with Y.

Given a partition V of V, we let G/V denote the multi-graph obtained from G by contracting vertices that are in the same set X ∈ V, where we keep all edges, i.e.
also self-loops and parallel edges. Abusing notation slightly, we refer to V as the node set of the graph G/V.

For convenience, we define T(X, Y, (V, τ)) to be the function that takes as parameters two SCCs X, Y ∈ V and a generalized topological order (V, τ) of G, where

T(X, Y, (V, τ)) = τ(Y) − (τ(X) + |X| − 1) if τ(X) < τ(Y), and T(X, Y, (V, τ)) = T(Y, X, (V, τ)) otherwise.

For any path P in G, we let T(P, (V, τ)) denote the total topological distance traversed by P in the topological order (V, τ). Formally,

T(P, (V, τ)) = Σ_{(X,Y) ∈ P/V} T(X, Y, (V, τ)).

We now introduce the concept of an approximate topological order, which we define similarly to [BGW20] and implement similarly to [GW20]. The main idea of an approximate topological order is as follows: consider the generalized topological order (V, τ) of a graph G. Then G/V is a directed acyclic graph by definition. But this implies that for any (shortest) s-to-t path π_{s,t} in G, every edge (X, Y) on π_{s,t}/V in G/V has τ(X) < τ(Y). Since further τ maps to numbers between 1 and n, by summing the topological differences along the edges of π_{s,t}/V we get that T(π_{s,t}, (V, τ)) is at most n.

Next, let us assume that the sum of diameters of all SCCs in V is at most εδ. Then for any shortest path π_{s,t}, we can upper bound the difference in weight between the path π_{s,t}/V in G/V and the path π_{s,t} in G by an additive term of εδ. So, if π_{s,t} is of weight at least δ, the additive term can be subsumed in a multiplicative error of (1 ± ε).

Now, the gist of this set-up is that, given this upper bound on T(π_{s,t}, (V, τ)), we can implement a fast SSSP data structure as follows. We know that on a path of length δ in G/V there are, for any i, at most δ/2^i edges that have topological order difference more than 2^i · n/δ, by the pigeonhole principle.
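This pigeonhole count is easy to verify concretely. The following sketch (with hypothetical numbers) checks that when the topological differences along a path sum to at most n, fewer than δ/2^i of them can exceed 2^i · n/δ:

```python
# Hypothetical topological differences along a contracted path; in a
# DAG-respecting order they telescope, so they sum to at most n.
n, delta = 1000, 100
diffs = [300, 250, 120, 90, 40] + [10] * 20
assert sum(diffs) <= n

for i in range(7):
    threshold = 2**i * n / delta   # difference bound 2^i * n/delta
    big = sum(1 for d in diffs if d > threshold)
    # More than delta/2^i such edges would push the total above n.
    assert big < delta / 2**i
```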
But this implies that incurring an additive error of ε·2^i on each such edge would only amount to a (1 + ε) multiplicative error on a shortest path of length δ. But this allowance for a significant additive error can be exploited to speed up the SSSP data structure significantly, because it allows vertices to consider neighbors that are close in topological order more closely, while being more lenient when passing updates to vertices that are far in terms of topological order difference.

Before we state a data structure from [BGW20] that exploits this very efficiently, let us state more formally the construct of an approximate topological order. Here, we point out one last issue: we cannot assume that SCCs in G have small diameter in general. Therefore we maintain the generalized topological order on a graph G′, initialized to G, where, in addition to adversarial edge updates to G, we also take vertex separators S such that edges incident to S are deleted from G′. This ensures that all SCCs in G′ have small diameter. Relating back to G (where no separator was deleted), we have that T(π_{s,t}, (V, τ)) might be increased by this operation, since some edge (X, Y) on π_{s,t}/V with X or Y containing a separator vertex might now go "backwards" in the topological order, i.e. have τ(X) > τ(Y). This increases T(π_{s,t}, (V, τ)) by up to 2n per such edge, since (X, Y) may go all the way back in the topological order and then forward again. However, by choosing small separators, we can still bound T(P, (V, τ)) by a non-trivial upper bound.

Without further ado, let us give the formal definition of an approximate topological order.

Definition 9.2. Given a decremental weighted digraph G = (V, E, w) and parameters η ≤ n and ν ≤ W, we say a dynamic tuple (V, τ), where V partitions V and τ : V → [1, n], is an ATO(G, η, ν) if at each stage

1. V forms a refinement of all earlier versions of V and τ is a nesting function, i.e.
τ initially assigns each set X in the initial version of V a number τ(X) such that no other set Y ∈ V has τ(Y) in the interval [τ(X), τ(X) + |X| − 1]. When a set Y ∈ V is split at some stage into disjoint subsets Y₁, Y₂, .., Y_l, then we let τ(Y₁) = τ(Y) and τ(Y_{i+1}) = τ(Y_i) + |Y_i|. We then return a pointer to each new subset Y_i such that all vertices in Y_i can be accessed in time O(|Y_i|). The value τ(X) for each X ∈ V can be read in constant time.

2. each set X in V has weak diameter diam(X, G) ≤ |X|ην/n, and

3. at each stage, for any vertices s, t ∈ V, the shortest path π_{s,t} in G satisfies T(π_{s,t}, (V, τ)) = Ô(n²/η + n · dist_G(s, t)/ν).

Here, we captured in Property 1 that the vertex sets in V decompose over time, that τ is nesting, and that all sets are easily accessible. In Property 2, we capture that the sum of diameters of the vertex sets in V is small: it is not hard to see that by summing the upper bound on the diameter over all such sets X in V, we get that the sum of diameters is bounded by ην. Finally, Property 3 gives an upper bound on the topological order difference traversed by any shortest path in G.

The main result of the next section shows that we can maintain an ATO using the data structure A from Proposition 9.1.

Lemma 9.3. Given a decremental weighted digraph G = (V, E, w), parameters η ≤ n, ν ≤ W, and a data structure A as described in Proposition 9.1, we can deterministically maintain an ATO(G, η, ν) in total update time Ô(T(m, n, η) + mn^{2/3}); moreover, the algorithm can, for each SCC X in V, at any point return a path between any two vertices u, v ∈ X of length Ô(|X|ην/n) in near-linear time.

From [BGW20], we now obtain the following theorem. Note that we slightly modified the theorem from [BGW20] to adapt it to the simplified definition of an ATO that we use in this paper. However, the adaptation is straightforward and we refer the reader to [BGW20] to verify.
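The bookkeeping behind Property 1 of Definition 9.2 is elementary. Below is a minimal sketch (a hypothetical representation, not the paper's implementation) of how τ can be updated when an SCC splits, preserving the nesting property:

```python
def split_tau(tau, Y, parts):
    """Split SCC Y (a tuple of vertices) into the ordered disjoint
    parts Y_1, ..., Y_l, setting tau(Y_1) = tau(Y) and
    tau(Y_{i+1}) = tau(Y_i) + |Y_i|, as in Property 1."""
    assert sum(len(p) for p in parts) == len(Y)
    offset = tau.pop(Y)
    for part in parts:
        tau[part] = offset
        offset += len(part)

# One initial SCC on 5 vertices occupying the interval [1, 5].
tau = {("a", "b", "c", "d", "e"): 1}
split_tau(tau, ("a", "b", "c", "d", "e"), [("a", "b"), ("c",), ("d", "e")])
assert tau == {("a", "b"): 1, ("c",): 3, ("d", "e"): 4}

# Nesting: every new interval [tau(X), tau(X)+|X|-1] lies inside [1, 5].
assert all(1 <= t and t + len(X) - 1 <= 5 for X, t in tau.items())
```

Since the parts are assigned consecutive sub-intervals of Y's interval, the nesting property is maintained under arbitrarily repeated splits.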
Theorem 9.4 (see [BGW20], Theorem 5.1). Given G = (V, E, w), a decremental weighted digraph, a source r ∈ V, an approximation parameter ε > 0, and access to an ATO(G, η, ν) (V, τ), there exists a deterministic data structure that maintains a distance estimate d̃ist(r, v) for every vertex v ∈ V such that at each stage of G, dist_G(r, v) ≤ d̃ist(r, v), and if dist_G(r, v) ∈ [ην/ε, 2ην/ε), then d̃ist(r, v) ≤ (1 + ε) dist_G(r, v); the algorithm can, for each such vertex v, report a path of length (1 + ε) dist_G(r, v) in the graph G/V in almost-path-length time. The total time required by this structure is Ô(n²/(ηε) + n²η/ε).

We can now prove Proposition 9.1.

Proof of Proposition 9.1. For every 0 ≤ i ≤ lg W, where W is the aspect ratio of G = (V, E, w), we maintain at level i an ATO(G, δ, 2^i) using Lemma 9.3, and run the data structure of Theorem 9.4 on G with the ATO(G, δ, 2^i) from our source vertex s to depth δ · 2^i. Thus, each such data structure maintains, for every vertex v at distance [δ · 2^i/ε′, δ · 2^{i+1}/ε′) from s, a (1 + ε′)-approximate distance estimate. We can therefore find, for every vertex v at distance larger than δ/ε′ from s, a distance estimate in some of these data structures that gives the right approximation, and since all data structures overestimate the distance, we can find the right distance estimate by comparing all distance estimates d̃ist(s, v). Finally, we can maintain a simple ES-tree in time O(mδ/ε′) to obtain exact distances from s to every vertex at distance at most δ/ε′.

It is not hard to verify that the total update time of all data structures is

Σ_{0 ≤ i ≤ lg W} ( Ô(n²/(δε′) + n²δ/ε′) + Ô(T(m, n, δ) + mn^{2/3}) ) = Ô((T(m, n, δ) + n²/δ + n²δ + mn^{2/3}) log W/ε′)
for ε′ set to ε′ = ε/n^{o(1)}, which is again subsumed in the Ô-notation.

To answer a path query for an s-to-v path π_{s,v}, we query the corresponding shortest path data structure in which we found a (1 + ε′)-approximation. This gives us a path π̃_{s,v} in G/V for the corresponding ATO (V, τ). We then identify, for every node x on π̃_{s,v}, the corresponding SCC in V and the two endpoints in G of the incident edges on π̃_{s,v}, and query for a path between these two vertices in the ATO data structure. Summing over all exposed paths, by Lemma 9.3, we can extend the path π̃_{s,v} to a path in G of length (1 + ε′) dist_G(s, v) + Ô(ην). But we have that dist_G(s, v) ≥ ην/ε′ at the level where the estimate was found, so the additive term Ô(ην) is at most Ô(ε′) · dist_G(s, v). Thus, setting ε′ to ε/n^{o(1)} for a large enough subpolynomial factor, we obtain a path of length (1 + ε) dist_G(s, v). Since each piece of the path can be obtained in almost-path-length time, we can also construct the extension of the path π̃_{s,v} to a path in G in almost-path-length time. This completes the proof.

Finally, let us prove the main ingredient to achieve our result.

Lemma 9.3. Given a decremental weighted digraph G = (V, E, w), parameters η ≤ n, ν ≤ W, and a data structure A as described in Proposition 9.1, we can deterministically maintain an ATO(G, η, ν) in total update time Ô(T(m, n, η) + mn^{2/3}); moreover, the algorithm can, for each SCC X in V, at any point return a path between any two vertices u, v ∈ X of length Ô(|X|ην/n) in near-linear time.

Proof. We start the proof by partitioning the edge set E of the initial graph G into edge sets E_heavy and E_light. We assign every edge e ∈ E to E_heavy if its weight w(e) is larger than ν, and to E_light if w(e) ≤ ν.

We now describe our algorithm, focusing on the graph G with the edge set E_heavy removed. As we will see later, there can only be few edges from E_heavy on any shortest path. Let us start by giving an overview and then a precise implementation; we finally analyze correctness and running time.

Algorithm.
Our goal is subsequently to maintain an incremental set Ŝ ⊆ V such that every SCC X in G′ = G \ E(Ŝ) \ E_heavy has unweighted diameter at most |X|η/n. Since each edge weight is at most ν, this implies that every SCC X in the weighted version of G′ has diameter at most |X|ην/n.

We then maintain (V, τ) as the generalized topological order of G′ using the data structure described in the theorem below, which is a straightforward extension of Theorem 1.1 using internally the algorithm by Tarjan [Tar72], as described in [GW20, BGW20].

Theorem 9.5. Given a decremental digraph G = (V, E), there exists a deterministic algorithm that can maintain the SCCs V of G. The algorithm can further be extended to maintain the generalized topological order (V, τ) of G, where τ has the nesting property. The algorithm is deterministic and runs in total update time mn^{2/3+o(1)}.

To maintain G′, we initialize a data structure A on the graph G′[X] for every SCC X in the initial set V, with parameter d = |X|η/(2n). Then, whenever such a data structure A, currently operating on some graph G′[Y], announces a sparse cut (L, S, R) and sets its graph to G′[R], we add S to Ŝ and then initialize a new data structure A′ on G′[L] with parameter d = |L|η/(2n). Further, if the data structure A was initialized on a graph with vertex set at least twice as large as R, we delete A and initialize a new data structure A″ on G′[R] with d = |R|η/(2n). This completes the description of the algorithm.

Correctness of the Algorithm. We prove each property of the theorem individually:

• Property 1: It is straightforward to see that, since (V, τ) is the generalized topological order of G′ ⊆ G and since it is maintained to satisfy the nesting property, Property 1 follows immediately.

• Property 2: Observe that V is the set of SCCs in G′. Further, observe that we maintain the data structures A₁, A₂, . . .
such that the vertex sets of all the graphs that they run on span all vertices in V \ Ŝ. For the vertices in Ŝ, each s ∈ Ŝ forms a trivial SCC and therefore certainly satisfies the constraint. For each set X that some data structure A runs upon, the unweighted diameter is at most the d that A was initialized with. Observe that we delete data structures once the size of the initial vertex set Y has decreased by a factor 2. Thus, the data structure A serving X was initialized with some d = |Y|η/(2n) ≤ |X|η/n. Since the largest edge weight in G′ is ν, we thus have that for each SCC X in V, diam(X, G′) ≤ |X|ην/n. Adding back the edges in E(Ŝ) and E_heavy can only decrease the weak diameter, and therefore we finally obtain diam(X, G) ≤ |X|ην/n.

• Property 3: In order to establish the last property, let us partition the set Ŝ into sets S₀, S₁, . . . , S_{lg n}, where a vertex s is in S_i if it joined Ŝ after being announced by a data structure A that was initialized on a graph G′[Y] where Y was of size in [n/2^{i+1}, n/2^i). Since we delete data structures after their initial vertex set has halved in size, any such data structure that added vertices to a set S_i ran with d ≥ (n/2^{i+1}) · η/(2n) = η/2^{i+2}. Since each set of vertices S that was added to S_i comes from an Ô(1/d)-sparse cut, and we then only compute sparse cuts on the induced subgraphs of the cut, there are at most Ô(n/d) = Ô(2^i n/η) vertices in S_i at the end of the algorithm. Further, we observe that every edge (u, v) that was contained in the subgraph G′[Y] when A was initialized has both endpoints in Y, and therefore, by Property 1, we have |τ(u) − τ(v)| < |Y| ≤ n/2^i.

Now, let us fix any shortest path π_{s,t} in G (in the current version). Instead of analyzing T(π_{s,t}, (V, τ)), let us analyze

T′(π_{s,t}, (V, τ)) def= Σ_{(u,v) ∈ π_{s,t}} max{0, τ(u) − τ(v)}.
It is easy to see that T(π_{s,t}, (V, τ)) ≤ 2 T′(π_{s,t}, (V, τ)) + n, since the forward differences along the path telescope to at most n plus the total backward movement T′.

For edges on π_{s,t} in E_heavy, we observe that each such edge (u, v) can contribute at most n to T′(π_{s,t}, (V, τ)), since τ(u) − τ(v) ≤ n (trivially, as both numbers are taken from the interval [1, n]). Further, since each such edge adds weight at least ν to the shortest path, there are at most dist(s, t)/ν such edges. Thus, the total contribution of all these edges is at most n · dist(s, t)/ν.

For the edges on π_{s,t} in E_light, we observe that each edge (u, v) that contributes to T′(π_{s,t}, (V, τ)) is not in G′, since (V, τ) is a generalized topological order of G′ and edges of G′ are therefore directed "forwards" (recall the definition in Section 2). Thus, each such edge is in E(Ŝ) and therefore incident to some vertex s in some S_i. But then it adds at most n/2^i to T′(π_{s,t}, (V, τ)) by our previous discussion. Since a path visits each vertex only once (so at most two path edges are incident to any vertex of S_i), and by our bound on the size of each S_i, we can now bound the total contribution by

T′(π_{s,t}, (V, τ)) ≤ Σ_i |S_i| · n/2^{i−1} + n · dist(s, t)/ν = Ô(n²/η + n · dist(s, t)/ν).

Bounding the Running Time. Observe that, for any vertex x ∈ V, between any two times that it is part of a graph G′[Y] that a data structure A is invoked upon and of a later graph G′[X] ⊆ G′[Y], the set X is at most half the size of Y. This follows from the definition of the data structure A, which, whenever a sparse cut (L, S, R) is output, continues on the graph G′[R] where R is at least as large as L, while no data structure is thereafter initialized on a graph containing any vertex in S.

But if the SCC that some vertex x is contained in halves in size between any two initializations of data structures A upon x, then x participates in at most lg n data structures over the entire course of the algorithm.
Since each edge (x, y) or (y, x) for any y ∈ V is only present in induced graphs containing x, no data structure that is not initialized on a graph whose vertex set contains x has (x, y) or (y, x) in its graph. Thus, every edge only participates in lg n graphs.

Finally, we observe that the distance parameter d of each data structure A is upper bounded by η/2. Thus, by the (super-)linear behavior of the function T(m, n, d), the total update time for all data structures is Ô(T(m, n, η)). Further, by Theorem 9.5, the data structure maintaining (V, τ) can be implemented in time Ô(mn^{2/3}). The time required for all remaining operations is subsumed in both bounds.

Returning the Paths. For any SCC X in V, there is a data structure A on G′[X] that allows for path queries. Since, by our previous discussion, each such data structure runs with d at most |X|η/n and each edge on the path has weight at most ν (recall that G′ only contains edges of small weight), we can return the required path from data structure A on query.

10 Conclusion

In this article, we provide three new algorithms for decremental graphs: 1) a deterministic algorithm with total update time mn^{2/3+o(1)} that can answer SCC and SSR queries, 2) a deterministic algorithm with total update time n^{8/3+o(1)} that maintains SSSP, and 3) a randomized (but adaptive) algorithm that maintains matchings with near-optimal total update time Õ(m).

Each of these algorithms is a significant improvement for the problem at hand, and especially the former two algorithms improve on the long-standing upper bound of O(mn) by Even and Shiloach [ES81]. Our progress motivates the following related open questions:

• Can we find deterministic algorithms for SSR, SCC and directed SSSP that run in near-linear time? For SSR and SCC such an algorithm is known when randomization is allowed [BPWN19].
For directed SSSP, even obtaining a randomized (non-adaptive) algorithm with near-linear update time is a major open question (although this goal has been achieved for very dense graphs [BGW20]). We also point out that while a randomized near-linear update time algorithm exists for undirected SSSP [HKN14a], even in this setting, the current best deterministic algorithms have total update time mn^{1/2+o(1)} and Õ(n²) [BC16, BC17, GWN20, BBG+20].

• Can we solve the decremental (1 + ε)-approximate All-Pairs Shortest-Paths problem with near-optimal total running time Õ(mn)? Such an algorithm is currently only known in the randomized setting [Ber16]; the best deterministic algorithm runs in total update time Õ(mn²) [DI04].

11 Acknowledgements

We are very grateful to Mira Bernstein for showing us how to lower-bound the increase of the entropy potential function in the directed cut-matching game, which is a crucial step in our framework. The first author would like to thank David Wajc for helping him work through the black-box in [Waj20], which allows us to convert our dynamic algorithm for fractional matching into one for integral matching. We are grateful to Julia Chuzhoy for allowing us to apply the short-path oracle on expanders in Appendix E to our framework, which is crucial to obtain almost-path-length query time. This result is obtained by directly translating the same subroutine for undirected graphs shown in [CS20] to directed graphs, using our new primitives for directed graphs.

References

[AHdLT05] Stephen Alstrup, Jacob Holm, Kristian de Lichtenberg, and Mikkel Thorup. Maintaining information in fully dynamic trees with top trees. ACM Trans. Algorithms, 1(2):243–264, 2005. 46

[AKKW16] Saeed Akhoondian Amiri, Ken-ichi Kawarabayashi, Stephan Kreutzer, and Paul Wollan. The Erdős–Pósa property for directed graphs. CoRR, abs/1603.02504, 2016. 2

[AW14] Amir Abboud and Virginia Vassilevska Williams. Popular conjectures imply strong lower bounds for dynamic problems. In , pages 434–443. IEEE Computer Society, 2014.
1

[BBG+20] Aaron Bernstein, Jan van den Brand, Maximilian Probst Gutenberg, Danupon Nanongkai, Thatchaphol Saranurak, Aaron Sidford, and He Sun. Fully-dynamic graph sparsifiers against an adaptive adversary. arXiv preprint arXiv:2004.08432, 2020. 55

[BC16] Aaron Bernstein and Shiri Chechik. Deterministic decremental single source shortest paths: beyond the O(mn) bound. In Proceedings of the 48th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2016, Cambridge, MA, USA, June 18-21, 2016, pages 389–397, 2016. 55

[BC17] Aaron Bernstein and Shiri Chechik. Deterministic partially dynamic single source shortest paths for sparse graphs. In Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 453–469. SIAM, 2017. 55

[BC18] Aaron Bernstein and Shiri Chechik. Incremental topological sort and cycle detection in expected total time. In Proceedings of the Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2018, New Orleans, LA, USA, January 7-10, 2018, pages 21–34, 2018. 2

[Ber16] Aaron Bernstein. Maintaining shortest paths under deletions in weighted directed graphs. SIAM Journal on Computing, 45(2):548–574, 2016. 55

[BFGT15] Michael A. Bender, Jeremy T. Fineman, Seth Gilbert, and Robert E. Tarjan. A new approach to incremental cycle detection and related problems. ACM Trans. Algorithms, 12(2), December 2015. 2

[BGW20] Aaron Bernstein, Maximilian Probst Gutenberg, and Christian Wulff-Nilsen. Near-optimal decremental SSSP in dense weighted digraphs. In . IEEE, 2020. 1, 48, 50, 51, 52, 55

[BLSZ14] Bartlomiej Bosek, Dariusz Leniowski, Piotr Sankowski, and Anna Zych. Online bipartite matching in offline time. In , pages 384–393, 2014. 3

[BPWN19] Aaron Bernstein, Maximilian Probst, and Christian Wulff-Nilsen. Decremental strongly-connected components and single-source reachability in near-linear time. In Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing, pages 365–376, 2019.
1, 55, 75

[CC13] Chandra Chekuri and Julia Chuzhoy. Large-treewidth graph decompositions and applications. In Symposium on Theory of Computing Conference, STOC'13, Palo Alto, CA, USA, June 1-4, 2013, pages 291–300, 2013. 37

[CC16] Chandra Chekuri and Julia Chuzhoy. Polynomial bounds for the grid-minor theorem. J. ACM, 63(5):40:1–40:65, 2016. 37

[CE15] Chandra Chekuri and Alina Ene. The all-or-nothing flow problem in directed graphs with symmetric demand pairs. Math. Program., 154(1-2):249–272, 2015. 2

[CEP18] Chandra Chekuri, Alina Ene, and Marcin Pilipczuk. Constant congestion routing of symmetric demands in planar directed graphs. SIAM J. Discrete Math., 32(3):2134–2160, 2018. 2

[CGL+20] Julia Chuzhoy, Yu Gao, Jason Li, Danupon Nanongkai, Richard Peng, and Thatchaphol Saranurak. A deterministic algorithm for balanced cut with applications to dynamic connectivity, flows, and beyond. In . IEEE, 2020. 1, 3, 37, 38, 39, 43, 44

[CHI+16] Shiri Chechik, Thomas Dueholm Hansen, Giuseppe F. Italiano, Jakub Lacki, and Nikos Parotsidis. Decremental single-source reachability and strongly connected components in Õ(m√n) total update time. In IEEE 57th Annual Symposium on Foundations of Computer Science, FOCS 2016, 9-11 October 2016, Hyatt Regency, New Brunswick, New Jersey, USA, pages 315–324, 2016. 1, 7, 9, 48, 72

[CK19] Julia Chuzhoy and Sanjeev Khanna. A new algorithm for decremental single-source shortest paths with applications to vertex-capacitated flow and cut problems. In Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing, pages 389–400, 2019. 1, 2

[CL16] Julia Chuzhoy and Shi Li. A polylogarithmic approximation algorithm for edge-disjoint paths with congestion 2. J. ACM, 63(5):45:1–45:51, 2016. 37

[CQ17] Chandra Chekuri and Kent Quanrud. Approximating the Held-Karp bound for metric TSP in nearly-linear time. In , pages 789–800, 2017. 1

[CS20] Julia Chuzhoy and Thatchaphol Saranurak.
Deterministic decremental shortest path algorithms via nearly optimal layered core decomposition. Unpublished, 2020. 1, 2, 9, 55, 76

[Dah16] Søren Dahlgaard. On the hardness of partially dynamic graph problems and connections to diameter. In Ioannis Chatzigiannakis, Michael Mitzenmacher, Yuval Rabani, and Davide Sangiorgi, editors, , volume 55, pages 48:1–48:14, 2016. 3

[DI04] Camil Demetrescu and Giuseppe F. Italiano. A new approach to dynamic all pairs shortest paths. Journal of the ACM (JACM), 51(6):968–992, 2004. 55

[ES81] Shimon Even and Yossi Shiloach. An on-line edge-deletion problem. Journal of the ACM (JACM), 28(1):1–4, 1981. 1, 54, 74

[Fre85] Greg N. Frederickson. Data structures for on-line updating of minimum spanning trees, with applications. SIAM J. Comput., 14(4):781–798, 1985. Announced at STOC'83. 1

[GLN+] Yu Gao, Jason Li, Danupon Nanongkai, Richard Peng, Thatchaphol Saranurak, and Sorrachai Yingchareonthawornchai. Deterministic graph cuts in subquadratic time: sparse, balanced, and k-vertex. Unpublished. 37, 38

[GLS+19] Fabrizio Grandoni, Stefano Leonardi, Piotr Sankowski, Chris Schwiegelshohn, and Shay Solomon. (1+ε)-approximate incremental matching in constant deterministic amortized time. In Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2019, San Diego, California, USA, January 6-9, 2019, pages 1886–1898, 2019. 3

[GP13] Manoj Gupta and Richard Peng. Fully dynamic (1+ε)-approximate matchings. In , pages 548–557. IEEE Computer Society, 2013. 3

[Gup14] Manoj Gupta. Maintaining approximate maximum matching in an incremental bipartite graph in polylogarithmic update time. In , pages 227–239, 2014. 3

[GW20] Maximilian Probst Gutenberg and Christian Wulff-Nilsen. Decremental SSSP in weighted digraphs: faster and against an adaptive adversary. In Shuchi Chawla, editor, Proceedings of the 2020 ACM-SIAM Symposium on Discrete Algorithms, SODA 2020, Salt Lake City, UT, USA, January 5-8, 2020, pages 2542–2561.
SIAM, 2020. 1, 2, 48, 50, 52

[GWN20] Maximilian Probst Gutenberg and Christian Wulff-Nilsen. Deterministic algorithms for decremental approximate shortest paths: faster and simpler. In Proceedings of the Fourteenth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 2522–2541. SIAM, 2020. 55

[GWW20] Maximilian Probst Gutenberg, Virginia Vassilevska Williams, and Nicole Wein. New algorithms and hardness for incremental single-source shortest paths in directed graphs. In Symposium on Theory of Computing, 2020. 2

[HdLT01] Jacob Holm, Kristian de Lichtenberg, and Mikkel Thorup. Poly-logarithmic deterministic fully-dynamic algorithms for connectivity, minimum spanning tree, 2-edge, and biconnectivity. J. ACM, 48(4):723–760, 2001. 1

[HK73] John E. Hopcroft and Richard M. Karp. An n^{5/2} algorithm for maximum matchings in bipartite graphs. SIAM Journal on Computing, 2(4):225–231, 1973. 10

[HK95] Monika Rauch Henzinger and Valerie King. Fully dynamic biconnectivity and transitive closure. In , pages 664–672, 1995. 1

[HK99] Monika Rauch Henzinger and Valerie King. Randomized fully dynamic graph algorithms with polylogarithmic time per operation. J. ACM, 46(4):502–516, 1999. 1, 74

[HKK19] Meike Hatzel, Ken-ichi Kawarabayashi, and Stephan Kreutzer. Polynomial planar directed grid theorem. In Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2019, San Diego, California, USA, January 6-9, 2019, pages 1465–1484, 2019. 2

[HKM+12] Bernhard Haeupler, Telikepalli Kavitha, Rogers Mathew, Siddhartha Sen, and Robert E. Tarjan. Incremental cycle detection, topological ordering, and strong component maintenance. ACM Trans. Algorithms, 8(1):3:1–3:33, January 2012. 2

[HKN14a] Monika Henzinger, Sebastian Krinninger, and Danupon Nanongkai. Decremental single-source shortest paths on undirected graphs in near-linear total update time. In , pages 146–155, 2014. 55

[HKN14b] Monika Henzinger, Sebastian Krinninger, and Danupon Nanongkai.
Sublinear-time decremental algorithms for single-source reachability and shortest paths on directed graphs. In Symposium on Theory of Computing, STOC 2014, New York, NY, USA, May 31 - June 03, 2014, pages 674–683, 2014. 1

[HKN15] Monika Henzinger, Sebastian Krinninger, and Danupon Nanongkai. Improved algorithms for decremental single-source reachability on directed graphs. In Automata, Languages, and Programming - 42nd International Colloquium, ICALP 2015, Kyoto, Japan, July 6-10, 2015, Proceedings, Part I, pages 725–736, 2015. 1

[HKNS15] Monika Henzinger, Sebastian Krinninger, Danupon Nanongkai, and Thatchaphol Saranurak. Unifying and strengthening hardness for dynamic problems via the online matrix-vector multiplication conjecture. In Proceedings of the Forty-Seventh Annual ACM on Symposium on Theory of Computing, STOC 2015, Portland, OR, USA, June 14-17, 2015, pages 21–30, 2015. 1, 3

[HRW17] Monika Henzinger, Satish Rao, and Di Wang. Local flow partitioning for faster edge connectivity. In Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2017, Barcelona, Spain, Hotel Porta Fira, January 16-19, pages 1919–1938, 2017. 64, 66

[IKLS17] Giuseppe F. Italiano, Adam Karczmarz, Jakub Lacki, and Piotr Sankowski. Decremental single-source reachability in planar digraphs. In Hamed Hatami, Pierre McKenzie, and Valerie King, editors, Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2017, Montreal, QC, Canada, June 19-23, 2017, pages 1108–1121. ACM, 2017. 1

[JRST01] Thor Johnson, Neil Robertson, Paul D. Seymour, and Robin Thomas. Directed tree-width. J. Comb. Theory, Ser. B, 82(1):138–154, 2001. 2

[KK15] Ken-ichi Kawarabayashi and Stephan Kreutzer. The directed grid theorem. In Proceedings of the Forty-Seventh Annual ACM on Symposium on Theory of Computing, STOC 2015, Portland, OR, USA, June 14-17, 2015, pages 655–664, 2015. 2

[KKOV07] Rohit Khandekar, Subhash Khot, Lorenzo Orecchia, and Nisheeth K. Vishnoi.
On a cut-matching game for the sparsest cut problem. Univ. California, Berkeley, CA, USA, Tech. Rep. UCB/EECS-2007-177, 2007. 3, 38, 39, 44

[KPP16] Tsvi Kopelowitz, Seth Pettie, and Ely Porat. Higher lower bounds from the 3SUM conjecture. In Proceedings of the Twenty-Seventh Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2016, Arlington, VA, USA, January 10-12, 2016, pages 1272–1287, 2016. 3

[KRV09] Rohit Khandekar, Satish Rao, and Umesh V. Vazirani. Graph partitioning using single commodity flows. J. ACM, 56(4):19:1–19:15, 2009. 3, 37, 38

[Lac11] Jakub Lacki. Improved deterministic algorithms for decremental transitive closure and strongly connected components. In Dana Randall, editor, Proceedings of the Twenty-Second Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2011, San Francisco, California, USA, January 23-25, 2011, pages 1438–1445. SIAM, 2011. 7, 9, 48, 72

[Lou10] Anand Louis. Cut-matching games on directed graphs. CoRR, abs/1010.1047, 2010. 3, 38

[LR04] Kevin J. Lang and Satish Rao. A flow-based method for improving the expansion or conductance of graph cuts. In Integer Programming and Combinatorial Optimization, 10th International IPCO Conference, New York, NY, USA, June 7-11, 2004, Proceedings, pages 325–337, 2004. 64

[Mad10] Aleksander Madry. Faster approximation schemes for fractional multicommodity flow problems via dynamic graph algorithms. In Proceedings of the 42nd ACM Symposium on Theory of Computing, STOC 2010, Cambridge, Massachusetts, USA, 5-8 June 2010, pages 121–130, 2010. 1

[MMP+19] Tomáš Masařík, Irene Muzi, Marcin Pilipczuk, Paweł Rzążewski, and Manuel Sorge. Packing directed circuits quarter-integrally. In , pages 72:1–72:13, 2019. 2

[NS17] Danupon Nanongkai and Thatchaphol Saranurak. Dynamic spanning forest with worst-case update time: adaptive, Las Vegas, and O(n^{1/2−ε})-time. In Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2017, Montreal, QC, Canada, June 19-23, 2017, pages 1122–1129, 2017.
1, 2, 26, 37

[NSW17] Danupon Nanongkai, Thatchaphol Saranurak, and Christian Wulff-Nilsen. Dynamic minimum spanning forest with subpolynomial worst-case update time. In FOCS, pages 950–961. IEEE Computer Society, 2017. 1, 2, 3, 26

[OA14] Lorenzo Orecchia and Zeyuan Allen Zhu. Flow-based algorithms for local graph clustering. In Proceedings of the Twenty-Fifth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2014, Portland, Oregon, USA, January 5-7, 2014, pages 1267–1286, 2014. 64

[OSVV08] Lorenzo Orecchia, Leonard J. Schulman, Umesh V. Vazirani, and Nisheeth K. Vishnoi. On partitioning graphs via single commodity flows. In Proceedings of the 40th Annual ACM Symposium on Theory of Computing, Victoria, British Columbia, Canada, May 17-20, 2008, pages 461–470, 2008. 37

[PD04] Mihai Patrascu and Erik D. Demaine. Lower bounds for dynamic connectivity. In Proceedings of the 36th Annual ACM Symposium on Theory of Computing, Chicago, IL, USA, June 13-16, 2004, pages 546–553, 2004. 1

[Ree99] Bruce A. Reed. Introducing directed tree width. Electron. Notes Discret. Math., 3:222–229, 1999. 2

[RRST96] Bruce A. Reed, Neil Robertson, Paul D. Seymour, and Robin Thomas. Packing directed circuits. Combinatorica, 16(4):535–554, 1996. 2

[RST14] Harald Räcke, Chintan Shah, and Hanjo Täubig. Computing cut-based hierarchical decompositions in almost linear time. In Proceedings of the Twenty-Fifth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2014, Portland, Oregon, USA, January 5-7, 2014, pages 227–238, 2014. 37

[ST83] Daniel Dominic Sleator and Robert Endre Tarjan. A data structure for dynamic trees. J. Comput. Syst. Sci., 26(3):362–391, 1983. 65

[SW19] Thatchaphol Saranurak and Di Wang. Expander decomposition and pruning: Faster, stronger, and simpler. In SODA, pages 2616–2635. SIAM, 2019. 3, 37, 44, 64, 66

[Tar72] Robert Tarjan. Depth-first search and linear graph algorithms. SIAM Journal on Computing, 1(2):146–160, 1972. 52

[Tho00] Mikkel Thorup.
Near-optimal fully-dynamic graph connectivity. In Proceedings of the Thirty-Second Annual ACM Symposium on Theory of Computing, May 21-23, 2000, Portland, OR, USA, pages 343–350, 2000. 1

[Waj20] David Wajc. Rounding dynamic matchings against an adaptive adversary. In Proceedings of the 52nd Annual ACM SIGACT Symposium on Theory of Computing, pages 194–207, 2020. 10, 11, 55, 63, 64

[Wul17] Christian Wulff-Nilsen. Fully-dynamic minimum spanning forest with improved worst-case update time. In Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2017, Montreal, QC, Canada, June 19-23, 2017, pages 1130–1143, 2017. 1, 2

A Proofs Omitted From Main Body of Conference Submission

In this section, we fill in some of the proofs that were omitted in the main body of the paper.

A.1 Analysis of Algorithm 1

In this section, we give the complete analysis of our decremental SCC algorithm (Algorithm 1) from Section 4. In particular, we show that it satisfies the bounds of Theorem 1.1.

Correctness Analysis We need to show that after the algorithm finishes processing an update, the sets C_1, . . . , C_k ∈ C are precisely the SCCs of G*[V* \ Ŝ].

First we show that each G[C_i] is strongly connected in G*[V* \ Ŝ]. We know that Robust-Witness in Line 5 maintains a large witness W_i for C_i, since otherwise it would have decomposed C_i into smaller parts (Line 10). Similarly, the fact that Forest-From-Witness(C_i, W_i, φ*) did not decompose C_i in Line 19 implies that every vertex in V(G) is strongly connected to W_i (see the invariant in Theorem 4.4). Since W_i is itself strongly connected (because it is an expander), all of C_i is strongly connected.

We now show by induction that no pair C_i, C_j ∈ C is strongly connected. This clearly holds at the beginning, since C starts with a single element. Now, there are two lines in which C can change: Line 10 and Line 19.
In both cases, C is replaced with C′ and C′′, where C′ = L and C′′ = R for some vertex cut (L, S, R) in C. By the definition of a vertex cut, L and R are not strongly connected in G*[C \ S]. Since S is added to Ŝ, it is easy to check that L and R will also not be strongly connected in G*[V* \ Ŝ].

Finally, we show that the input conditions of each of the subroutines are satisfied. Firstly, all updates to Path-Inside-Expander(W) come from changes made to W by Robust-Witness(G, φ*); the latter always ensures that W is a witness, so it is always a 1/n^{o(1)}-expander, as required by Path-Inside-Expander(W). Secondly, Robust-Witness(G, φ*) always maintains a large witness, so in Forest-From-Witness(G, W, φ*), we always obey the promise that |V(W)| ≥ n/2.

Update-Time Analysis For any v ∈ V*, define X(v) to be the number of calls SCC-Helper(G) for which v ∈ V(G). The key to our analysis is to show that X(v) = n^{o(1)} for all v ∈ V*. To see this, consider any call SCC-Helper(G) for which v ∈ V(G), other than the initial call SCC-Helper(G*). This call could only have been created in Line 10 or Line 19 of an earlier call SCC-Helper(G′). It is easy to see from the algorithm that the call SCC-Helper(G′) must have terminated as soon as SCC-Helper(G) was created. We now complete the claim by arguing that |V(G)| ≤ (1 − α)|V(G′)| for some parameter α = 1/n^{o(1)}. To see this, consider two cases. The first is that SCC-Helper(G) was created in Line 10 of SCC-Helper(G′). In this case G is equal to G′[L] or G′[R] for some vertex cut (L, S, R) in G′. Theorem 4.3 guarantees that this vertex cut is (1/n^{o(1)})-balanced, so we have the desired |V(G)| ≤ (1 − 1/n^{o(1)})|V(G′)|. The second case is that SCC-Helper(G) was created in Line 19 of SCC-Helper(G′).
In this case G = G′[L] for some vertex cut (L, S, R) in G′; by the definition of a vertex cut, we have |L| ≤ |V(G′)|/2, as desired.

Now consider the total running time of the three subroutines in SCC-Helper(G): Robust-Witness(G, φ*), Path-Inside-Expander(W), and Forest-From-Witness(G, W, φ*). The first subroutine has a total update time of Ô(|E(G)|/(φ*)^2) = Ô(|E(G)| · n^{2/3}), where n = |V*|. The second has total update time Ô(|E(G)|) (Theorem 4.5), but it must be reset every time Robust-Witness(G, φ*) enters a new phase (Line 14): since the total number of phases is Ô(1/φ*) (Theorem 4.3), the total update time for Path-Inside-Expander in the call to SCC-Helper(G) is Ô(|E(G)|/φ*) = Ô(|E(G)| · n^{1/3}). Finally, Forest-From-Witness(G, W, φ*) has total update time Ô(|E(G)|/φ*) (Theorem 4.4); multiplying by Ô(1/φ*) phases yields total update time Ô(|E(G)| · n^{2/3}).

The total update time for a single call SCC-Helper(G) is thus Ô(|E(G)| · n^{2/3}). This can clearly be upper bounded by Ô(n^{2/3} ∑_{v∈V(G)} deg(v)), where deg(v) is the degree of v in the main graph G* at time zero (before any deletions). It is thus easy to check that the total update time of all calls SCC-Helper(G) is at most Ô(n^{2/3} ∑_{v∈V*} deg(v) · X(v)). Since we showed at the beginning of the proof that X(v) = n^{o(1)}, we have a total update time of Ô(n^{2/3} · n^{o(1)} · ∑_{v∈V*} deg(v)) = Ô(mn^{2/3}), as desired.

The final component of the total update time is the quantity O(m|Ŝ|) from Proposition 4.1, where |Ŝ| refers to the largest size that Ŝ ever reaches. We complete the proof by showing that we always have |Ŝ| = Ô(n^{2/3}). To see this, note that SCC-Helper(G) only adds to Ŝ in Lines 10 or 19. In either case, it adds the set S from a vertex cut (L, S, R), and in either case the vertices in L join a new call SCC-Helper(G[L]).
Moreover, the vertex cut is always Ô(φ*)-sparse (by Theorems 4.3 and 4.4), so we have |S| = Ô(|L|φ*). Thus, if we give a vertex a token every time it participates in some new call SCC-Helper(G), then we can charge every vertex in Ŝ to Ω̂(1/φ*) tokens. Since we have X(v) = n^{o(1)} for all v ∈ V*, we can conclude that the total number of tokens is Ô(n), so |Ŝ| = Ô(nφ*) = n^{2/3+o(1)}.

Query-Time Analysis By Proposition 4.1, all we need to show is that each SCC-Helper(G) has almost path-length query time. Say that the query is from u to v in some G ∈ C. Let W be the witness maintained by Robust-Witness(G, φ*) in Line 5. We use Forest-From-Witness(G, W, φ*) to find paths P_{uW} = u → w_1 and P_{Wv} = w_2 → v for some w_1, w_2 ∈ W (if u or v is in W, the corresponding path is empty). We then use Path-Inside-Expander(W) to find a path P_W = w_1 → w_2 in W; using the embedding of W into G, P_W can easily be transformed into a path P_G in G. We then return the path P = P_{uW} ◦ P_G ◦ P_{Wv}. It is not hard to show that the resulting query time is Ô(|P|). The issue is that the path P might not be simple. We can always find a simple path P′ inside P, but if |P′| ≪ |P|, then the time we spent is not proportional to |P′|.

To guarantee that we return a simple path in almost path-length query time, we need a more clever query procedure. The details are in Section 8.

A.2 Proof of Theorem 1.3

In this section, we show that our main theorem for decremental matching (Theorem 1.3) follows easily from Algorithm Robust-Matching (Lemma 5.1).

Proof of Theorem 1.3. We start with a deterministic algorithm that maintains a fractional matching. The algorithm is as follows. Initialize µ* = n. The algorithm runs Robust-Matching(G, µ*) to maintain the matching M. When Robust-Matching terminates, multiply µ* by (1 − ǫ) and again run Robust-Matching(G, µ*).
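The outer loop just described can be sketched as follows. This is a minimal illustration only: robust_matching is a hypothetical stand-in for the paper's Robust-Matching(G, µ*) subroutine, and we assume the loop stops once the threshold µ* drops below 1, so that it runs O(log(n)/ǫ) times in total.

```python
def decremental_matching_driver(graph, n, eps, robust_matching):
    """Sketch of the outer loop: geometrically decrease the threshold mu*
    and restart the (hypothetical) robust_matching subroutine each time it
    terminates.  Returns the number of restarts, which is O(log(n)/eps)."""
    mu_star = float(n)
    rounds = 0
    while mu_star >= 1:
        # Maintains a fractional matching M with val(M) >= mu*(1 - eps)
        # until the maximum matching value may have dropped below mu*.
        robust_matching(graph, mu_star)
        mu_star *= (1.0 - eps)
        rounds += 1
    return rounds
```

The point of the geometric decrease is that the subroutine is restarted only O(log(n)/ǫ) times, so its total cost is multiplied by only a logarithmic factor.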
Terminate when µ* < 1. The algorithm thus runs Robust-Matching(G, µ*) O(log(n)/ǫ) times, which yields the desired total update time of O(m log^2(n)/ǫ^2). If M is the matching maintained by some Robust-Matching(G, µ*), then by Lemma 5.1, val(M) ≥ µ*(1 − ǫ). If µ* = n, we clearly have a (1 − ǫ)-approximate matching. Else, since µ* < n, we know that Robust-Matching(G, µ*/(1 − ǫ)) already terminated, so µ(G) ≤ µ*, and so M is a (1 − ǫ)-approximate matching.

Finally, to obtain an integral matching, we plug the above result into the black-box result of Wajc [Waj20] for converting a dynamic fractional matching into a dynamic integral matching. Consider Theorem 3.7 of [Waj20]. We have just shown an algorithm with T_f(n, m) = O(log^2(n)/ǫ^2). We set γ = 3; as indicated in Section 2 of [Waj20], we then have T_c(n, m) = O(1) using a simple randomized algorithm for 3∆ edge-coloring that works against an adaptive adversary. Finally, we set d = O(log(1/ǫ)/ǫ) as in Lemma 4.5 of [Waj20]. By Theorem 3.7 of [Waj20], the update time of the resulting algorithm is then O(T_f(n, m) · T_c(n, m) + log(n/ǫ) · γ · d/ǫ) = O(log^2(n)/ǫ^2 + log(n/ǫ) · log(1/ǫ)/ǫ^2). Since we always set ǫ = Ω(1/n), our amortized update time for integral matching is the same as for fractional matching: O(log^2(n)/ǫ^2).

Note that as a result of this conversion the algorithm becomes randomized, but it still works against an adaptive adversary.

B Implementation of Flow Subroutines

Throughout this paper, we use several flow subroutines in various contexts (e.g. expander pruning, embedding a robust witness, finding an approximate matching, etc.). All these flow algorithms are based on the same technique: bounded-height variants of the push-relabel and blocking-flow algorithms. This idea has been used explicitly many times before (e.g. [LR04, OA14, HRW17, SW19]).
Our contribution in this section is only to give a uniform presentation in which all of our flow subroutines can be implemented using the same framework, and to give proofs for completeness. We start by introducing notation in Appendix B.1; then we describe the guarantees of the bounded-height variants of the push-relabel and blocking-flow algorithms in Appendix B.2. All the flow subroutines share the same framework, which is described in Appendix B.3. Using this framework, we instantiate the flow subroutines that we need throughout the paper in Appendix B.4.

B.1 Flow Notations

The notation below is slightly adjusted from [SW19] because we work with directed graphs instead of undirected graphs.

A flow problem Π on a directed graph G = (V, E) is specified by a source function ∆ : V → ℝ_{≥0}, a sink capacity function T : V → ℝ_{≥0}, and edge capacities c : E → ℝ_{≥0}. We say that Π is integral if ∆ : V → ℤ_{≥0}, T : V → ℤ_{≥0}, and c : E → ℤ_{≥0}. More generally, for any number d ≥ 1, we say that Π is 1/d-integral if ∆ : V → (1/d)·ℤ_{≥0}, T : V → (1/d)·ℤ_{≥0}, and c : E → (1/d)·ℤ_{≥0}. We use mass to refer to the substance being routed. For a vertex v, ∆(v) specifies the amount of mass initially placed on v, and T(v) specifies the capacity of v as a sink. For an edge e, c(e) bounds how much mass can be routed along the edge.

A routing (or flow) f : E → ℝ_{≥0} is 1/d-integral if f : E → (1/d)·ℤ_{≥0}. We interpret f(u, v) > 0 as routing f(u, v) units of mass from u to v. If f(u, v) = c(u, v), then we say (u, v) is saturated. Given ∆, we also treat f as a function on vertices, where f(v) = ∆(v) + ∑_u f(u, v) − ∑_u f(v, u) is the amount of mass ending at v after the routing f. If f(v) ≥ T(v), then we say v's sink is saturated. For A, B ⊂ V, let f(A, B) = ∑_{(u,v)∈E∩(A×B)} f(u, v) be the total mass routed directly from A to B.

We say that f is a feasible routing/flow for Π if f(u, v) ≤ c(u, v) for each edge e = (u, v) (i.e. f obeys the edge capacities), ∑_u f(v, u) ≤ ∆(v) for each v (i.e.
the net amount of mass routed away from a vertex can be at most the amount of its initial mass), and f(v) ≤ T(v) for each v (i.e. no excess mass at any vertex).

Given a flow problem Π = (∆, T, c), a pre-flow f is a feasible routing for Π except that the condition ∀v : f(v) ≤ T(v) may not be satisfied. As a pre-flow may not obey the sink capacities on the vertices, we define the absorbed mass at a vertex v as ab_f(v) = min(f(v), T(v)). We have ab_f(v) = T(v) iff v's sink is saturated. The excess at v is ex_f(v) = f(v) − ab_f(v). By definition, when there is no excess (∀v : ex_f(v) = 0), f is a feasible flow for Π. Intuitively, we think of ∆(v) − T(v) as the initial excess at v, and ex_f(v) is the excess at v after routing f. Similarly, we think of min{∆(v), T(v)} as the initial absorbed mass at v, and ab_f(v) is the absorbed mass at v after routing f. For any S ⊆ V, we usually write x(S) = ∑_{v∈S} x(v), where x can be any of {∆, T, ex_f, ab_f} (e.g. ∆(S), ab_f(S)). We omit the subscript whenever it is clear.

For any directed path P, let |P| denote the number of edges in P. A path decomposition of a pre-flow f is a collection P_f of directed paths, each path P ∈ P_f carrying a value val(P) > 0, such that ∑_{P∈P_f : P∋e} val(P) = f(e) for all e ∈ E.

B.2 Bounded Height Push-Relabel and Blocking Flow

The following proposition is the key algorithmic component for the whole section.

Proposition B.1. There is an algorithm that, given a directed n-vertex m-edge graph G = (V, E), a height parameter h ≥ 1, and a flow problem Π = (∆, T, c), returns a preflow f together with labels on the vertices l : V → {0, . . . , h} such that:

1. If l(u) > l(v) + 1 and (u, v) ∈ E, then the edge (u, v) is saturated, i.e., f(u, v) = c(u, v).
2. If l(u) > l(v) + 1 and (v, u) ∈ E, then the edge (v, u) carries no mass, i.e., f(v, u) = 0.
3. If l(v) < h, then v has no excess, i.e. ex_f(v) = 0.
4.
If l(v) > 0, then v's sink is saturated, i.e. ab_f(v) = T(v).
5. After routing f, the excess does not increase and the absorbed mass never decreases, i.e. ex_f(v) ≤ max{∆(v) − T(v), 0} and ab_f(v) ≥ min{∆(v), T(v)} for all v.

The algorithm takes at most O(mh log m) time. If Π is 1/d-integral for some number d ≥ 1, then so is f. If Π is integral, T(v) ≥ deg(v) for all v ∈ V, and the algorithm can access the adjacency list of every vertex, then the running time can be reduced to O(∆(V)·h).

Proof. The statement simply summarizes the output that one can obtain from performing blocking-flow computations for ≈ h rounds (instead of the ≈ n rounds used when we want to solve the exact max-flow problem).

We explain this idea in more detail. Let us create a graph G′ from the graph G by adding a super-source vertex s and a super-sink vertex t. For each u ∈ V, we add an edge (s, u) with capacity ∆(u). For each u ∈ V, we add an edge (u, t) with capacity T(u). Then, we run blocking flow for at most h + 2 rounds, until the (unweighted) distance between s and t in the residual graph G′_f of G′ is at least h + 2. Each blocking-flow computation takes O(m log m) time (even when the flow problem Π is fractional). This running time is achieved by using the link-cut tree data structure (see the details in Section 6 of [ST83], pages 387–389). So the total running time is O(mh log m). This completes the running time analysis.

Let f be the flow in G′ obtained after the blocking-flow computations. We define the vertex labeling l : V ∪ {s, t} → {0, . . . , h} as follows. For 0 ≤ i ≤ h, we set l(u) = h − i + 1, where i is the (unweighted) distance between s and u in G′_f. For all vertices u whose distance from s is more than h, we set l(u) = 0. By definition, l(s) = h + 1. Also, as the distance from s to t in G′_f is h + 2, for each label i ∈ {0, . . . , h}, there must exist a vertex with label i.
Observe that, for any edge (u, v) with |l(u) − l(v)| > 1, the residual capacity in G′_f must be c_f(u, v) = 0. This implies Item 1 and Item 2.

Note that we can view f as a preflow on G by restricting f to the edges of G. Observe that the flow value on (u, t) in G′ corresponds to the absorbed mass at u in G, i.e. f(u, t) = ab_f(u). Also, the residual capacity c_f(s, u) of (s, u) in G′_f corresponds to the excess at u after routing f, i.e. c_f(s, u) = ∆(u) − f(s, u) = ex_f(u). So if l(u) < h, then c_f(s, u) = 0 and hence ex_f(u) = 0. Also, if l(u) > 0, then c_f(u, t) = 0 and hence ab_f(u) = T(u). This implies Item 3 and Item 4.

In fact, our algorithm performs some simple preprocessing. For each u, we assume that we start with an initial flow through (s, u) and (u, t) of value min{∆(u), T(u)}. So initially there are min{∆(u), T(u)} units of mass absorbed at u, and the initial excess at u is max{∆(u) − T(u), 0}. Since the blocking-flow computations have the property that the flow value incident to s and t never decreases, Item 5 follows. This completes the correctness of the algorithm with running time O(mh log m). Note that the implementation of blocking flow using link-cut trees also guarantees that the flow value on each edge is 1/d-integral if the given flow problem Π is 1/d-integral.

Lastly, we need to show that if Π is integral, T(v) ≥ deg(v) for all v ∈ V, and the algorithm can access the adjacency list of every vertex, then the running time is O(∆(V)·h). Here, the proposition is simply a summary of the output of the Unit Flow algorithm of Henzinger, Rao, and Wang [HRW17] (see also [SW19]), where Unit Flow is a bounded-height variant of the push-relabel algorithm.

Lemma B.2. Given a preflow f from the algorithm of Proposition B.1, any path decomposition P_f of f satisfies ∑_{P∈P_f} val(P)·|P| ≤ ∆(V)·h.
Moreover, if the flow problem Π is 1/d-integral, then a decomposition P_f can be computed in time O(∑_{P∈P_f} |P|) = O(d·∆(V)·h).

Proof. By the definition of a path decomposition, we have ∑_{P∈P_f : P∋e} val(P) = f(e) for all e ∈ E. So

∑_{P∈P_f} val(P)·|P| = ∑_{e∈E} ∑_{P∈P_f : P∋e} val(P) = ∑_{e∈E} f(e).

Now, we want to show that ∑_{e∈E} f(e) ≤ ∆(V)·h. Consider the bounded-height blocking-flow or push-relabel algorithm. The algorithm always sends flow along a path of length at most h in the residual graph. So even the total flow value over all edges "without flow cancellation" must be at most ∆(V)·h. As ∑_{e∈E} f(e) is the total flow value over all edges "after flow cancellation", we conclude that ∑_{e∈E} f(e) ≤ ∆(V)·h.

We can find a path decomposition P_f of a flow f in time O(∑_{P∈P_f} |P|) as follows. As the flow problem Π is 1/d-integral, the flow value f(e) on each edge is also 1/d-integral by Proposition B.1. Let H be the graph induced by the edges e with positive flow value f(e) > 0. We make H unweighted by scaling up all edges in H by a factor of d. Then, we add a dummy source to H and perform a depth-first search on H from the dummy source. Whenever the search reaches a sink u (i.e. ab_f(u) > 0) or cannot proceed from u (i.e. ex_f(u) > 0), we output the traversed path P, excluding the dummy source. Note that P is a directed simple path in H and corresponds to a flow path of value 1/d. We remove the path P from H and repeat.

Observe that each edge in H is read at most twice, so the total time is subsumed by the total time for outputting all paths, which is O(∑_{P∈P_f} |P|). As val(P) ≥ 1/d, we have O(∑_{P∈P_f} |P|) ≤ O(∑_{P∈P_f} d·val(P)·|P|) = O(d·∆(V)·h).

For convenience, we will use the following notation throughout this section.

Definition B.3. Given a vertex labeling l : V → {0, .
 . . , h}, let V_i = {u | l(u) = i} for each i. Also, we define V_{≥i} = {u | l(u) ≥ i}, and define V_{>i}, V_{≤i}, V_{<i} analogously.
B.3 The Framework

Given a flow problem Π = (∆, T, c) for a graph G = (V, E), all the algorithms in Appendix B.4 start by calling Proposition B.1 with a parameter h, obtaining a preflow f and a vertex labeling l : V → {0, . . . , h}.

If the total excess after routing f is at most z, then the algorithm just returns f and we are done. Otherwise, the algorithm returns one of the level-i cuts (V_{≥i}, V_{<i}): by Item 1 of Proposition B.1, every edge (u, v) ∈ E(V_{>i}, V_{<i}) is saturated, i.e. f(u, v) = c(u, v), so the total capacity c(E(V_{≥i}, V_{<i})) can be bounded in terms of the flow f crossing the cut. Remark
B.1. Let f be a preflow returned by any of the algorithms below in this section. Note that every algorithm below starts by calling the algorithm from Proposition B.1 with parameter h on some graph. Note that any vertex with ex_f(v) > 0 must have ∆(v) − T(v) > 0; this follows from Item 5 of Proposition B.1. By simply scanning the vertices with initial excess and removing the excess after routing f, we obtain from f a feasible flow f′ of value ∆(V) − ex_f(V). The time to remove this excess is clearly subsumed by the running time of the algorithm, because the algorithm at least needs to read all vertices with initial excess. Moreover, by Lemma B.2, we can obtain a path decomposition of f in additional time O(d·∆(V)·h) if the flow problem is 1/d-integral.

B.4.1 Local Flow

The algorithm below either routes most of the mass or finds a balanced sparse cut, in local time. We need that the given flow problem is integral and that each vertex can absorb at least as much mass as its degree.

Lemma B.7 (Local Flow). There is a deterministic algorithm that, given access to the adjacency list of every vertex of a directed m-edge graph G = (V, E), parameters z ≥ 0 and h ≥ 1, and an integral flow problem Π = (∆, T, c) with total capacity C = ∑_{e∈E} c(e) where

1. ∀v, ∆(v) ≤ ∆·deg(v) and T(v) ≥ deg(v), where deg(v) denotes the unweighted degree of v in G,

in O(∆(V)·h) time either

• returns a preflow f with total excess ex_f(V) ≤ z, or
• returns a set S where z/∆ < vol(S) ≤ ∆(V) and c(E(S, V \ S)) ≤ ∆(S) − T(S) − z + vol_c(S)·(10 log C)/h.

Proof. We call Proposition B.1 with parameter h. By the assumptions of the lemma, the running time is O(∆(V)h). Suppose that ex_f(V) > z, as otherwise we are done. By Proposition B.5, we know vol(V_h)·∆ ≥ ∆(V_h) > z because ∀v ∈ V, ∆(v) ≤ ∆·deg(v). So vol(V_h) > z/∆. Also, observe that vol(V_{≥1}) ≤ ∆(V).
This is because all the vertices in V_{≥1} are fully absorbed by Item 4 of Proposition B.1, so T(V_{≥1}) ≤ ab_f(V_{≥1}) ≤ ∆(V), and because T(V_{≥1}) ≥ vol(V_{≥1}) as T(v) ≥ deg(v) for all v.

By the ball-growing argument, there is an index 0 < i ≤ h such that c(E(V_i, V_{i−1}) ∪ E(V_{i−1}, V_i)) ≤ min{vol_c(V_{≥i}), vol_c(V_{<i})} ·
(10 log C)/h. Otherwise, vol_c(V_{≥1}) ≥ (1 + (10 log C)/h)^h > C, which is a contradiction. We fix such an i and set S = V_{≥i}. As 0 < i ≤ h, we have z/∆ < vol(S) ≤ ∆(V). By Proposition B.6, we have c(E(S, V \ S)) = c(E(V_i, V_{i−1})) + c(E(V_{≥i}, V_{<i}) \ E(V_i, V_{i−1})) ≤ ∆(S) − T(S) − z + vol_c(S)·(10 log C)/h, which is the claimed bound.

B.4.2 Global Flow

Lemma B.8 (Global Flow). There is a deterministic algorithm that, given a directed m-edge graph G = (V, E), an excess parameter z ≥ 0, a height parameter h ≥ 1, and a flow problem Π = (∆, T, c) with total capacity C = ∑_{e∈E} c(e) where

1. ∆(V) ≤ T(V),
2. ∀v ∈ V, ∆(v), T(v) ≤ 1,

in O(mh log m) time, either

• returns a preflow f with total excess ex_f(V) ≤ z, or
• returns a set S ⊂ V where |S|, |V \ S| > z and c(E(S, V \ S)) ≤ ∆(S) − T(S) − z + min{vol_c(S), vol_c(V \ S)}·(10 log C)/h.

Proof. We call Proposition B.1 with parameter h, in O(mh log m) time. Suppose that ex_f(V) > z. By Proposition B.5, we know |V_h| ≥ ∆(V_h) > z and |V_0| ≥ T(V_0) > z, because ∀v ∈ V, ∆(v), T(v) ≤ 1. By the ball-growing argument, there is an index 0 < i ≤ h such that c(E(V_i, V_{i−1}) ∪ E(V_{i−1}, V_i)) ≤ min{vol_c(V_{≥i}), vol_c(V_{<i})} ·
(10 log C)/h. Otherwise, vol_c(V_{≥h/2}) ≥ (1 + (10 log C)/h)^{h/2} > C, which is a contradiction. We fix such an i and set S = V_{≥i}. As 0 < i ≤ h, we have |S|, |V \ S| > z. By Proposition B.6, we have c(E(S, V \ S)) = c(E(V_i, V_{i−1})) + c(E(V_{≥i}, V_{<i}) \ E(V_i, V_{i−1})) ≤ ∆(S) − T(S) − z + min{vol_c(S), vol_c(V \ S)}·(10 log C)/h, which is the claimed bound.
B.4.3 Global Flow for Matchings

The algorithm below computes an approximate bipartite matching; this is why the graph G = (L, R, E) is bipartite and only has edges from L to R. The algorithm either sends at least ∆(V) − z units of flow (i.e. a large fractional matching) or finds a cut S whose residual capacity is at most 2(∆(V) − z)/h. So this gives a 2/h-approximation algorithm for bipartite matching.

Lemma B.9 (Global Flow for Matchings). There is a deterministic algorithm that, given a directed bipartite m-edge graph G = (V = (L, R), E) where E ⊆ L × R, an excess parameter z ≥ 0, a height parameter h ≥ 1, and a flow problem Π = (∆, T, c) where the following holds:

1. ∆(R) = 0, T(L) = 0, ∆(L) ≤ T(R),
2. ∀v ∈ V, ∆(v), T(v) ≤ 1,

in O(mh log m) time, either

• returns a preflow f with total excess ex_f(L ∪ R) ≤ z, or
• returns a set S ⊂ V(G) where |S|, |V \ S| > z and c(E(S, V \ S)) ≤ ∆(S) − T(S) − z + 2·(∆(V) − z)/h.

Proof. We call Proposition B.1 with parameter h, in O(mh log m) time. Suppose that ex_f(V) > z, as otherwise we are done. By Proposition B.5, we know |V_h| ≥ ∆(V_h) > z and |V_0| ≥ T(V_0) > z, because ∀v ∈ V, ∆(v), T(v) ≤ 1. By Lemma B.4, V_h, V_{h−2}, V_{h−4}, . . . are subsets of L and V_{h−1}, V_{h−3}, V_{h−5}, . . . are subsets of R. So ∑_{i≥1} f(V_{h−2i}, V_{h−2i+1}) ≤ f(L, R) ≤ ∆(V) − z. So there is an index 1 ≤ i ≤ h/2 with f(V_{h−2i}, V_{h−2i+1}) ≤ 2(∆(V) − z)/h. Fix such an i and set S = V_{>h−2i}. As S ⊇ V_h and S ∩ V_0 = ∅, we have |S|, |V \ S| > z.
By Proposition B.6, we have

c(E(S, V \ S)) = c(E(V_{h−2i+1}, V_{h−2i})) + c(E(V_{>h−2i}, V_{≤h−2i}) \ E(V_{h−2i+1}, V_{h−2i}))
≤ ∆(V_{>h−2i}) + f(V_{h−2i}, V_{h−2i+1}) − ab_f(V_{>h−2i}) − ex_f(V_{>h−2i})
< ∆(S) − T(S) − z + 2(∆(V) − z)/h,

where the first inequality holds because E(V_{h−2i+1}, V_{h−2i}) = ∅ (as V_{h−2i+1} ⊆ R and V_{h−2i} ⊆ L), and the second inequality is by the choice of i.

This immediately implies the subroutine that we need in Section 5.1; we simply plug in the parameters correctly.

Lemma 5.2. There exists an algorithm Matching-Or-Cut(G, κ, µ, ǫ). The input is a graph G = (L ∪ R, E) with |E| = m and |L| = n, a positive edge-capacity function κ, and parameters µ ∈ [1, n] and ǫ ∈ (0, 1). In O(m log(n)/ǫ) time the algorithm returns one of the following:

1. A fractional matching M of size µ(1 − ǫ) such that ∀e ∈ E, val(e) ≤ κ(e).
2. Sets S_L ⊆ L and S_R ⊆ R such that κ(S_L, R \ S_R) + |S_R| ≤ µ + |S_L| − n.

Proof. W.l.o.g. we can assume that |L| ≤ |R|, and we treat the edges of G as directed edges from L to R. Let h = 2/ǫ and z = n − µ(1 − ǫ). For each v ∈ L, let ∆(v) = 1 and T(v) = 0. For each v ∈ R, let ∆(v) = 0 and T(v) = 1. Let Π = (∆, T, κ). We call the algorithm from Lemma B.9 with (G, z, h, Π) as input. Observe that the input satisfies all the conditions of Lemma B.9.

If Lemma B.9 returns a preflow f with excess at most z, then by Remark B.1 we obtain a flow of size at least ∆(V) − z = µ(1 − ǫ). Obviously, f(e) ≤ κ(e) for all e. If Lemma B.9 returns a set S ⊂ V(G), then κ(E(S, V \ S)) ≤ ∆(S) − T(S) − z + 2·(∆(V) − z)/h. Let S_L = S ∩ L and S_R = S ∩ R. Note that ∆(S) = |S_L|, T(S) = |S_R|, κ(E(S, V \ S)) = κ(S_L, R \ S_R), and

2(∆(V) − z)/h − z = ǫµ(1 − ǫ) − (n − µ(1 − ǫ)) = µ(1 − ǫ^2) − n ≤ µ − n.

So we have κ(S_L, R \ S_R) ≤ |S_L| − |S_R| + µ − n, as desired.
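The height cap h = 2/ǫ above is what turns a bounded-height flow computation into an approximation guarantee. The following self-contained sketch illustrates this principle on ordinary unit-capacity bipartite matching (it is not the paper's Matching-Or-Cut, which additionally returns a cut certificate): it augments only along alternating paths of at most h edges and stops once no such short augmenting path exists, which by standard flow arguments costs only a (1 − O(1/h)) factor in the matching size.

```python
from collections import deque

def bounded_height_matching(adj, n_left, n_right, h):
    """Augment along alternating paths of at most h edges, found by BFS
    from all free left vertices; stop when no such short path remains.
    adj[u] lists the right neighbors of left vertex u."""
    match_l, match_r = [-1] * n_left, [-1] * n_right
    while True:
        parent, q, end = {}, deque(), None
        for u in range(n_left):
            if match_l[u] == -1:
                parent[('L', u)] = None
                q.append((('L', u), 0))
        while q and end is None:
            (side, x), d = q.popleft()
            if d >= h:          # height cap: ignore longer alternating paths
                continue
            if side == 'L':
                for v in adj[x]:
                    if ('R', v) not in parent:
                        parent[('R', v)] = ('L', x)
                        if match_r[v] == -1:
                            end = ('R', v)   # free right vertex: augment
                            break
                        q.append((('R', v), d + 1))
            else:               # matched right vertex: follow its matching edge
                u = match_r[x]
                if ('L', u) not in parent:
                    parent[('L', u)] = ('R', x)
                    q.append((('L', u), d + 1))
        if end is None:
            return match_l, match_r
        node = end
        while node is not None:  # flip matched/unmatched edges on the path
            side, x = node
            prev = parent[node]
            if side == 'R':
                match_r[x] = prev[1]
                match_l[prev[1]] = x
            node = prev
```

With h = 2/ǫ this mirrors the parameter choice in the proof of Lemma 5.2: the only matchings we might miss are those requiring augmenting paths longer than h.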
B.4.4 Flow for Vertex Cuts

The algorithm below is for finding sparse vertex cuts.

Lemma 5.14. There is a deterministic algorithm Vertex-Congested-Matching(G, A, B, φ, ǫ) that, given a directed n-vertex graph G = (V, E), two disjoint terminal sets A, B ⊂ V where n/4 ≤ |A| ≤ |B|, φ ∈ (0, 1), and ǫ ∈ (0, 1), in Õ(m/φ) time, either

• returns an O(φ log n)-vertex-sparse Ω(ǫ)-vertex-balanced vertex cut (L, S, R), or
• returns a directed (integral) matching M of size at least (1 − ǫ)|A| from A to B such that there is an embedding P that embeds M into G with vertex congestion 1/φ.

Proof. For notational convenience, let us rename the graph in which we want to find a vertex cut as G_v = (V_v, E_v), where A, B ⊂ V_v. We use the standard reduction that splits each vertex into two vertices. More precisely, we construct a bipartite graph G = (V = (L, R), E) as follows. For each vertex v ∈ V_v, we create v_L ∈ L and v_R ∈ R and add a directed edge (v_L, v_R) with capacity ⌊1/φ⌋. For each edge (u, v), we add a directed edge (u_R, v_L) with capacity ∞. For each v ∈ A, we let ∆(v_L) = 1, and set ∆(v) = 0 for all other vertices in G. For each v ∈ B, we let T(v_R) = 1, and set T(v) = 0 for all other vertices in G. Let c : E → ℝ denote the edge capacities of G.

Let Π be the integral flow problem on G defined by (∆, T, c). We call Proposition B.1 with parameter h = (100 log(n/φ))/φ, in O(mh log m) time, and obtain a preflow f in G. Set z = ǫ|A|. There are two cases, depending on whether ex_f(V) ≤ z or not.

If ex_f(V) ≤ z, then by Remark B.1 we can obtain a feasible flow f̂ in G with value at least ∆(V) − z = (1 − ǫ)|A|. Moreover, we can obtain a path decomposition P_f̂ of f̂ in O(∆(V)h) = O(nh) time. By reading the endpoints of the paths in P_f̂, we obtain an integral matching M of size at least (1 − ǫ)|A| from A to B, such that M can be embedded into G_v with vertex congestion ⌊1/φ⌋.
This is because f̂ corresponds to a flow in G_v with vertex congestion at most ⌊1/φ⌋, by the construction of G.

If ex_f(V) > z, then by Proposition B.5, we know |V_h| ≥ ∆(V_h) > z and |V_0| ≥ T(V_0) > z, because ∀v ∈ V, ∆(v), T(v) ≤ 1. By Lemma B.4, V_h, V_{h−2}, V_{h−4}, . . . are subsets of L and V_{h−1}, V_{h−3}, V_{h−5}, . . . are subsets of R. Let c′ be the edge capacity function with c′(e) = c(e) for all e ∈ E(L, R) and c′(e) = 0 otherwise, and let C = ∑_{e∈E} c′(e) = n⌊1/φ⌋. By the ball-growing argument, applied in two directions, there is an index 0 ≤ i < h/2 such that vol_{c′}(V_{h−i}) ≤ min{vol_{c′}(V_{≥h−i}), vol_{c′}(V_{<h−i})} · (20 log C)/h. Otherwise, vol_{c′}(V_{≥h/2}) ≥ (1 + (20 log C)/h)^{h/2} > C = vol_{c′}(V), which is a contradiction. Fix such an i and let S = V_{≥h−i}. Now, we bound the cut size of S:

Claim B.10. c(E(S, V \ S)) ≤ ∆(S) − T(S) + 2·vol_{c′}(V_{h−i}). In particular, c(E(S, V \ S)) is finite.

Proof. First, observe that the total mass from V_{h−i−1} to V_{h−i} is at most the total capacity of the outgoing edges of V_{h−i} plus the sink capacity of V_{h−i}. That is, f(V_{h−i−1}, V_{h−i}) ≤ c(E(V_{h−i}, V)) + T(V_{h−i}) = c(E(V_{h−i}, V)), where T(V_{h−i}) = 0 as V_{h−i} ⊆ L. Now, we have the following:

c(E(S, V \ S)) = c(E(V_{h−i}, V_{h−i−1})) + c(E(S, V \ S) \ E(V_{h−i}, V_{h−i−1}))
≤ c(E(V_{h−i}, V_{h−i−1})) + ∆(S) + f(V_{h−i−1}, V_{h−i}) − ab_f(S) − ex_f(S)
< ∆(S) − T(S) − z + 2·c(E(V_{h−i}, V))
≤ ∆(S) − T(S) + 2·vol_{c′}(V_{h−i}),

where the first inequality is by Proposition B.6. The last inequality follows because V_{h−i} ⊆ L, and so c(E(V_{h−i}, V)) = c(E(V_{h−i}, R)) = c′(E(V_{h−i}, R)) = vol_{c′}(V_{h−i}).

Let (X, Y, Z, F) be the partition of the vertices V_v of G_v defined as follows: if v_L, v_R ∈ S, then v ∈ X; if v_L ∈ S and v_R ∉ S, then v ∈ Y; if v_L, v_R ∉ S, then v ∈ Z; if v_L ∉ S and v_R ∈ S, then v ∈ F.

Claim B.11.
For any $U \subseteq V_v$, let $\mathrm{out}_v(U) = \{ w \notin U \mid \exists (u, w) \in E_v \text{ where } u \in U \}$ and $\mathrm{in}_v(U) = \{ w \notin U \mid \exists (w, u) \in E_v \text{ where } u \in U \}$ denote the sets of out-neighbors and in-neighbors of $U$, respectively. We have $\mathrm{out}_v(X) \subseteq Y$, $\mathrm{in}_v(Z) \subseteq Y$, $\mathrm{out}_v(F) \subseteq X \cup Y$, and $\mathrm{in}_v(F) \subseteq Z \cup Y$.

Proof. Otherwise, there exists an edge $e \in E(R \cap S, L \setminus S)$, which is impossible because $c(e) = \infty$ but $c(E(S, V \setminus S))$ is finite.

Now, we can define the vertex cut in $G_v$ that we will output. Partition $F$ into two halves $F_x, F_z$. Our output is $(X \cup F_x, Y, Z \cup F_z)$. From Claim B.11, observe that $(X \cup F_x, Y, Z \cup F_z)$ is a vertex cut in $G_v$ because $E_v(X \cup F_x, Z \cup F_z) = \emptyset$. As both $|F_x|, |F_z| \ge |F|/3$, the following claim implies that $(X \cup F_x, Y, Z \cup F_z)$ is $O(\phi)$-vertex-sparse:

Claim B.12. $|Y| \le O(\phi) \cdot \min\{|X \cup F|, |Z \cup F|\}$.

Proof. Recall that $\mathrm{vol}_{c'}(V_{h-i}) \le \min\{\mathrm{vol}_{c'}(V_{\ge h-i}), \mathrm{vol}_{c'}(V_{<h-i})\} \cdot \frac{20 \log C'}{h}$ by the choice of $i$. Now, observe that $\mathrm{vol}_{c'}(V_{\ge h-i}) = \mathrm{vol}_{c'}(S) = (2|X| + |Y| + |F|) \cdot \lfloor 1/\phi \rfloor$ and $\mathrm{vol}_{c'}(V_{<h-i}) = (2|Z| + |Y| + |F|) \cdot \lfloor 1/\phi \rfloor$, and $h = \frac{100 \log(n/\phi)}{\phi}$, so we have the following by Claim B.10:
$$|Y| \cdot \lfloor 1/\phi \rfloor \le (|X| + |Y|) - (|X| + |F|) + \min\{2|X| + |Y| + |F|, 2|Z| + |Y| + |F|\} \cdot 2 \lfloor 1/\phi \rfloor \cdot \frac{20 \log(n/\phi)}{100 \log(n/\phi)/\phi} \le |Y| + \min\{2|X| + |Y| + |F|, 2|Z| + |Y| + |F|\}.$$
So $|Y| (\lfloor 1/\phi \rfloor - 2) \le 2\min\{|X \cup F|, |Z \cup F|\}$, and so $|Y| \le O(\phi) \cdot \min\{|X \cup F|, |Z \cup F|\}$.

It remains to show that $(X \cup F_x, Y, Z \cup F_z)$ is $\Omega(\epsilon)$-vertex-balanced. Observe that $|X \cup Y \cup F| \ge |S|/2 > z/2$ and $|Z \cup Y \cup F| \ge |V \setminus S|/2 > z/2$. As we have $|Y| \le O(\phi) \cdot \min\{|X \cup F|, |Z \cup F|\}$, it follows that $|X \cup F_x|, |Z \cup F_z| = \Omega(z) = \Omega(\epsilon n)$, assuming that $\phi$ is smaller than some constant.

C Proof of Proposition 4.1

We prove Proposition 4.1 in this section. For convenience, we restate the proposition below.

Proposition 4.1 (see [Lac11, CHI+16]). Let $G = (V, E)$ be a decremental graph.
Let $\mathcal{A}$ be a data structure that maintains a monotonically growing set $S \subseteq V$, reports after every adversarial update any additions made to $S$, maintains the SCCs of $G \setminus S$ explicitly in total update time $T(m, n)$, and supports SCC path queries in $G \setminus S$ in almost-path-length query time. Then there exists a data structure $\mathcal{B}$ that maintains the SCCs of $G$ explicitly and supports SCC path queries in $G$ (in almost-path-length query time). The total update time is $O(T(m,n) + m|S|\log n)$, where $|S|$ refers to the final size of the set $S$.

Proof. In order to prove the proposition, let us first define the following notion.

Definition C.1. For any graph $H$ whose SCCs are the sets $C_1, C_2, \dots, C_k$, the condensation $\mathrm{Cond}(H)$ of $H$ is the graph obtained from $H$ by contracting the vertices of each SCC $C_i$ into a supervertex, i.e. the graph $H / \{C_1, C_2, \dots, C_k\}$.

We then use the following claim, which extends a condensation of the subgraph $G \setminus X$ to a condensation of $G \setminus (X \setminus \{x\})$ for some $x \in X$. This is the key ingredient in our data structure. We defer the proof to the end of the section.

Claim C.2. There exists a data structure $\mathcal{C}$ that, given a decremental graph $G$, an increasing set $X \subseteq V$, a (dynamic) condensation $\mathrm{Cond}(G \setminus X)$ and a vertex $x \in X$, maintains the condensation $\mathrm{Cond}(G \setminus (X \setminus \{x\}))$ in total update time $O(m \log n)$. The data structure can return a path in the condensation $\mathrm{Cond}(G \setminus X)$ from or to $x$ for every vertex $y$ in the same SCC, in time linear in the number of edges. The path is strictly contained in the SCC of $x$.

We run $\mathcal{A}$ on $G$ to monitor the SCCs of the graph $G \setminus S$, which allows us to maintain the condensation $\mathrm{Cond}(G \setminus S)$. We then arbitrarily order the vertices $s_1, s_2, \dots, s_k$ in $S$, and for $i = 1, \dots, k$ we take the vertex $s_i$ and build a data structure as described in Claim C.2 that runs on the condensation $\mathrm{Cond}(G \setminus (S \setminus \{s_1, s_2, \dots
, s_{i-1}\}))$ for the vertex $s_i$, in order to maintain the condensation $\mathrm{Cond}(G \setminus (S \setminus \{s_1, s_2, \dots, s_i\}))$. Thus, the condensation maintained by the data structure at the final vertex $s_k$ is the condensation of $G$, which has a supernode for every SCC with the same underlying vertex set.

To maintain this data structure, we pass edge deletions in $G$ to the data structures at the vertices of $S$ in their order, which allows updates to percolate up and ensures that the final data structure again maintains the condensation of $G$. Whenever a vertex $y$ is added to $S$ by $\mathcal{A}$, we prepend $y$ to the vertices $s_1, s_2, \dots, s_t$ and build a new data structure as described in Claim C.2 for $y$; it runs on the condensation $\mathrm{Cond}(G \setminus S)$ and is now responsible for maintaining the condensation of $G \setminus (S \setminus \{y\})$ and for communicating changes to $s_1$. It is not hard to see that $s_1$ thus runs on the condensation of the same underlying graph as before.

The total update time is dominated by the time to maintain the condensation at each vertex $s \in S$. Since each such data structure runs in total update time $O(m \log n)$, and since we only run a single instance of $\mathcal{A}$, we derive a total update time of $O(T(m,n) + |S| m \log n)$, as desired.

To compute a path between any two vertices $x, y$ in the same SCC of $G$, we can straightforwardly locate the first condensation in which they are contained in the same supernode (for example, by using a least-common-ancestor data structure). If this condensation is maintained by data structure $\mathcal{A}$, we directly query $\mathcal{A}$. Otherwise, there is some vertex $s_i \in S$ associated with the condensation, and we can query its data structure. Whilst this only returns a path in $\mathrm{Cond}(G \setminus (S \setminus \{s_1, s_2, \dots, s_{i-1}\}))$ by Claim C.2, we can then check each supernode on the returned path, and whenever the two path endpoints inside the same supernode differ, we can recursively find a path between these endpoints.
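For concreteness, the condensation of Definition C.1 can be computed statically with two DFS passes (Kosaraju's algorithm). This is only a static sketch to fix the notion; the point of Claim C.2 is to maintain the condensation dynamically, which the snippet below does not attempt. The function name is hypothetical:

```python
from collections import defaultdict

def condensation(vertices, edges):
    """Contract each SCC of H = (vertices, edges) into a supervertex
    (Kosaraju's two-pass algorithm). Returns a map vertex -> SCC id
    and the edge set of Cond(H)."""
    adj, radj = defaultdict(list), defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        radj[v].append(u)
    # Pass 1: record vertices in order of DFS completion (iterative DFS).
    visited, order = set(), []
    for s in vertices:
        if s in visited:
            continue
        visited.add(s)
        stack = [(s, iter(adj[s]))]
        while stack:
            node, it = stack[-1]
            for w in it:
                if w not in visited:
                    visited.add(w)
                    stack.append((w, iter(adj[w])))
                    break
            else:
                order.append(node)
                stack.pop()
    # Pass 2: DFS on the reverse graph in reverse completion order;
    # each tree found is exactly one SCC.
    comp = {}
    for s in reversed(order):
        if s in comp:
            continue
        comp[s] = s  # use the representative as the SCC id
        stack = [s]
        while stack:
            u = stack.pop()
            for w in radj[u]:
                if w not in comp:
                    comp[w] = s
                    stack.append(w)
    scc_edges = {(comp[u], comp[v]) for u, v in edges if comp[u] != comp[v]}
    return comp, scc_edges
```

On the graph $1 \to 2 \to 3 \to 1$, $3 \to 4$, $4 \leftrightarrow 5$, this contracts $\{1,2,3\}$ and $\{4,5\}$ and leaves a single supervertex edge between them.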
Since we find the paths strictly inside the induced SCCs at lower levels, no endpoint on the final path is visited more than once; thus we can return a simple path between the vertices $x, y$ in time almost-linear in the number of edges.

Finally, we prove Claim C.2.

Claim C.2. There exists a data structure $\mathcal{C}$ that, given a decremental graph $G$, an increasing set $X \subseteq V$, a (dynamic) condensation $\mathrm{Cond}(G \setminus X)$ and a vertex $x \in X$, maintains the condensation $\mathrm{Cond}(G \setminus (X \setminus \{x\}))$ in total update time $O(m \log n)$. The data structure can return a path in the condensation $\mathrm{Cond}(G \setminus X)$ from or to $x$ for every vertex $y$ in the same SCC, in time linear in the number of edges. The path is strictly contained in the SCC of $x$.

Proof. We are given $\mathrm{Cond}(G \setminus X)$ of a decremental graph $G \setminus X$ for a set $X \subseteq V$, and a vertex $x \in X$. For every vertex $v \in V \setminus X$ we monitor the in-degree of $v$ in the graph $H'$ initialized to $\mathrm{Cond}(G \setminus X) \cup E(x, V \setminus X)$, and if the in-degree of such a vertex drops to $0$, we remove $v$ and its outgoing edges from $H'$. This might cause the in-degree of additional vertices to drop to $0$. Similarly, we monitor for every vertex $v$ its out-degree in the graph $H''$ initialized to $\mathrm{Cond}(G \setminus X) \cup E(V \setminus X, x)$, and remove $v$ and its incoming edges from $H''$ once $v$ no longer has any outgoing edges. If a vertex $y$ is added to $X$ during the algorithm, then we simply remove $y$ with all incident edges from $H'$, $H''$ and $H'''$.

The condensation $\mathrm{Cond}(G \setminus (X \setminus \{x\}))$ is then derived by contracting all vertices of the condensation that have non-zero in- and out-degree in $H'$ and $H''$, together with the vertex $x$, into a new SCC supervertex. To see that this correctly maintains $\mathrm{Cond}(G \setminus (X \setminus \{x\}))$, observe that the graph $\mathrm{Cond}(G \setminus X)$ is a DAG, and therefore every SCC in the graph $H''' = \mathrm{Cond}(G \setminus X) \cup E(x, V \setminus X) \cup E(V \setminus X, x)$ has to contain $x$, since every cycle has to go through $x$.
Further, it is not hard to establish by induction that a vertex $v$ is removed from $H'$ if and only if there is no path from $x$ to $v$ in $H'''$, and similarly that $v$ is removed from $H''$ if and only if there is no path from $v$ to $x$ in $H'''$. Thus, $v$ remains in both graphs $H'$ and $H''$ if and only if it is strongly connected to $x$. This establishes correctness.

To obtain the upper bound of $O(m \log n)$ on the running time, observe that $H'''$ is a multigraph whose vertices slowly decompose, since $G$ is decremental and therefore the underlying condensation $\mathrm{Cond}(G \setminus X)$ has an increasing supervertex set. However, every time a supervertex is split into multiple vertices, the operation can be done in time linear in the number of edges incident to the new supernodes that contain at most half the number of vertices of the previous supernode that they were part of. Copying these edges can thus be done in $O(m \log n)$ time, since an edge is copied to a new supervertex only when its supervertex halves in size, which happens at most $O(\log n)$ times. Further, after every edge deletion in $G$, we have to check the in- and out-degrees of the supernodes containing the endpoints, and every edge might be deleted at some point; this can be implemented straightforwardly in at most $O(m)$ total update time, which is subsumed in the total update time of $O(m \log n)$.

To return a path from $x$ to any other vertex $y$ in the same SCC as $x$ in $\mathrm{Cond}(G \setminus X)$, we can maintain a dynamic tree rooted at $x$ in which every vertex still in $H'$ keeps one incoming edge. It is not hard to see that this dynamic tree is indeed a spanning tree, since the graph $H'$ is a DAG. Thus, the root-to-$y$ path is a path in $\mathrm{Cond}(G \setminus X)$ that can be extracted in time linear in the number of its edges.
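The pruning rule for $H'$ described above is, in essence, decremental single-source reachability in a DAG: a vertex becomes unreachable from $x$ exactly when cascading removals drive its in-degree to zero. A minimal sketch of this rule (class and method names are hypothetical, not from the paper):

```python
from collections import defaultdict

class ReachableFromRoot:
    """Maintain, under edge deletions, the set of vertices reachable from
    `root` in a DAG (here: Cond(G \\ X) plus the edges (root, v)).
    Invariant: a non-root vertex is unreachable iff cascading removals
    bring its in-degree down to zero."""

    def __init__(self, edges, root):
        self.root = root
        self.out = defaultdict(set)
        self.indeg = defaultdict(int)
        self.alive = {root}
        for u, v in edges:
            self.out[u].add(v)
            self.indeg[v] += 1
            self.alive.add(u)
            self.alive.add(v)

    def delete_edge(self, u, v):
        if v not in self.out[u]:
            return
        self.out[u].discard(v)
        self.indeg[v] -= 1
        self._cascade([v])

    def _cascade(self, queue):
        while queue:
            w = queue.pop()
            if w != self.root and w in self.alive and self.indeg[w] == 0:
                # w lost its last incoming edge: it is unreachable from root.
                self.alive.discard(w)
                for z in self.out[w]:
                    self.indeg[z] -= 1
                    queue.append(z)
                self.out[w].clear()
```

The induction in the proof is visible here: a vertex is discarded only after all of its in-neighbors have been discarded, so `alive` always equals the set of vertices reachable from `root`.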
Since every edge might be added to the dynamic tree once at some stage until it is deleted from $G$, the total number of insertions and deletions to the dynamic tree is at most $O(m)$. Since a dynamic tree can be implemented with $O(\log n)$ time per insertion and deletion, the total running time is again subsumed by $O(m \log n)$.

D Proof of Theorem 4.4

For the sake of convenience, we restate the theorem proved in this section.

Theorem 4.4. There is a data structure Forest-From-Witness$(G, W, \phi)$ that takes as input an $n$-vertex $m$-edge graph $G = (V, E)$, a set $W \subseteq V$ with $|W| \ge |V|/2$, and a parameter $\phi > 0$. The algorithm must process two kinds of updates. The first deletes any edge $e$ from $E$; the second removes a vertex from $W$ (but the vertex remains in $V$), while always obeying the promise that $|W| \ge |V|/2$. The data structure must maintain a forest of trees $F^{out}$ such that every tree $T \in F^{out}$ has the following properties: all edges of $T$ are in $E(G)$; $T$ is rooted at a vertex of $W$; every edge in $T$ is directed away from the root; and $T$ has depth $\widehat{O}(1/\phi)$. The data structure also maintains a forest $F^{in}$ with the same properties, except that each edge in $T$ is directed towards the root.

At any time, the data structure may perform the following operation: it finds a $\widehat{O}(\phi)$-sparse vertex cut $(L, S, R)$ with $W \cap (L \cup S) = \emptyset$ and replaces $G$ with $G[R]$. (This operation is NOT an adversarial update, but is rather the responsibility of the data structure.) The data structure maintains the invariant that every $v \in V$ is present in exactly one tree from $F^{out}$ and exactly one from $F^{in}$; given any $v$, the data structure can report the roots of these trees in $O(\log n)$ time. (Note that as $V$ may shrink over time, this property only needs to hold for vertices $v$ in the current set $V$.) The total time spent processing updates and performing sparse-cut operations is $\widehat{O}(m/\phi)$.

Proof.
To implement the data structure Forest-From-Witness$(G, W, \phi)$, we use the following data structure internally.

Theorem D.1 (ES-tree, see [ES81, HK99]). Given a directed decremental graph $G = (V, E)$, a fixed vertex $s \in V$, and a depth threshold $\delta \ge 1$, there exists a deterministic data structure that maintains explicitly the shortest path tree from $s$ in $G$ truncated at distance $\delta$ (that is, the shortest path tree in the graph induced by the vertices at distance at most $\delta$ from $s$), in total time $O(m\delta)$.

Instead of running it on $G$ directly, we introduce a new graph $G_s$ that is initialized to $G$ plus an additional node $s$ along with an edge to and from $s$ for every vertex $w \in W$ (i.e. there are the anti-parallel edges $(s, w)$ and $(w, s)$ in $G_s$). Throughout the algorithm, we update $G_s$ with edge and vertex deletions (i.e. $G_s$ is a decremental graph), such that $G_s[V]$ remains at all stages a subgraph of $G$.

Throughout, we run an ES-tree $\mathcal{E}$ from $s$ on $G_s$ to depth $\lceil 1/\phi \rceil + 1$ and an ES-tree $\mathcal{E}^{(rev)}$ from $s$ on $G_s^{(rev)}$ to depth $\lceil 1/\phi \rceil + 1$. We denote the corresponding truncated shortest-path trees by $T$ and $T^{(rev)}$.

Now, to update $G_s$, we pass edge deletions in $G$ directly to $G_s$, and whenever a vertex $w$ is deleted from the set $W$, we remove the edges $(s, w)$ and $(w, s)$ from $G_s$. Additionally, whenever a vertex $r \in V$ is no longer present in the tree $T$ or $T^{(rev)}$, we run a separator procedure that prunes out a part of the graph containing $r$ using a vertex-sparse separator. A static procedure to compute such a separator is stated below.

Lemma D.2 (Balanced Separator, see Lemma 6.1 in [BPWN19]). Given a graph $G = (V, E)$, a vertex $r \in V$, and a positive integer $d$ such that the ball $B_G(r, d) = \{v \in V \mid \mathrm{dist}_G(r, v) \le d\}$ contains at most $n/2$ vertices, there exists a deterministic algorithm that outputs two disjoint vertex sets $S^{Sep}, V^{Sep} \subseteq V$ with $r \in V^{Sep}$ such that

1. $\forall v \in V^{Sep} \cup S^{Sep}$, we have $\mathrm{dist}_G(r, v) \le d$, and

2.
the cut $(V^{Sep}, S^{Sep}, V \setminus (V^{Sep} \cup S^{Sep}))$ is a $\widehat{O}(1/d)$-vertex-sparse cut.

The running time of the procedure is bounded by $O(|E(V^{Sep})|)$.

Given this separator procedure, whenever a vertex $r$ is removed from the tree $T$ by data structure $\mathcal{E}$, its distance from $s$ exceeds $\lceil 1/\phi \rceil + 1$, and since every vertex in $W$ is at distance $1$ from $s$, the distance between $r$ and any vertex $w \in W$ is at least $\lceil 1/\phi \rceil + 1$. We then invoke the separator procedure from Lemma D.2 on $G_s$ from $r$ with depth parameter $d = \lceil 1/\phi \rceil$. Since this ensures that no vertex of $W$ is in $B_{G_s}(r, d) = \{v \in V \mid \mathrm{dist}_{G_s}(r, v) \le d\}$, the ball contains at most $|V(G_s) \setminus W| \le n/2$ vertices. Lemma D.2 thus outputs sets $S^{Sep}, V^{Sep}$ such that $(V^{Sep}, S^{Sep}, V \setminus (V^{Sep} \cup S^{Sep}))$ is a $\widehat{O}(\phi)$-vertex-sparse cut, which we output (to do so efficiently, we only write down $V^{Sep}$ and $S^{Sep}$), and then remove the vertices $S^{Sep} \cup V^{Sep}$ with all incident edges from $G_s$, which leaves the graph $G[V \setminus (V^{Sep} \cup S^{Sep})]$ as specified by the theorem. Since this only removes vertices not in $W$, it satisfies the requirement of the theorem regarding the subgraph that is worked upon.

Analogously, whenever a vertex $r$ is removed from the tree $T^{(rev)}$ by data structure $\mathcal{E}^{(rev)}$, we find a separator using the procedure from Lemma D.2 on the graph $G_s^{(rev)}$ from $r$ with depth parameter $d = \lceil 1/\phi \rceil$. The same line of reasoning applies regarding the soundness of the parameters.

Finally, let us describe how to maintain the forest $F^{out}$ (the maintenance of $F^{in}$ is analogous). Since the shortest path tree $T$ is maintained explicitly by the ES-tree algorithm, there are at most $\widehat{O}(m/\phi)$ edge changes to the tree. We can thus maintain $F^{out}$ to consist of the edges of the shortest-path tree $T$ without the vertex $s$ and its incident edges, in $\widehat{O}(m/\phi)$ time. Clearly, each such tree $T' \in F^{out}$ is rooted at a vertex $w \in W$, since $s$ only has edges to vertices of $W$. Further, it is clear that $F^{out}$ spans exactly the vertices in $V(G_s) \setminus \{s\} = V(G)$.
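The ES-tree of Theorem D.1, which the construction above runs twice (once on $G_s$ and once on $G_s^{(rev)}$), can be sketched as follows. This is a minimal unweighted sketch in which levels are repaired by recomputing the best parent level, one standard variant of Even-Shiloach; the class and method names are hypothetical:

```python
from collections import defaultdict

class ESTree:
    """Even-Shiloach tree: maintain BFS levels from s up to depth delta in a
    directed graph under edge deletions. Levels only ever increase, which is
    what gives the O(m * delta) total update time."""
    INF = float('inf')

    def __init__(self, edges, s, delta):
        self.delta = delta
        self.out, self.inn = defaultdict(set), defaultdict(set)
        for u, v in edges:
            self.out[u].add(v)
            self.inn[v].add(u)
        self.level = defaultdict(lambda: ESTree.INF)
        self.level[s] = 0
        frontier = [s]
        while frontier:  # initial BFS, truncated at depth delta
            nxt = []
            for u in frontier:
                if self.level[u] == delta:
                    continue
                for v in self.out[u]:
                    if self.level[v] == ESTree.INF:
                        self.level[v] = self.level[u] + 1
                        nxt.append(v)
            frontier = nxt

    def delete_edge(self, u, v):
        self.out[u].discard(v)
        self.inn[v].discard(u)
        self._repair({v})

    def _repair(self, dirty):
        while dirty:
            w = dirty.pop()
            if self.level[w] in (0, ESTree.INF):
                continue
            best = min((self.level[p] for p in self.inn[w]), default=ESTree.INF)
            new = best + 1 if best + 1 <= self.delta else ESTree.INF
            if new != self.level[w]:
                self.level[w] = new   # levels only ever increase
                dirty |= self.out[w]  # children may have to rise as well

    def dist(self, v):
        return self.level[v]
```

A vertex whose level exceeds `delta` is marked unreachable (level $\infty$), which is exactly the "removed from the tree $T$" event that triggers the separator procedure above.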
Using a dynamic cut-link tree data structure to implement the trees, we can further straightforwardly answer, for every vertex $u \in V(G)$, the query of which vertex $w \in W$ is the root of its tree, in time $O(\log n)$. This completes the proof.

E Short-path Oracles on Expanders

In this section, we prove the following theorem.

Theorem 4.5. There is a deterministic data structure Path-Inside-Expander$(W)$ that takes as input an $n$-vertex $m$-edge $1/n^{o(1)}$-expander $W$ subject to decremental updates. Each update can delete an arbitrary batch of vertices and edges from $W$, but must obey the promise that the resulting graph remains a $\phi$-expander. Given any query $u, v \in V(W)$, the algorithm returns in $n^{o(1)}$ time a directed simple path $P_{uv}$ from $u$ to $v$ and a directed simple path $P_{vu}$ from $v$ to $u$, both of length at most $n^{o(1)}$. The total update time of the data structure is $\widehat{O}(m)$.

The idea of this section is completely identical to the analogous subroutine for undirected graphs by Chuzhoy and Saranurak [CS20]. In particular, Appendix E.2 is copied from that paper with small changes to make it work for directed graphs. As we only translate their ideas to our setting, and plug in our primitives for directed expanders instead of the primitives for undirected expanders, we do not claim any contribution in this part.

E.1 Embedding A Small Witness

First, we define a variant of the witness from Definition 3.5 using edge congestion instead of vertex congestion. As we will never benefit from allowing the witness $W$ to be a weighted graph, as we need in Section 5, nor from allowing some vertices to have high (unweighted) degree, we restrict the witness in this section to be an unweighted graph with small maximum degree. Moreover, we require the embedding of the witness to be short (as this is the point of this section).

Definition E.1 (Witness with Edge Congestion).
We say that $W$ is a $\phi$-edge-witness of $G$ if $V(W) \subseteq V(G)$, $W$ is an unweighted $\widehat{\Omega}(1)$-(edge)-expander with maximum degree $O(\log |V(W)|)$, and there is an embedding that embeds $W$ into $G$ with edge-congestion $1/\phi$ and length $\tilde{O}(1/\phi)$.

We will show an algorithm for finding an $\Omega(\phi)$-edge-witness $W$ of a $\phi$-expander $G$. In our application, $W$ will be "small" in the sense that $|V(W)| \ll |V(G)|$. To find such a witness, we again employ the cut-matching game from Theorem 7.1. The lemma below provides the algorithm for the matching player:

Lemma E.2 (Matching Embedder on Expanders). There is an algorithm TerminalMatching$(G, A, B, \phi)$ with the following inputs: a parameter $\phi \in (0, 1)$, a directed unweighted $\phi$-expander $G = (V, E)$ with $n$ vertices and $m$ edges, and terminal sets $A, B \subset V$ where $|A| = |B|$. In $\tilde{O}(m/\phi)$ time, the algorithm returns a perfect (integral) matching $M$ from $A$ to $B$ and an embedding $\mathcal{P}$ that embeds $M$ into $G$ with edge-congestion $O(\log(n)/\phi)$ and length $O(\log(n)/\phi)$.

Proof. We first define a flow problem $\Pi = (\Delta, T, c)$ on $G$ as follows. For all $v \in A$, $\Delta(v) = 1$; otherwise $\Delta(v) = 0$. For all $v \in B$, $T(v) = 1$; otherwise $T(v) = 0$. Let $c(e) = 2/\phi$ for all $e \in E$. Let $C = \sum_{e \in E} c(e)$. Let $z = 0$ and $h = \frac{40 \log C}{\phi}$. Now, we call Lemma B.8 with $(G, z, h, \Pi)$ as input, in time $O(mh\log m) = \tilde{O}(m/\phi)$.

We claim that the algorithm cannot return a cut $S$. Otherwise, there is a set $S$ where $c(E(S, V \setminus S)) \le \Delta(S) - T(S) - z + \min\{\mathrm{vol}_c(S), \mathrm{vol}_c(V \setminus S)\} \cdot \frac{10 \log C}{h}$. Note that $\Delta(S) - T(S) = T(V \setminus S) - \Delta(V \setminus S)$. As $\Delta(S) \le |S| \le \mathrm{vol}(S)$ and $T(V \setminus S) \le |V \setminus S| \le \mathrm{vol}(V \setminus S)$, we have $\Delta(S) - T(S) \le \min\{\mathrm{vol}(S), \mathrm{vol}(V \setminus S)\}$. Also, note that
$$\min\{\mathrm{vol}_c(S), \mathrm{vol}_c(V \setminus S)\} \cdot \frac{10 \log C}{h} = \min\{\mathrm{vol}(S), \mathrm{vol}(V \setminus S)\} \cdot \frac{2}{\phi} \cdot \frac{10 \log C}{h} = \min\{\mathrm{vol}(S), \mathrm{vol}(V \setminus S)\}/2.$$
So, we have $|E(S, V \setminus S)| = \frac{\phi}{2} \cdot c(E(S, V \setminus S)) \le \frac{\phi}{2} \cdot (\min\{\mathrm{vol}(S), \mathrm{vol}(V \setminus S)\} + \min\{\mathrm{vol}(S), \mathrm{vol}(V \setminus S)\}/2) < \phi \min\{\mathrm{vol}(S), \mathrm{vol}(V \setminus S)\}$, which contradicts the fact that $G$ is a $\phi$-expander.

By Remark B.1, we thus obtain a feasible flow $f$ and its path decomposition $\mathcal{P}_f$ in time $O(\Delta(V) h) = \tilde{O}(m/\phi)$. Let $\mathcal{P}^s_f$ contain all paths in $\mathcal{P}_f$ whose length is at most $2h$. As $\sum_{P \in \mathcal{P}_f} \mathrm{val}(P)|P| \le \Delta(V) h$ from Lemma B.2, $|\mathcal{P}^s_f| \ge |\mathcal{P}_f|/2 \ge |A|/2$. By reading off the endpoints of the paths in $\mathcal{P}^s_f$, we obtain an integral matching $\hat{M}$ from $\hat{A} \subseteq A$ to $\hat{B} \subseteq B$ of size at least $|A|/2$ that can be embedded into $G$ with congestion $2/\phi$.

As we want a perfect matching, we set $A \leftarrow A \setminus \hat{A}$, $B \leftarrow B \setminus \hat{B}$, $M \leftarrow M \cup \hat{M}$, and then repeat the process $\log m$ times. At the end, we obtain an integral perfect matching $M$ from $A$ to $B$ that can be embedded into $G$ with $2\log(m)/\phi$ edge-congestion and $2h$ length. We also obtain its corresponding embedding.

Now, we are ready to apply the cut-matching game for finding a small $\tilde{\Omega}(\phi)$-edge-witness in a $\phi$-expander.

Lemma E.3 (Witness Embedder on Expanders). There is an algorithm TerminalWitness$(G, T, \phi)$ with the following parameters: a parameter $\phi \in (0, 1)$, a directed unweighted $\phi$-expander $G = (V, E)$ with $n$ vertices and $m$ edges, and a terminal set $T \subset V$. In $\widehat{O}(m/\phi)$ time, the algorithm finds an $\Omega(\phi/\log^2(n))$-edge-witness $W$ of $G$ where $V(W) = T$, and its corresponding embedding $\mathcal{P}$. Let $\alpha_{wit} = 1/n^{o(1)}$ be such that $W$ is an $\alpha_{wit}$-expander and the running time is at most $O(m/(\alpha_{wit}\phi))$ (we will use this parameter in other lemmas).

Proof. We perform a cut-matching game from Theorem 7.1 to build an expander $W$ on $T$. Starting from round $i = 1$ of the game, Theorem 7.1 gives us $A_i, B_i \subset T$ where $|A_i| = |B_i| \ge |T|/4$. Then, we call TerminalMatching$(G, A_i, B_i, \phi)$ and TerminalMatching$(G, B_i, A_i, \phi)$ to obtain integral directed matchings $\overrightarrow{M_i}$ and $\overleftarrow{M_i}$ that match $A_i$ to $B_i$ and back.
We set $W \leftarrow W \cup \overrightarrow{M_i} \cup \overleftarrow{M_i}$ and proceed with round $i+1$. After $O(\log |T|)$ rounds, $W$ is an unweighted $\widehat{\Omega}(1)$-expander with maximum degree $O(\log |T|)$. As each $\overrightarrow{M_i}$ and $\overleftarrow{M_i}$ can be embedded into $G$ with $O(\log(n)/\phi)$ edge-congestion and length, $W$ can be embedded into $G$ with $O(\log^2(n)/\phi)$ edge-congestion and $O(\log(n)/\phi)$ length. Therefore, $W$ is an $\Omega(\phi/\log^2(n))$-edge-witness with $V(W) = T$. Note that we also explicitly have the embedding of $W$. The total running time is $\tilde{O}(m/\phi) + \widehat{O}(|T|) = \widehat{O}(m/\phi)$ by Lemma E.2 and Theorem 7.1.

E.2 A Recursive Scheme

Now, we are ready to prove Theorem 4.5. Let $\alpha_{wit} = 1/n^{o(1)}$ be the conductance bound from Lemma E.3. For any $L \ge 1$, let $\gamma_L(\phi) = \phi^{O(L)}$ be the conductance bound from Theorem 6.1. Below, we say that a vertex set $S$ is incremental if vertices in $S$ can never leave $S$ as time progresses.

Theorem E.4. For any numbers $q \ge 1$ and $L \ge 1$ with $L = 3^q$, there is a deterministic algorithm that, given an $m$-edge $n$-vertex $\alpha_{wit}$-expander $G$ undergoing a sequence of edge deletions of length $\gamma_L(\alpha_{wit})\mathrm{vol}(G)/n^{1/L}$, maintains an incremental vertex set $P$ using $O(m^{1+1/q}/\gamma_L^{O(q)}(\alpha_{wit}))$ total update time such that

• $\mathrm{vol}_{G^{(0)}}(P) = O(t n^{1/L}/\gamma_L(\alpha_{wit}))$ after the $t$-th deletion, where $G^{(0)}$ denotes $G$ before any deletion, and

• given $u, v \in V(G) - P$, it returns a $u$-$v$ simple path $Q$ in $G[V(G^{(0)}) - P]$ of length $1/\gamma_L^{O(q)}(\alpha_{wit})$ in time $1/\gamma_L^{O(q)}(\alpha_{wit})$.

Proof of Theorem 4.5 from Theorem E.4. Let $q = \frac{1}{2c}\log_3\log_{1/\alpha_{wit}}(n) = \omega(1)$, where $c$ is the constant below. Observe that $1/\gamma_L^{O(q)}(\alpha_{wit}) = n^{o(1)}$. This is because $1/\gamma_L^{O(q)}(\alpha_{wit}) = 1/(\alpha_{wit}^{O(L)})^{O(q)} = (1/\alpha_{wit})^{3^{cq}}$ for some constant $c$. So we have $3^{cq} = 3^{\frac{1}{2}\log_3\log_{1/\alpha_{wit}}(n)} = (\log_{1/\alpha_{wit}}(n))^{1/2}$, and so
$$(1/\alpha_{wit})^{3^{cq}} = (1/\alpha_{wit})^{(\log_{1/\alpha_{wit}}(n))^{1/2}} = n^{\frac{\log(1/\alpha_{wit})}{\log n}\cdot(\log_{1/\alpha_{wit}}(n))^{1/2}} = n^{1/(\log_{1/\alpha_{wit}}(n))^{1/2}} = n^{o(1)}.$$
So the total update time of Theorem E.4 is $\widehat{O}(m)$ and the query time is $n^{o(1)}$.
This implies Theorem 4.5.

Proof of Theorem E.4. The algorithm has $q$ levels. For each $1 \le i \le q$, we describe the implementation of GrowTree$(i)$, Delete$(i, e)$, and Query$(i, u, v)$ in Algorithm 1, Algorithm 2, and Algorithm 3, respectively. The algorithm is recursive. Recall that we are given an input $\alpha_{wit}$-expander $G$ with $m$ initial edges and $n$ vertices. We let $m$, $n$, and $\alpha_{wit}$ be global variables that do not change when we recurse.

Now, we describe how we call each subroutine given an input $G$ and an update sequence. We initialize $G_q^{(0)} = G$ and call GrowTree$(q)$. We initialize the expander pruning algorithm from Theorem 6.1 and maintain the set $P_q \subseteq V(G_q^{(0)})$. Whenever an edge $e$ is deleted from $G_q^{(0)}$, we call Delete$(q, e)$ and update the set $P_q$ using Theorem 6.1. Recall that $P_q$ only grows. Let $G_q^{(d)}$ denote $G_q^{(0)}$ after $d$ edge deletions. As $d$ increases, we maintain $G_q = G_q^{(d)}[V(G_q^{(0)}) - P_q]$. That is, $G_q$ is obtained from $G_q^{(0)}$ after deleting all edges deleted by the adversary and deleting all vertices in $P_q$. By Theorem 6.1, $G_q$ is always a $\gamma_L(\alpha_{wit})$-expander and $P_q$ has volume at most $O(d n^{1/L}/\gamma_L(\alpha_{wit}))$ after $d$ deletions. We let $P = P_q$ be the output set of the algorithm for Theorem E.4. This satisfies the first guarantee of the output of Theorem E.4.

Given a query $u, v \in V(G) - P$, we can return a $u$-$v$ simple path in $G[V(G^{(0)}) - P]$ of length $1/\gamma_L^{O(q)}(\alpha_{wit})$ in time $1/\gamma_L^{O(q)}(\alpha_{wit})$ by doing the following. First, we call Query$(q, u, v)$, which returns a $u$-$v$ path $Q'$ of length $1/\gamma_L^{O(q)}(\alpha_{wit})$ in $O(|Q'|)$ time (this will be proved in Lemma E.6). However, $Q'$ might not be simple, so we extract a simple $u$-$v$ path $Q$ from $Q'$ in time $|Q'| \le 1/\gamma_L^{O(q)}(\alpha_{wit})$. This satisfies the second guarantee of the output of Theorem E.4.

Algorithm 1 GrowTree$(i)$

Assert: $G_i$ is a $\gamma_L(\alpha_{wit})$-expander.

1. If $i = 1$, compute a shortest path tree $T$ rooted at an arbitrary vertex. Then, return.

2.
Build the subdivided graph $G'_i$ obtained from $G_i$ by subdividing each edge $e = (u, v) \in E(G_i)$ into $(u, x_e)$ and $(x_e, v)$.

3. Set $F_{i-1}$ to be an arbitrary set of edges of $G_i$ of size $m^{(i-1)/q}$. Let $X_{F_{i-1}} = \{x_e \in V(G'_i) \mid e \in F_{i-1}\}$.

4. Using Lemma E.3, compute an $\Omega(\gamma_L(\alpha_{wit})/\log^2 n)$-edge-witness $G_{i-1}^{(0)}$ of $G'_i$ where $V(G_{i-1}^{(0)}) = X_{F_{i-1}}$ and $G_{i-1}^{(0)}$ is an $\alpha_{wit}$-expander. Let $\mathcal{P}_{i-1}$ be an embedding of $G_{i-1}^{(0)}$.

5. Initialize the expander pruning algorithm from Theorem 6.1 on $G_{i-1}^{(0)}$ and maintain $P_{i-1} \subseteq V(G_{i-1}^{(0)})$.

6. Let $G_{i-1}^{(d)}$ denote $G_{i-1}^{(0)}$ after $d$ edge deletions. As $d$ increases, maintain $G_{i-1} = G_{i-1}^{(d)}[V(G_{i-1}^{(0)}) - P_{i-1}]$. By Theorem 6.1, $G_{i-1}$ is always a $\gamma_L(\alpha_{wit})$-expander.

7. Initialize two ES-trees $T_i^{in}$ and $T_i^{out}$ in $G'_i$ rooted at $V(G_{i-1})$, of depth $O(\log(n)/\gamma_L(\alpha_{wit}))$. Edges of $T_i^{in}$ and $T_i^{out}$ are directed towards and away from $V(G_{i-1})$, respectively.

8. Call GrowTree$(i-1)$.

Algorithm 2 Delete$(i, e)$ where $e \in E(G_i)$

1. If $i = 1$, delete $e$ from $G_1$ and recompute the shortest path tree $T$ in $G_1$. Then, return.

2. Delete $e$ from $G_i$. Update the vertex set $P_i$ using Theorem 6.1.

3. Let $D_i^{new}$ denote the set of edges that were just removed from $G_i$. That is, $D_i^{new}$ contains $e$ and all edges incident to vertices that were newly added to $P_i$.

4. For each $e \in D_i^{new}$:

(a) Let $\mathcal{P}_{i-1}^{(e)}$ be the set of paths from the embedding $\mathcal{P}_{i-1}$ of $G_{i-1}$ that contain $e$. Let $D_{i-1}^{(e)}$ be the set of edges in $E(G_{i-1})$ corresponding to $\mathcal{P}_{i-1}^{(e)}$.

(b) Call Delete$(i-1, e')$ for each $e' \in D_{i-1}^{(e)}$.

5. Whenever there have been more than $d_{i-1} = \gamma_L(\alpha_{wit})\mathrm{vol}(G_{i-1}^{(0)})/n^{1/L}$ deletions to $G_{i-1}^{(0)}$, call GrowTree$(i)$.

Algorithm 3 Query$(i, u, v)$ where $u, v \in V(G_i)$

1. If $i = 1$, return a $u$-$v$ path by traversing $T^{in}$ and $T^{out}$.

2. Let $Q_{uu'}$ be the path in $T_i^{in}$ from $u$ to $u' \in V(G_{i-1})$. Let $Q_{v'v}$ be the path in $T_i^{out}$ from $v' \in V(G_{i-1})$ to $v$.

3.
Let $R_{u'v'} = \mathrm{Query}(i-1, u', v')$ be the returned $u'$-$v'$ path in $G_{i-1}$.

4. Let $Q_{u'v'}$ be obtained by concatenating, over all $e' \in R_{u'v'}$, the corresponding paths from the embedding $\mathcal{P}_{i-1}$ of $G_{i-1}$.

5. Return the concatenation $Q_{uv} = Q_{uu'} \circ Q_{u'v'} \circ Q_{v'v}$ as a path in $G_i$. (Note that $Q_{uu'}, Q_{u'v'}, Q_{v'v}$ are, strictly speaking, paths in $G'_i$.)

It remains to bound the total update time in Lemma E.5 and to prove the guarantee about Query$(q, u, v)$ in Lemma E.6.

Lemma E.5. The total update time is $O(m^{1+1/q}/\gamma_L^{O(q)}(\alpha_{wit}))$.

Proof. Let Time$(i)$ be the total update time that the data structure at level $i$ takes to handle $d_i = \gamma_L(\alpha_{wit})\mathrm{vol}(G_i^{(0)})/n^{1/L}$ edge deletions in $G_i^{(0)}$. So Time$(q)$ is the total update time of our algorithm. For each level $i$, throughout $d_i$ edge deletions in $G_i^{(0)}$, the total volume of edges pruned out by Theorem 6.1 is $O(d_i n^{1/L}/\gamma_L(\alpha_{wit})) \le \mathrm{vol}(G_i^{(0)})/2$ (by scaling $d_i$ by some constant). As the embedding of $G_{i-1}$ into $G_i$ has congestion at most $\mathrm{cong} = \tilde{O}(1/\gamma_L(\alpha_{wit}))$, this corresponds to at most $\mathrm{cong} \cdot \mathrm{vol}(G_i^{(0)})$ edge deletions to $G_{i-1}$. As we call GrowTree$(i)$ only when there have been more than $d_{i-1} = \gamma_L(\alpha_{wit})\mathrm{vol}(G_{i-1}^{(0)})/n^{1/L}$ deletions to $G_{i-1}^{(0)}$, the number of calls to GrowTree$(i)$ throughout $d_i$ deletions is at most
$$\frac{\mathrm{cong} \cdot \mathrm{vol}(G_i^{(0)})}{\gamma_L(\alpha_{wit})\mathrm{vol}(G_{i-1}^{(0)})/n^{1/L}} = \tilde{O}(m^{1/q} n^{1/L}/\gamma_L^2(\alpha_{wit})),$$
where we use the fact that $|V(G_i)| = m^{i/q}$ and $\mathrm{vol}(G_i) = \tilde{O}(|V(G_i)|)$ by Lemma E.3.

Consider the total work for executing GrowTree$(i)$ and maintaining the data structure until right before the next call to GrowTree$(i)$. We divide the work into two parts: first, the work for executing GrowTree$(i)$ itself (which embeds $G_{i-1}^{(0)}$ into $G_i$); second, the work for maintaining the data structure at level $i-1$ over $d_{i-1}$ deletions to $G_{i-1}^{(0)}$.
The second part takes at most Time$(i-1)$ by definition.

Now, we analyze the first part, the work for executing GrowTree$(i)$. Consider Algorithm 1. Embedding $G_{i-1}^{(0)}$ into $G_i$ takes time $O(\mathrm{vol}(G_i)/(\alpha_{wit}\cdot\gamma_L(\alpha_{wit})))$ by Lemma E.3. Theorem 6.1 takes $\tilde{O}(\frac{\mathrm{vol}(G_i)\, n^{1/L}}{\gamma_L(\alpha_{wit})})$ time. The total time for maintaining the ES-trees $T_i^{in}, T_i^{out}$ is also $\tilde{O}(\mathrm{vol}(G_i)/\gamma_L(\alpha_{wit}))$. So each call to GrowTree$(i)$ takes at most $\tilde{O}(m^{i/q} n^{1/L}/\gamma_L(\alpha_{wit}))$ time. Therefore, we have
$$\mathrm{Time}(i) = \left(\tilde{O}(m^{i/q} n^{1/L}/\gamma_L(\alpha_{wit})) + \mathrm{Time}(i-1)\right) \times \tilde{O}(m^{1/q} n^{1/L}/\gamma_L^2(\alpha_{wit})).$$
Solving this recursion, we have Time$(i) = O(m^{(i+1)/q} n^{i/L}\log^{O(i)}(m)/\gamma_L^{O(i)}(\alpha_{wit}))$. So
$$\mathrm{Time}(q) = O(m^{1+1/q} n^{q/L}\log^{O(q)}(m)/\gamma_L^{O(q)}(\alpha_{wit})) = O(m^{1+1/q}/\gamma_L^{O(q)}(\alpha_{wit}))$$
because $L = 3^q$ and $\gamma_L(\alpha_{wit}) \ll 1/\log^{O(1)} m$, as desired.

Lemma E.6. Given any pair of vertices $u, v \in V(G) - P$, Query$(q, u, v)$ returns a $u$-$v$ (possibly non-simple) path $Q$ of length $1/\gamma_L^{O(q)}(\alpha_{wit})$ in $O(|Q|)$ time.

Proof. Let Len$(i)$ be the maximum length of the path in $G_i$ returned by Query$(i, u, v)$. As $G_i$ is always a $\gamma_L(\alpha_{wit})$-expander by Theorem 6.1, the diameter of $G_i$ is $O(\log(n)/\gamma_L(\alpha_{wit}))$, so $T_i^{in}$ and $T_i^{out}$ span $G'_i$. Consider Algorithm 3. Let $Q_{uu'}$ be the path in $G'_i$ from $u$ to $u' \in V(G_{i-1})$ and let $Q_{v'v}$ be the path in $G'_i$ from $v' \in V(G_{i-1})$ to $v$. As $T_i^{in}$ and $T_i^{out}$ span $G'_i$, $Q_{uu'}$ and $Q_{v'v}$ do exist. Let $R_{u'v'} = \mathrm{Query}(i-1, u', v')$, where $|R_{u'v'}| \le \mathrm{Len}(i-1)$, and let $Q_{u'v'}$ be obtained by concatenating, over all $e' \in R_{u'v'}$, the corresponding paths from the embedding $\mathcal{P}_{i-1}$ of $G_{i-1}$. We have $|Q_{u'v'}| \le \ell \cdot |R_{u'v'}|$, where $\ell = \tilde{O}(1/\gamma_L(\alpha_{wit}))$ is the length bound on the embedding paths from Lemma E.3. It is clear that the concatenation $Q_{uu'} \circ Q_{u'v'} \circ Q_{v'v}$ is indeed a $u$-$v$ path in $G'_i$ and hence in $G_i$. The length of this path is at most
$$\mathrm{Len}(i) = O(\log(n)/\gamma_L(\alpha_{wit})) + \tilde{O}(1/\gamma_L(\alpha_{wit})) \cdot \mathrm{Len}(i-1).$$
Solving the recursion gives us Len$(i) = 1/\gamma_L^{O(i)}(\alpha_{wit})$. So Query$(q, u, v)$ returns a (possibly non-simple) $u$-$v$ path of length $1/\gamma_L^{O(q)}(\alpha_{wit})$.
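As an illustration of how the Len recursion resolves, it can be evaluated numerically against its geometric-series closed form. The concrete values of $a$ and $b$ below are arbitrary placeholders for $O(\log(n)/\gamma_L(\alpha_{wit}))$ and $\tilde{O}(1/\gamma_L(\alpha_{wit}))$, not values from the paper:

```python
def len_rec(i, a, b):
    """Evaluate Len(i) = a + b * Len(i - 1) with base case Len(1) = a.
    Closed form (geometric series): a * (b**i - 1) / (b - 1)."""
    return a if i == 1 else a + b * len_rec(i - 1, a, b)
```

For $b \ge 2$ this is $a \cdot O(b^i)$, which becomes $1/\gamma_L^{O(i)}(\alpha_{wit})$ once both $a$ and $b$ are bounded by $1/\gamma_L^{O(1)}(\alpha_{wit})$, matching the solution stated above.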