Deterministic Algorithms for Decremental Shortest Paths via Layered Core Decomposition
Julia Chuzhoy∗    Thatchaphol Saranurak†

Abstract
In the decremental single-source shortest paths (SSSP) problem, the input is an undirected graph G = (V, E) with n vertices and m edges undergoing edge deletions, together with a fixed source vertex s ∈ V. The goal is to maintain a data structure that supports shortest-path queries: given a vertex v ∈ V, quickly return an (approximate) shortest path from s to v. The decremental all-pairs shortest paths (APSP) problem is defined similarly, but now the shortest-path queries are allowed between any pair of vertices of V.

Both problems have been studied extensively since the 80's, and algorithms with near-optimal total update time and query time have been discovered for them. Unfortunately, all these algorithms are randomized and, more importantly, they need to assume an oblivious adversary – a drawback that prevents them from being used as subroutines in several known algorithms for classical static problems. In this paper, we provide new deterministic algorithms for both problems, which, by definition, can handle an adaptive adversary.

Our first result is a deterministic algorithm for the decremental SSSP problem on weighted graphs with O(n^{2+o(1)}) total update time, that supports (1 + ǫ)-approximate shortest-path queries, with query time O(|P| · n^{o(1)}), where P is the returned path. This is the first (1 + ǫ)-approximation adaptive-update algorithm supporting shortest-path queries in time below O(n), that breaks the O(mn) total update time bound of the classical algorithm of Even and Shiloach from 1981. Previously, Bernstein and Chechik [STOC'16, ICALP'17] provided an Õ(n²)-time deterministic algorithm that supports approximate distance queries, but unfortunately the algorithm cannot return the approximate shortest paths.
Chuzhoy and Khanna [STOC'19] showed an O(n^{2+o(1)})-time randomized algorithm for SSSP that supports approximate shortest-path queries in the adaptive adversary regime, but their algorithm only works in the restricted setting where only vertex deletions, and not edge deletions, are allowed, and it requires Ω(n) time to respond to shortest-path queries.

Our second result is a deterministic algorithm for the decremental APSP problem on unweighted graphs that achieves total update time O(n^{2.5+δ}), for any constant δ > 0, supports approximate distance queries in O(log log n) time, and supports approximate shortest-path queries in time O(|E(P)| · n^{o(1)}), where P is the returned path; the algorithm achieves an O(1)-multiplicative and n^{o(1)}-additive approximation on the path length. All previous algorithms for APSP either assume an oblivious adversary or have an Ω(n³) total update time when m = Ω(n²), even if an o(n)-multiplicative approximation is allowed.

To obtain both our results, we improve and generalize the layered core decomposition data structure introduced by Chuzhoy and Khanna to be nearly optimal in terms of various parameters, and introduce a new generic approach of rooting Even-Shiloach trees at expander sub-graphs of the given graph. We believe both these technical tools to be interesting in their own right and anticipate them to be useful for designing future dynamic algorithms that work against an adaptive adversary.

∗ Toyota Technological Institute at Chicago. Email: [email protected]. Part of the work was done while the author was a Weston visiting professor at the Department of Computer Science and Applied Mathematics, Weizmann Institute. Supported in part by NSF grant CCF-1616584.
† Toyota Technological Institute at Chicago. Email: [email protected].

Contents

3.5 Short-Core-Path Queries
3.6 Maintaining the Structure of the Sublayers
3.7 Bounding the Number of Phases and the Number of Cores
3.8 Bounding the Number of Moves into the Buffer Sublayers: Proof of Lemma 3.17
3.9 Existence of Short Paths to the Cores
3.10 The Incident-Edge Data Structures
3.11 Total Update Time, and Data Structures to Support Short-Core-Path and To-Core-Path Queries
3.12 Supporting Short-Path Queries

A Proofs Omitted from Section 2
A.1 Proof of Observation 2.3: Degree Pruning

B Proofs Omitted from Section 3
B.1 Proof of Observation 3.3: Bounding Number of Edges Incident to Layers
B.2 Existence of Expanding Core Decomposition
B.3 Proof of Theorem 3.6: Strong Expander Decomposition
B.4 Proof of Theorem 3.8: Embedding Small Expanders

C Application: Maximum Bounded-Cost Flow
C.1 A Multiplicative Weight Update-Based Flow Algorithm
C.2 Efficient Implementation Using Decremental SSSP

D Additional Applications
D.1 Concurrent k-commodity Bounded-Cost Flow
D.2 Maximum k-commodity Bounded-Cost Flow
D.3 Most-Balanced Sparsest Vertex Cut
D.4 Treewidth and Tree Decompositions

E Tables
Introduction
In the decremental single-source shortest path (
SSSP) problem, the input is an undirected graph G = (V, E) with n vertices and m edges undergoing edge deletions, together with a fixed source vertex s ∈ V. The goal is to maintain a data structure that supports shortest-path queries: given a vertex v ∈ V, quickly return an (approximate) shortest path from s to v. We also consider distance queries: given a vertex v ∈ V, return an approximate distance from s to v. The decremental all-pairs shortest path (APSP) problem is defined similarly, but now the shortest-path and distance queries are allowed between any pair u, v ∈ V of vertices. A trivial algorithm for both problems is to simply maintain the current graph G, and, given a query between a pair u, v of vertices, run a BFS from one of these vertices, to report the shortest path between v and u in time O(m). Our goal therefore is to design an algorithm whose query time – the time required to respond to a query – is significantly lower than this trivial O(m) bound, while keeping the total update time – the time needed for maintaining the data structure over the entire sequence of updates, including the initialization – as small as possible. Observe that the best query time for shortest-path queries one can hope for is O(|E(P)|), where P is the returned path. Both SSSP and
APSP are among the most well-studied dynamic graph problems. While almost optimal algorithms are known for both of them, all such algorithms are randomized and, more importantly, they assume an oblivious adversary. In other words, the sequence of edge deletions must be fixed in advance and cannot depend on the algorithm's responses to queries. Much of the recent work in the area of dynamic graphs has focused on developing so-called adaptive-update algorithms, that do not assume an oblivious adversary (see e.g. [NS17, WN17, NSW17, CGL+19] for dynamic connectivity, [BHI15, BHN16, BK19, Waj20] for dynamic matching, and [BC16, BC17, FHN16, Ber17, CK19, GWN20, BvdBG+20] for dynamic shortest paths); we also say that such algorithms work against an adaptive adversary. One of the motivating reasons to consider adaptive-update algorithms is that several algorithms for classical static problems need to use, as subroutines, dynamic graph algorithms that can handle adaptive adversaries (see e.g. [ST83, Mad10, CK19, CQ17]). In this paper, we provide new deterministic algorithms for both
SSSP and
APSP which, by definition, can handle an adaptive adversary.

Throughout this paper, we use the Õ notation to hide poly log n factors, and the Ô notation to hide n^{o(1)} factors, where n is the number of vertices in the input graph. We also assume that ǫ > 0.

SSSP.
The current understanding of decremental
SSSP in the oblivious-adversary setting is almost complete, even for weighted graphs. Forster, Henzinger, and Nanongkai [FHN14a], improving upon the previous work of Bernstein and Roditty [BR11] and Forster et al. [FHN14b], provided a (1 + ǫ)-approximation algorithm, with close to the best possible total update time of Ô(m log L), where L is the ratio of largest to smallest edge length. The query time of the algorithm is also near optimal: approximate distance queries can be processed in Õ(1) time, and approximate shortest-path queries in Õ(|E(P)|) time, where P is the returned path. Due to known conditional lower bounds of Ω̂(mn) on the total update time for the exact version of SSSP, the guarantees provided by this algorithm are close to the best possible. Unfortunately, all these algorithms are randomized and need to assume an oblivious adversary.

(Even though in extreme cases, where the graph is very sparse and the path P is very long, O(|E(P)|) query time may be comparable to O(m), for brevity, we will say that O(|E(P)|) query time is below the O(m) barrier, as is typically the case. For similar reasons, we will say that O(|E(P)|) query time is below O(n) query time. The lower bounds assume the Boolean Matrix Multiplication (BMM) conjecture [DHZ00, RZ11] or the Online Matrix-vector Multiplication (OMv) conjecture [FHNS15], and show that in order to achieve O(n^{1−ǫ}) query time, for any constant ǫ > 0, a total update time of Ω(n^{3−o(1)}) is required in graphs with m = Θ(n²).)

The classical algorithm of Even and Shiloach [ES81], which we refer to as ES-Tree throughout this paper, combined with the standard weight rounding technique (e.g. [Zwi98, Ber16]), gives a (1 + ǫ)-approximate deterministic algorithm for SSSP with Õ(mn log L) total update time and near-optimal query time. This bound was first improved by Bernstein [Ber17], generalizing a similar result of [BC16] for unweighted graphs, to Õ(n² log L) total update time. For the setting of sparse unweighted graphs, Bernstein and Chechik [BC17] designed an algorithm with total update time Õ(n^{5/4}√m) ≤ Õ(mn^{3/4}), and Gutenberg and Wulff-Nielsen [GWN20] showed an algorithm with Ô(m√n) total update time.

Unfortunately, all of the above mentioned algorithms only support distance queries, but they cannot handle shortest-path queries. Recently, Chuzhoy and Khanna [CK19] attempted to fix this drawback, and obtained a randomized (1 + ǫ)-approximation adaptive-update algorithm with total expected update time Ô(n² log L), that supports shortest-path queries. Unfortunately, this algorithm has several other drawbacks. First, it is randomized. Second, the expected query time of Õ(n log L) may be much higher than the desired time that is proportional to the number of edges on the returned path. Lastly, and most importantly, the algorithm only works in the more restricted setting where only vertex deletions are allowed, as opposed to the more standard and general model with edge deletions. Finally, a very recent work by Bernstein et al. [BvdBG+20] gives a randomized (1 + ǫ)-approximate algorithm with Ô(m√n) total update time that can return an approximate shortest path P in Õ(n) time (but not in time proportional to |E(P)|).
The algorithm is randomized but works against an adaptive adversary.

As mentioned already, algorithms for approximate decremental SSSP are often used as subroutines in algorithms for static graph problems, including various flow and cut problems that we discuss below. Typically, in these applications, the following properties are desired from the algorithm for decremental SSSP:

• it should work against an adaptive adversary, and ideally it should be deterministic;
• it should be able to handle edge deletions (as opposed to only vertex deletions);
• it should support shortest-path queries, and not just distance queries; and
• it should have query time for shortest-path queries that is close to O(|E(P)|), where P is the returned path.

In this paper we provide the first algorithm for decremental SSSP that satisfies all of the above requirements and improves upon the classical Ω(mn) bound of Even and Shiloach [ES81]. The total update time of the algorithm is Ô(n² log L), which is almost optimal for dense graphs.

Theorem 1.1 (Weighted SSSP)
There is a deterministic algorithm, that, given a simple undirected edge-weighted n-vertex graph G undergoing edge deletions, a source vertex s, and a parameter ǫ ∈ (1/n, 1), maintains a data structure in total update time Ô(n² · (log L)/ǫ²), where L is the ratio of largest to smallest edge weights, and supports the following queries:

• dist-query(s, v): in O(log log(nL)) time return an estimate d̃ist(s, v), with dist_G(s, v) ≤ d̃ist(s, v) ≤ (1 + ǫ) dist_G(s, v); and
• path-query(s, v): either declare that s and v are not connected in G in O(1) time, or return an s-v path P of length at most (1 + ǫ) dist_G(s, v), in time Ô(|E(P)| log log L).

(We emphasize that the vertex-decremental version is known to be strictly easier than the edge-decremental version for some problems. For example, there is a vertex-decremental algorithm for maintaining the exact distance between a fixed pair (s, t) of vertices in unweighted undirected graphs using subquadratic total update time [San05] (later improved in [vdBNS19]), but the edge-decremental version requires Ω̂(n²) time when m = Ω(n²), assuming the OMv conjecture [FHNS15]. A similar separation holds for decremental exact APSP.)

Compared to the algorithm of [Ber17], our deterministic algorithm supports shortest-path, and not just distance queries, while having the same total update time up to a subpolynomial factor. Compared to the algorithm of [CK19], our algorithm handles the more general setting of edge deletions, is deterministic, and has faster query time. Compared to the work of [BvdBG+20] that is concurrent with this paper, our algorithm is deterministic and has a faster query time, though its total update time is somewhat slower for sparse graphs.

These improvements over previous works allow us to obtain faster algorithms for a number of classical static flow and cut problems; see Appendices C and D for more details. Most of the resulting algorithms are deterministic. For example, we obtain a deterministic algorithm for (1 + ǫ)-approximate minimum-cost flow in unit edge-capacity graphs in Ô(n²) time. The previous algorithms by [LS14, AMV20] take time Õ(min{m√n, m^{4/3}}), which is slower in dense graphs.

APSP.
Our understanding of decremental
APSP is also almost complete in the oblivious-adversary setting, even in weighted graphs. Bernstein [Ber16], improving upon the works of Baswana et al. [BHS07] and Roditty and Zwick [RZ12], obtained a (1 + ǫ)-approximation algorithm with Õ(mn log L) total update time, O(1) query time for distance queries, and Õ(|E(P)|) query time for shortest-path queries. These bounds are conditionally optimal for small approximation factors. Another line of work [BR11, FHN16, ACT14, FHN14a], focusing on larger approximation factors, recently culminated with a near-optimal result by Chechik [Che18]: for any integer k ≥ 1, the algorithm of [Che18] provides a ((2 + ǫ)k − 1)-approximation, with Ô(mn^{1/k} log L) total update time, O(log log(nL)) query time for distance queries and Õ(|E(P)|) query time for shortest-path queries. This result is near-optimal because all parameters almost match the best static algorithm of Thorup and Zwick [TZ01]. Unfortunately, both algorithms of Bernstein [Ber16] and of Chechik [Che18] need to assume an oblivious adversary.

In contrast, our current understanding of adaptive-update algorithms is very poor even for unweighted graphs. The classical ES-Tree algorithm of Even and Shiloach [ES81] implies a deterministic algorithm for decremental exact
APSP in unweighted graphs with O(mn²) total update time and optimal query time of O(|E(P)|), where P is the returned path. This running time was first improved by Forster, Henzinger, and Nanongkai [FHN16], who showed a deterministic (1 + ǫ)-approximation algorithm with Õ(mn) total update time and O(log log n) query time for distance queries. Recently, Gutenberg and Wulff-Nilsen [GWN20] significantly simplified this algorithm. Despite a long line of research, the state-of-the-art in terms of total update time remains Õ(mn), which can be as large as Θ̃(n³) in dense graphs, in any algorithm whose query time is below the O(n) bound. To highlight our lack of understanding of the problem, no adaptive algorithms that attain an o(n³) total update time and query time below O(n) for shortest-path queries are currently known for any density regime, even if we allow huge approximation factors, such as, for example, any o(n)-approximation.

(Bernstein's algorithm works even in directed graphs. The conditional optimality assumes the BMM conjecture [DHZ00, RZ11] or the OMv conjecture [FHNS15], which imply that such algorithms require either Ω̂(n³) total update time or Ω̂(n) query time in undirected unweighted graphs when m = Ω(n²). When we allow a factor-n approximation, one can use deterministic decremental connectivity algorithms (e.g. [HdLT01]) with Õ(m) total update time and O(log n) query time for distance queries.)
In this work, we break this barrier by providing the first deterministic algorithm with sub-cubic total update time, that achieves a constant multiplicative and a subpolynomial additive approximation:
Theorem 1.2 (Unweighted APSP)
There is a deterministic algorithm, that, given a simple unweighted undirected n-vertex graph G undergoing edge deletions and a parameter 2 ≤ k ≤ o(log^{1/4} n), maintains a data structure using total update time of Ô(n^{2.5+2/k}) and supports the following queries:

• dist-query(u, v): in O(log n log log n) time return an estimate d̃ist(u, v), where dist_G(u, v) ≤ d̃ist(u, v) ≤ 3 · 2^k · dist_G(u, v) + Ô(1); and
• path-query(u, v): either declare that u and v are not connected in O(log n) time, or return a u-v path P of length at most 3 · 2^k · dist_G(u, v) + Ô(1), in Ô(|E(P)|) time.

The additive approximation term in dist-query and path-query is exp(O(k log^{3/4} n)) = Ô(1). For example, by letting k be a large enough constant, we can obtain a total update time of Ô(n^{2.5+δ}) for any constant δ > 0, constant multiplicative approximation, and exp(O(log^{3/4} n)) additive approximation.

We note that the concurrent work of [BvdBG+20] on dynamic spanners that was mentioned above implies a randomized Õ(1)-multiplicative approximation adaptive-update algorithm for APSP with Õ(m) total update time, but it requires a large Õ(n) query time even for distance queries; in contrast, our algorithm is deterministic and has faster query times: Ô(|E(P)|) for shortest-path and O(log n log log n) for distance queries.

Technical Highlights.
Both our algorithms for
SSSP and
APSP are based on the
Layered Core Decomposition (LCD) data structure introduced by Chuzhoy and Khanna [CK19]. Informally, one may think of the data structure as maintaining a "compressed" version of the graph. Specifically, it maintains a decomposition of the current graph G into a relatively small number of expanders (called cores), such that every vertex of G either lies in one of the cores, or has a short path connecting it to one of the cores. The data structure supports approximate shortest-path queries within the cores, and queries that return, for every vertex of G, a short path connecting it to one of the cores. Chuzhoy and Khanna [CK19] presented a randomized algorithm for maintaining the LCD data structure, as the graph G undergoes vertex deletions, with total update time Ô(n²). As our first main technical contribution, we improve and generalize their algorithm in a number of ways: first, our algorithm is deterministic; second, it can handle the more general setting of edge deletions and not just vertex deletions; we improve the total update time to the near optimal bound of Ô(m); and we improve the query times of this algorithm. We further motivate this data structure and discuss the technical barriers that we needed to overcome in order to obtain these improvements in Section 3. We believe that the LCD data structure is of independent interest and will be useful in future adaptive-update dynamic algorithms. Indeed, a near-optimal short-path oracle on decremental expanders (from Section 3.2), which is one of the technical ingredients of our
LCD data structure, has already found further applications in other algorithms for dynamic problems [BGS20].

Our second main contribution is a new generic method to exploit the Even-Shiloach tree (
ES-Tree) data structure. Many previous algorithms for SSSP and
APSP [BR11, FHN14a, FHN16, Che18] need to maintain a collection T of several ES-Trees. One drawback of this approach is that, whenever the root of an
ES-Tree is disconnected due to a sequence of edge deletions, we need to reinitialize a new
ES-Tree, leading to high total update time. To overcome this difficulty, most such algorithms (here, we generally include variants such as the monotone ES-Tree) choose the roots of the trees at random, so that they are "hidden" from an oblivious adversary, and hence cannot be disconnected too often. Clearly, this approach fails completely against an adaptive adversary, that can repeatedly delete edges incident to the roots of the trees.

In order to withstand an adaptive adversary, we introduce the idea of "rooting an ES-Tree at an expander" instead. As an expander is known to be robust against edge deletions even from an adaptive adversary [NS17, NSW17, SW19], the adversary cannot disconnect the "expander root" of the tree too often, leading to smaller total update time. The
LCD data structure naturally allows us to apply this high-level idea, as it maintains a relatively small number of expander subgraphs (cores). This leads to our algorithm for
APSP in the small distance regime. We also use this idea to implement the short-path oracle on expanders. We believe that our general approach of "rooting a tree at an expander" instead of "rooting a tree at a random location" will be a key technique for future adaptive-update algorithms. This idea was already exploited in a different way in a recent subsequent work [BGS20].
Organization.
We provide preliminaries in Section 2. Section 3 focuses on our main technical contribution: the new
LCD data structure. We exploit this data structure to obtain our algorithms for
SSSP and
APSP in Section 4 and Section 5, respectively. The new cut/flow applications of our
SSSP algorithm (that exploit known reductions) appear in Appendices C and D.
Preliminaries

All graphs considered in this paper are undirected and simple, so they do not have parallel edges or self-loops. Given a graph G and a vertex v ∈ V(G), we denote by deg_G(v) the degree of v in G. Given a length function ℓ : E(G) → R on the edges of G, for a pair u, v of vertices in G, we denote by dist_G(u, v) the length of the shortest path connecting u to v in G, with respect to the edge lengths ℓ(e). As the graph G undergoes edge deletions, the notation deg_G(v) and dist_G(u, v) always refers to the current graph G. For a path P in G, we denote |P| = |E(P)|.

Given a graph G and a subset S of its vertices, let G[S] be the subgraph of G induced by S. We denote by δ_G(S) the total number of edges of G with exactly one endpoint in set S, and we let E_G(S) be the set of all edges of G with both endpoints in S. Given two subsets A, B of vertices of G, we let E_G(A, B) denote the set of all edges with one endpoint in A and another in B. The volume of a vertex set S is vol_G(S) = Σ_{v∈S} deg_G(v). If S is a set of vertices with 1 ≤ |S| < |V(G)|, then we may refer to S as a cut, and we denote S̄ = V(G) \ S. We let the conductance of the cut S be Φ_G(S) = δ_G(S) / min{vol_G(S), vol_G(S̄)}. We may omit the subscript G when clear from context. We denote vol(G) = Σ_{v∈V(G)} deg_G(v) = 2|E(G)|. Given a graph G, we let its conductance Φ(G) be the minimum, over all cuts S, of Φ_G(S). Notice that 0 ≤ Φ(G) ≤ 1. We say that G is a ϕ-expander iff Φ(G) ≥ ϕ.

Suppose we are given a graph G and a sub-graph G′ ⊆ G. We say that G′ is a strong ϕ-expander with respect to G iff for every partition (S, S̄) of V(G′) into non-empty subsets, δ_{G′}(S) / min{vol_G(S), vol_G(S̄)} ≥ ϕ (note that in the denominator, the volumes of the sets S, S̄ of vertices are taken in graph G, not in G′ as in the definition of ϕ-expansion of G′).
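As a concrete illustration of these definitions, the sketch below computes δ_G(S), vol_G(S), and Φ_G(S) directly from the formulas above; the graph representation as a list of adjacency sets is our own choice, not part of the paper. The brute-force Φ(G) enumerates all cuts and is exponential time, for intuition only – no algorithm in this paper ever enumerates cuts.

```python
from itertools import combinations

def volume(adj, S):
    # vol_G(S): sum of degrees of vertices in S.
    return sum(len(adj[v]) for v in S)

def boundary(adj, S):
    # delta_G(S): number of edges with exactly one endpoint in S.
    return sum(1 for u in S for w in adj[u] if w not in S)

def conductance_of_cut(adj, S):
    # Phi_G(S) = delta_G(S) / min(vol_G(S), vol_G(complement of S)).
    S = set(S)
    S_bar = set(range(len(adj))) - S
    return boundary(adj, S) / min(volume(adj, S), volume(adj, S_bar))

def conductance(adj):
    # Phi(G): minimum of Phi_G(S) over all cuts. Since Phi_G(S) = Phi_G(S_bar),
    # it suffices to try one side of each cut, of size at most n/2.
    n = len(adj)
    best = 1.0
    for size in range(1, n // 2 + 1):
        for S in combinations(range(n), size):
            best = min(best, conductance_of_cut(adj, S))
    return best
```

For example, on a 4-cycle, the cut separating two adjacent vertices from the other two has two boundary edges and volume 4 on each side, giving conductance 1/2, which is also Φ(G) here.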
It is easy to verify that, if G′ is a strong ϕ-expander with respect to G, then it is also a ϕ-expander. The following two simple observations follow from the definition of a strong ϕ-expander.

Observation 2.1
Let G be a graph such that for all v ∈ V(G), deg_G(v) ≥ h for some h > 0, and let G′ ⊆ G be a strong ϕ-expander with respect to G, for some 0 < ϕ < 1, such that |V(G′)| ≥ 2. Then, for every vertex v ∈ V(G′), deg_{G′}(v) ≥ ϕh.

Proof: Assume otherwise, and let v ∈ V(G′) be any vertex with deg_{G′}(v) < ϕh. Consider the cut (S, S̄) of V(G′), where S = {v} and S̄ = V(G′) \ {v}. Then vol_G(S) ≥ h and vol_G(S̄) ≥ h, but δ_{G′}(S) = deg_{G′}(v) < ϕh, contradicting the fact that G′ is a strong ϕ-expander with respect to G.

Observation 2.2
Let G be a graph and let G′ be a sub-graph of G containing at least two vertices, such that G′ is a strong ϕ-expander with respect to G, for some 0 < ϕ < 1. Then, for every vertex v ∈ V(G′) with deg_G(v) ≤ vol_G(V(G′))/2, deg_{G′}(v) ≥ ϕ deg_G(v) must hold.

Proof:
Consider the cut ({v}, V(G′) \ {v}) in G′. Then deg_{G′}(v)/deg_G(v) = δ_{G′}({v}) / min{vol_G({v}), vol_G(V(G′) \ {v})} ≥ ϕ must hold, as G′ is a strong ϕ-expander with respect to G. Therefore, deg_{G′}(v) ≥ ϕ deg_G(v).

Given a graph G, its k-orientation is an assignment of a direction to each undirected edge of G, so that each vertex of G has out-degree at most k. For a given orientation of the edges, for each vertex u ∈ V(G), we denote by in-deg_G(u) and out-deg_G(u) the in-degree and out-degree of u, respectively. Note that, if G has a k-orientation, then for every subset S ⊆ V of its vertices, |E_G(S)| ≤ k · |S| must hold, and, in particular, |E(G)| ≤ k · |V(G)|. We say that a set F ⊆ E(G) of edges has a k-orientation if the graph induced by F has a k-orientation.

Decremental Connectivity/Spanning Forest.
We use the results of [HdLT01], who provide a deterministic data structure, that we denote by
CONN-SF(G), that, given an n-vertex unweighted undirected graph G, that is subject to edge deletions, maintains a spanning forest of G, with total update time O((m + n) log² n), where m is the number of edges in the initial graph G. Moreover, the data structure supports connectivity queries conn(G, u, v): given a pair u, v of vertices of G, return "yes" if u and v are connected in G, and "no" otherwise. The running time to respond to each such query is O(log n / log log n).

Even-Shiloach Trees.
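Before the formal summary, a toy sketch may help build intuition. This is our own simplified rendering for unweighted graphs (unit lengths) under edge deletions only, not the implementation of [ES81] or [BC16]: it maintains BFS levels ℓ(v) = dist(s, v) up to a bound D, and after each deletion restores the invariant ℓ(v) = 1 + min_{w∈N(v)} ℓ(w) for v ≠ s, capped at D + 1. Levels only increase under deletions, which makes the naive fix-up loop below correct, though it lacks the per-vertex bookkeeping that yields the O(mD) total update time discussed next.

```python
from collections import deque

class SimpleESTree:
    """Simplified Even-Shiloach tree sketch: maintains dist(s, v) up to a
    bound D in an unweighted undirected graph under edge deletions."""

    def __init__(self, n, edges, s, D):
        self.s, self.D = s, D
        self.INF = D + 1                       # stands for "farther than D"
        self.adj = [set() for _ in range(n)]
        for u, v in edges:
            self.adj[u].add(v)
            self.adj[v].add(u)
        # Initialize levels by a BFS from s, truncated at depth D.
        self.level = [self.INF] * n
        self.level[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            if self.level[u] == D:
                continue
            for w in self.adj[u]:
                if self.level[w] == self.INF:
                    self.level[w] = self.level[u] + 1
                    q.append(w)

    def dist(self, v):
        # dist(s, v) if it is at most D, else None.
        return self.level[v] if self.level[v] <= self.D else None

    def delete_edge(self, u, v):
        self.adj[u].discard(v)
        self.adj[v].discard(u)
        # Restore level[x] = 1 + min over neighbors (capped at INF), x != s.
        q = deque([u, v])
        while q:
            x = q.popleft()
            if x == self.s:
                continue
            best = min((self.level[w] for w in self.adj[x]), default=self.INF)
            new = min(best + 1, self.INF)
            if new > self.level[x]:
                self.level[x] = new
                for w in self.adj[x]:
                    q.append(w)    # neighbors may need to rise as well
```

On a 4-cycle with source s, deleting the two edges on one side of a vertex forces its level to rise step by step until it exceeds D and the vertex is reported as out of range.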
Suppose we are given a graph G = (V, E) with integral lengths ℓ(e) ≥ 1 for all e ∈ E, a source s, and a distance bound D ≥ 1. The Even-Shiloach Tree (
ES-Tree) algorithm maintains a shortest-path tree from vertex s, that includes all vertices v with dist(s, v) ≤ D, and, for every vertex v with dist(s, v) ≤ D, the distance dist(s, v). Typically, ES-Tree only supports edge deletions (see, e.g., [ES81, Din06, HK95]). However, as shown in [BC16, Lemma 2.4], it is easy to extend the data structure to also handle edge insertions in the following two cases: either (i) at least one of the endpoints of the inserted edge is a singleton vertex, or (ii) the distances from the source s to other vertices do not decrease due to the insertion. We denote the corresponding data structure from [BC16] by ES-Tree(G, s, D). It was shown in [BC16] that the total update time of
ES-Tree(G, s, D), including the initialization and all edge deletions, is O(mD + U), where U is the total number of updates (edge insertions or deletions), and m is the total number of edges that ever appear in G.

Greedy Degree Pruning.
We consider a simple degree pruning procedure defined in [CK19]. Given a graph H and a degree bound d, the procedure computes a vertex set A ⊆ V(H), as follows. Start with A = V(H). While there is a vertex v ∈ A, such that fewer than d neighbors of v lie in A, remove v from A. We denote this procedure by Proc-Degree-Pruning(H, d) and denote by A the output of the procedure. The following observation was implicitly shown in [CK19]; for completeness, we provide its proof in Appendix A.

Observation 2.3
Let A be the outcome of procedure Proc-Degree-Pruning(H, d), for any graph H and integer d. Then A is the unique maximal vertex set such that every vertex in H[A] has degree at least d. That is, for any subset A′ of V(H) where H[A′] has minimum degree at least d, A′ ⊆ A must hold.

Consider now a graph H that undergoes edge deletions, and let A denote the outcome of procedure Proc-Degree-Pruning(H, d) when applied to the current graph. Notice that, from the above observation, set A is a decremental vertex set, that is, vertices can only leave the set, as edges are deleted from H. We use the following algorithm, that we call Alg-Maintain-Pruned-Set(H, d), that allows us to maintain the set A as the graph H undergoes edge deletions; the algorithm is implicit in [CK19].

The algorithm Alg-Maintain-Pruned-Set(H, d) starts by running
Proc-Degree-Pruning(H, d) on the original graph H. Recall that the procedure initializes A = V(H), and then iteratively deletes from A vertices v that have fewer than d neighbors in A. In the remainder of the algorithm, we simply maintain the degree of every vertex in H[A] as H undergoes edge deletions. Whenever, for any vertex v, deg_{H[A]}(v) falls below d, we remove v from A. Observe that vertex degrees in H[A] are monotonically decreasing. Moreover, each degree decrement at a vertex v can be charged to an edge that is incident to v and was deleted from H[A]. As each edge is charged at most twice, the total update time is O(|E(H)| + |V(H)|). Therefore, we obtain the following immediate observation.

Observation 2.4
The total update time of
Alg-Maintain-Pruned-Set is O(m + |V(H)|), where m is the number of edges that belonged to graph H at the beginning. Moreover, whenever the algorithm removes some vertex v from set A, vertex v has fewer than d neighbors in A in the current graph H.
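The two procedures above translate into a short sketch (our own rendering): `__init__` performs Proc-Degree-Pruning via a worklist, and `delete_edge` performs the maintenance step of Alg-Maintain-Pruned-Set, with removals cascading exactly as in the charging argument of Observation 2.4, so each edge is examined O(1) times overall.

```python
from collections import deque

class PrunedSet:
    """Maintain A = Proc-Degree-Pruning(H, d) as edges are deleted from H."""

    def __init__(self, adj, d):
        # adj: list of adjacency sets of the initial graph H.
        self.adj = [set(a) for a in adj]
        self.d = d
        self.in_A = [True] * len(adj)
        self.deg = [len(a) for a in adj]      # degree within H[A]
        self._cascade(range(len(adj)))        # initial pruning pass

    def _cascade(self, seeds):
        # Remove every vertex whose degree inside A fell below d; each
        # removal decrements its neighbors' degrees and may trigger them.
        q = deque(v for v in seeds if self.in_A[v] and self.deg[v] < self.d)
        while q:
            v = q.popleft()
            if not self.in_A[v]:
                continue
            self.in_A[v] = False
            for w in self.adj[v]:
                if self.in_A[w]:
                    self.deg[w] -= 1
                    if self.deg[w] < self.d:
                        q.append(w)

    def delete_edge(self, u, v):
        self.adj[u].discard(v)
        self.adj[v].discard(u)
        # A-degrees change only if both endpoints currently lie in A.
        if self.in_A[u] and self.in_A[v]:
            self.deg[u] -= 1
            self.deg[v] -= 1
            self._cascade([u, v])

    def current_set(self):
        return {v for v in range(len(self.adj)) if self.in_A[v]}
```

For instance, on a triangle with a pendant vertex and d = 2, the pendant is pruned immediately; deleting any triangle edge then cascades until A is empty, matching the decremental behavior guaranteed by Observation 2.3.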
Layered Core Decomposition ( LCD ), thatimproves and generalizes the data structure introduced in [CK19]. In order to define the data structure,we need to introduce the notions of virtual vertex degrees , and a partition of vertices into layers , whichwe do next.Suppose we are given an n -vertex m -edge graph G = ( V, E ) and a parameter ∆ >
1. We emphasizethat throughout this section, the input graph G is unweighted, and the length of a path P in G isthe number of its edges. Let d max be the largest vertex degree in G . Let r be the smallest integer,such that ∆ r − > d max . Note that r ≤ O (log ∆ n ). Next, we define degree thresholds h , h , . . . , h r ,as follows: h j = ∆ r − j . Therefore, h > d max , h r = 1, and for all 1 < j ≤ r , h j = h j − / ∆. Forconvenience, we also denote h r +1 = 0. Definition. (Virtual Vertex Degrees and Layers)
For all 1 ≤ j ≤ r, let A_j be the outcome of Proc-Degree-Pruning(G, h_j), when applied to the current graph G. The virtual degree gdeg(v) of v in G is the largest value h_j such that v ∈ A_j. If no such value exists, then gdeg(v) = h_{r+1} = 0. For all 1 ≤ j ≤ r + 1, let Λ_j = {v | gdeg(v) = h_j} denote the set of vertices whose virtual degree is h_j. We call Λ_j the j-th layer.

Note that for every vertex v ∈ V(G), gdeg(v) ∈ {h_1, . . . , h_{r+1}}. Also, Λ_1 = ∅, since all vertex degrees are below h_1, and Λ_{r+1}, the set of vertices with virtual degree 0, contains all isolated vertices. For all 1 ≤ j′ < j ≤ r + 1, we say that layer Λ_{j′} is above layer Λ_j. For convenience, we write Λ_{≤j} = ⋃_{j′=1}^{j} Λ_{j′} and Λ_{≥j} = ⋃_{j′=j}^{r+1} Λ_{j′}.

Observation 3.1
Throughout the algorithm, for each 1 ≤ j ≤ r + 1, for each vertex u ∈ Λ_j, deg_{≤j}(u) ≥ h_j, where deg_{≤j}(u) denotes the degree of u in G[Λ_{≤j}]. Therefore, the minimum vertex degree in G[Λ_{≤j}] is always at least h_j.
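As a concrete illustration of the pruning procedure and the resulting layer partition, consider the following toy sketch (illustrative code only; the actual data structure maintains these sets dynamically under edge deletions rather than recomputing them):

```python
from collections import deque

def degree_prune(adj, d):
    """Proc-Degree-Pruning sketch: repeatedly discard vertices whose degree
    among the surviving vertices drops below d. Returns the unique maximal
    set A such that every vertex of the induced subgraph on A has degree >= d.

    adj: dict mapping each vertex to the set of its neighbors (undirected)."""
    deg = {v: len(nbrs) for v, nbrs in adj.items()}
    alive = set(adj)
    queue = deque(v for v in adj if deg[v] < d)
    while queue:
        v = queue.popleft()
        if v not in alive:
            continue
        alive.remove(v)
        for u in adj[v]:
            if u in alive:
                deg[u] -= 1
                if deg[u] < d:
                    queue.append(u)
    return alive

def compute_layers(adj, delta):
    """Compute the layer partition: thresholds h_j = delta**(r - j), where r
    is the smallest integer with delta**(r - 1) > d_max; layer j holds the
    vertices of A_j that lie in no A_{j'} with j' < j, and layer r + 1 holds
    the vertices of virtual degree 0."""
    d_max = max((len(nbrs) for nbrs in adj.values()), default=0)
    r = 1
    while delta ** (r - 1) <= d_max:
        r += 1
    layers = {}
    assigned = set()
    for j in range(1, r + 1):
        a_j = degree_prune(adj, delta ** (r - j))
        layers[j] = a_j - assigned
        assigned |= a_j
    layers[r + 1] = set(adj) - assigned  # virtual degree h_{r+1} = 0
    return layers
```

For example, on a clique on vertices {0, 1, 2, 3} with a pendant vertex 4 attached to vertex 0, and ∆ = 2, the clique vertices end up in the layer with threshold h_j = 2, and the pendant vertex in the layer with threshold 1.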
As observed already, from Observation 2.3, over the course of the algorithm, vertices may only be deleted from Λ_{≤j} = A_j. This immediately implies the following observation:

Observation 3.2
As edges are deleted from G, for every vertex v, gdeg(v) may only decrease.

Throughout, we denote by n_{≤j} the number of vertices that belonged to Λ_{≤j} at the beginning of the algorithm, before any edges were deleted from the input graph. Observe that n_{≤j} · h_j ≤ 2m by Observation 3.1. The proof of the following observation appears in the Appendix.

Observation 3.3
For all 1 ≤ j ≤ r, let E_{≥j} be the set of all edges e, such that, at any point of time, at least one endpoint of e lay in Λ_{≥j}. Then E_{≥j} has a (∆·h_j)-orientation, and so |E_{≥j}| ≤ ∆·h_j·n. Moreover, the total number of edges e, such that, at any point of the algorithm's execution, both endpoints of e lay in Λ_j, is bounded by n_{≤j}·h_j·∆.

From Observation 3.1, all vertex degrees in G[Λ_{≤j}] are at least h_j, so, in a sense, graph G[Λ_{≤j}] is a high-degree graph. One advantage of high-degree graphs is that every pair of vertices lying in the same connected component of such a graph must have a short path connecting them; specifically, it is not hard to show that, if u, v are two vertices lying in the same connected component C of graph G[Λ_{≤j}], then there is a path connecting them in C, of length at most O(|V(C)|/h_j). This property of graphs G[Λ_{≤j}] is crucial to our algorithms for SSSP and
APSP , and one of the goals of the
LCD data structure is to support short-path queries: given a pair of vertices u, v ∈ Λ_{≤j}, either report that they lie in different connected components of G[Λ_{≤j}], or return a path of length at most roughly O(|V(C)|/h_j) connecting them, where C is the connected component of G[Λ_{≤j}] containing u and v.

Additionally, one can show that a high-degree graph must contain a core decomposition. Specifically, suppose we are given a simple n-vertex graph H, with minimum vertex degree at least h. Intuitively, a core of H is a vertex-induced subgraph K ⊆ H, such that, for ϕ = Ω(1/log n), graph K is a ϕ-expander, and all vertex degrees in K are at least ϕh/3. One can show that, if K is a core, then its diameter is O(log n/ϕ), and it is Ω(ϕh)-edge-connected. A core decomposition of H is a collection F = {K_1, . . . , K_t} of vertex-disjoint cores, such that, for each vertex u ∉ ⋃_{K∈F} V(K), there are at least 2h/3 edge-disjoint paths of length O(log n) from u to vertices in ⋃_{K∈F} V(K). The results of [CK19] implicitly show the existence of a core decomposition in a high-degree graph, albeit with a much more complicated definition of the cores and of the decomposition. For completeness, in Section B.2 of the Appendix, we formally state and prove a theorem about the existence of a core decomposition in a high-degree graph. Though we do not need this theorem for the results of this paper, we feel that it is an interesting graph-theoretic statement in its own right, that in a way motivates the LCD data structure, whose intuitive goal is to maintain a layered analogue of the core decomposition of the input graph G, as it undergoes edge deletions.

Formally, the LCD data structure receives as input an (unweighted) graph G undergoing edge deletions, and two parameters ∆ ≥ 2 and 1 ≤ q ≤ o(log^{1/4} n). It maintains the partition of V(G) into layers Λ_1, . . . , Λ_{r+1}, as described above, and additionally, for each layer Λ_j, the data structure maintains a collection F_j of vertex-disjoint subgraphs of the graph H_j = G[Λ_j], called cores (while we do not formally have any requirements from the cores, e.g. we do not formally require that a core is an expander, our algorithm will in fact still ensure that this is the case, so the intuitive description of the cores given above matches what our algorithm actually does). Throughout, we use an additional parameter γ(n) = exp(O(log^{3/4} n)) = Ô(1).
The data structure is required to support the following three types of queries:

• Short-Path(j, u, v): Given any pair of vertices u and v from Λ_{≤j}, either report that u and v lie in different connected components of G[Λ_{≤j}], or return a simple path P connecting u to v in G[Λ_{≤j}], of length O(|V(C)|·(γ(n))^{O(q)}/h_j) = Ô(|V(C)|/h_j), where C is the connected component of G[Λ_{≤j}] containing u and v.

• To-Core-Path(u): Given any vertex u, return a simple path P = (u = u_1, . . . , u_z = v) of length O(log n) from u to a vertex v that lies in some core in ⋃_j F_j. Moreover, path P must visit the layers in a non-decreasing order, that is, if u_i ∈ Λ_j, then u_{i+1} ∈ Λ_{≤j}.

• Short-Core-Path(K, u, v): Given any pair of vertices u and v, both of which lie in some core K ∈ ⋃_j F_j, return a simple u-v path P in K of length at most (γ(n))^{O(q)} = Ô(1).

We now formally state one of our main technical results: an algorithm for maintaining the LCD data structure under edge deletions.
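To make the query contract concrete, the following toy sketch (illustrative names only; a naive BFS on a static graph stands in for the real dynamic machinery, so it returns exactly-shortest rather than approximately-short paths) shows the shape of the Short-Path interface:

```python
from collections import deque

class StaticShortPathOracle:
    """Toy stand-in for the Short-Path query of the LCD interface.

    The real data structure answers such queries on G[Λ_{<=j}] while the
    graph undergoes edge deletions; this sketch only illustrates the
    contract: either report that u and v are disconnected (here, None),
    or return a simple u-v path."""

    def __init__(self, adj):
        self.adj = adj  # vertex -> set of neighbors

    def short_path(self, u, v):
        """Return a simple u-v path as a list of vertices, or None if
        u and v lie in different connected components."""
        parent = {u: None}
        queue = deque([u])
        while queue:
            x = queue.popleft()
            if x == v:
                path = [v]
                while parent[path[-1]] is not None:
                    path.append(parent[path[-1]])
                return path[::-1]
            for y in self.adj[x]:
                if y not in parent:
                    parent[y] = x
                    queue.append(y)
        return None
```

The actual guarantee differs in two ways: the returned path only needs length Ô(|V(C)|/h_j) rather than being shortest, and the query time is proportional to the length of the returned path rather than to the component size.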
Theorem 3.4 (Layered Core Decomposition)
There is a deterministic algorithm that, given a simple unweighted n-vertex m-edge graph G = (V, E) undergoing edge deletions, and parameters ∆ ≥ 2 and 1 ≤ q ≤ o(log^{1/4} n), maintains a partition (Λ_1, . . . , Λ_{r+1}) of V into layers, where, for all 1 ≤ j ≤ r + 1, each vertex in Λ_j has virtual degree h_j. Additionally, for each layer Λ_j, the algorithm maintains a collection F_j of vertex-disjoint subgraphs of the graph H_j = G[Λ_j], called cores. The algorithm supports queries Short-Path(j, u, v) in time O(log n) if u and v lie in different connected components of G[Λ_{≤j}], and in time O(|P|·(γ(n))^{O(q)}) = Ô(|P|) otherwise, where P is the u-v path returned. Additionally, it supports queries To-Core-Path(u) with query time O(|P|), where P is the returned path, and Short-Core-Path(K, u, v) with query time (γ(n))^{O(q)} = Ô(1). For all 1 ≤ j ≤ r + 1, once a core K is added to F_j for the first time, it only undergoes edge- and vertex-deletions, until K = ∅ holds. The total number of cores ever added to F_j throughout the algorithm is at most Ô(n∆/h_j). The total update time of the algorithm is Ô(m^{1+1/q}·∆^{1+1/q}·(γ(n))^{O(q)}) = Ô(m^{1+1/q}·∆^{1+1/q}).

For intuition, it is convenient to set the parameters ∆ = 2 and q = log^{1/8} n, which is also the setting that we use in algorithms for SSSP and for
APSP in the large-distance regime. For this setting, (γ(n))^{O(q)} = Ô(1), and the total update time of the algorithm is Ô(m).

Optimality.
The guarantees of the
LCD data structure from Theorem 3.4 are close to optimal in several respects. First, the total update time of Ô(m) and the query time for Short-Core-Path and
To-Core-Path are clearly optimal, to within a factor that is subpolynomial in n. The length of the path returned by Short-Path queries is almost optimal, in the sense that there can exist a path P in a connected component C of G[Λ_{≤j}] whose length is Ω(|V(C)|/h_j); the query time of Ô(|P|) is almost optimal as well. The bound on the total number of cores ever created in Λ_j is also near-optimal. This is because, even in the static setting, there exist graphs with minimum degree h_j that require Ω̃(n/h_j) cores in order to guarantee the desired properties of a core decomposition.

Comparison with the Algorithm of [CK19] and Summary of Main Challenges
The data structure from [CK19] supports the same set of queries, but has several significant drawbacks compared to the results of Theorem 3.4. First, the algorithm of [CK19] is randomized. Moreover, it can only handle vertex deletions, as opposed to the more general and classical setting of edge deletions (which is also required in some applications to static flow and cut problems). Additionally, the total update time of the algorithm of [CK19] is Ô(n²), as opposed to the almost-linear running time of Ô(m) of our algorithm. For every index j, the total number of cores ever created in Λ_j can be as large as Ô(n²/h_j²) in the algorithm of [CK19], as opposed to the bound of Ô(n/h_j) that we obtain; this bound directly affects the running time of our algorithm for APSP. Lastly, the query time for
Short-Path(j, u, v) is only guaranteed to be bounded by Ô(|V(C)|) in [CK19], where C is the connected component of G[Λ_{≤j}] to which u and v belong, as opposed to our query time of Ô(|P|), where P is the u-v path returned. This faster query time is essential in order to obtain the desired query time of Ô(|P|) in our algorithms for SSSP and
APSP. Next, we describe some of the challenges to achieving these improvements, and also sketch some ideas that allowed us to overcome them.
Vertex deletions versus edge deletions.
The algorithm of [CK19] maintains, for every index 1 ≤ j ≤ r, a variation of the core decomposition (that is based on vertex expansion) in graph G[Λ_j]. This decomposition can be computed in almost-linear time Ô(|E(Λ_j)|) = Ô(nh_j), which is close to the best time one can hope for, creating an initial set F_j of at most Ô(n/h_j) cores. Since every core K ∈ F_j has vertex degrees at least h_j/n^{o(1)}, the decomposition can withstand up to h_j/(2n^{o(1)}) vertex deletions, while maintaining all its crucial properties. However, after h_j/(2n^{o(1)}) vertex deletions, some cores may become disconnected, and the core decomposition structure may no longer retain the desired properties. Therefore, after every batch of roughly h_j/(2n^{o(1)}) vertex deletions, the algorithm of [CK19] recomputes the core decomposition F_j from scratch. Since there may be at most n vertex-deletion operations throughout the algorithm, the core decomposition F_j only needs to be recomputed at most Ô(n/h_j) times throughout the algorithm, leading to the total update time of Ô(n/h_j) · Ô(|E(Λ_j)|) = Ô(n²). The total number of cores that are ever added to F_j over the course of the algorithm is then bounded by Ô(n/h_j) · Ô(n/h_j) = Ô(n²/h_j²).

Consider now the edge-deletion setting. Even if we are willing to allow a total update time of Ô(n²), we cannot hope to perform a single computation of the decomposition F_j in time faster than linear in |E(Λ_j)|, that is, O(nh_j). Therefore, we can only afford at most O(n/h_j) such re-computations over the course of the algorithm. Since the total number of edges in graph G[Λ_j] may be as large as Θ(nh_j), our core decomposition must be able to withstand up to h_j² edge deletions. However, even after just h_j edge deletions, some vertices of Λ_j may become disconnected in graph G[Λ_{≤j}], and some of the cores may become disconnected as well.
In order to overcome this difficulty, we first observe that it takes h_j/n^{o(1)} edge deletions before a vertex in Λ_j becomes "useless", which roughly means that it is not well-connected to other vertices in Λ_j. Similarly to the algorithm of [CK19], we would now like to recompute the core decomposition F_j only after h_j/(2n^{o(1)}) vertices of Λ_j become useless, which roughly corresponds to h_j²/n^{o(1)} edge deletions. Additionally, we employ the expander pruning technique from [SW19] in order to maintain the cores, so that they can withstand this significant number of edge deletions. As in [CK19], this approach can lead to Ô(n²) total update time, ensuring that the total number of cores that are ever added to set F_j is at most Ô(n²/h_j²).

Obtaining faster total update time and fewer cores.
Even with the modifications described above, the resulting total update time is only Ô(n²), while our desired update time is near-linear in m. It is not hard to see that recomputing the whole decomposition F_j from scratch every time is too expensive, and with the Ô(m) total update time we may only afford to do so at most Ô(1) times. In order to overcome this difficulty, we further partition each layer Λ_j into sublayers Λ_{j,1}, Λ_{j,2}, . . . , Λ_{j,L_j}, whose sizes are geometrically decreasing (that is, |Λ_{j,ℓ}| ≈ |Λ_{j,ℓ−1}|/2). The core decompositions F_{j,ℓ} will be computed in each sublayer separately, and the final core decomposition for layer j that the algorithm maintains is F_j = ⋃_ℓ F_{j,ℓ}. In general, we guarantee that, for each ℓ, |Λ_{j,ℓ}| ≤ n_{≤j}/2^{ℓ−1} always holds, and we recompute the core decomposition F_{j,ℓ} for sublayer Λ_{j,ℓ} at most Ô(2^ℓ) times. We use Observation 3.3 to show that |E(Λ_{j,ℓ})| ≤ h_j·∆·n_{≤j}/2^{ℓ−1} = O(m∆/2^{ℓ−1}) must hold. Therefore, the total time for computing core decompositions within each sublayer is Ô(m). As there are O(log n) sublayers within a layer, the total time for computing the decompositions over all layers is Ô(m). This general idea is quite challenging to carry out, since, in contrast to layers Λ_1, . . . , Λ_{r+1}, where vertices may only move from higher to lower layers throughout the algorithm, the vertices of a single layer can move between its sublayers in a non-monotone fashion. One of the main challenges in the design of the algorithm is to design a mechanism for allowing the vertices to move between the sublayers, so that the number of such moves is relatively small.

Improving query times.
The algorithm of [CK19] supports
Short-Core-Path(K, u, v) queries, that need to return a short path inside the core K, connecting the pair u, v of its vertices, in time Õ(|V(K)|) + Ô(1), returning a path of length Ô(1); in contrast, our algorithm takes time Ô(1). The query time of Short-Core-Path(K, u, v) in turn directly influences the query time of
Short-Path(u, v) queries, which in turn is critical to the final query time that we obtain for SSSP and
APSP problems. Another way to view the problem of supporting
Short-Core-Path(K, u, v) queries is the following: suppose we are given an expander graph K that undergoes edge- and vertex-deletions (in batches). We are guaranteed that, after each batch of such updates, the remaining graph K is still an expander, and so every pair of vertices in K has a path of length O(poly log n) connecting them. The goal is to support "short-path" queries: given a pair u, v of vertices of K, return a path of length Ô(1) connecting them. The problem seems interesting in its own right, and, for example, it plays an important role in the recent fast deterministic approximation algorithm for the sparsest cut problem of [CGL+19]. The algorithm of [CK19], in order to respond to a Short-Core-Path(K, u, v) query, simply performs a breadth-first search in the core K to find the required u-v path, leading to the high query time. Instead, we develop a more efficient algorithm for supporting short-path queries in expander graphs, that is similar in spirit and in techniques to the algorithm of [CGL+19].

For Short-Path(u, v) queries, the guarantees of [CK19] are similar to our guarantees in terms of the length of the path returned, but their query processing time is too high, and may be as large as Ω̃(n) in the worst case. We improve the query time to Ô(|P|), where P is the returned path, which is close to the best possible bound. This improvement is necessary in order to obtain faster algorithms for several applications to cut and flow problems that we discuss. The improvement is achieved by exploiting the improved data structure that supports Short-Core-Path queries within the cores, and by employing a minimum spanning tree data structure on top of the core decomposition, instead of using dynamic connectivity as in the algorithm of [CK19].
Randomized versus Deterministic Algorithm.
While the algorithm of [CK19] works against an adaptive adversary, it is a randomized algorithm. The two main randomized components of the algorithm are: (i) an algorithm to compute a core decomposition; and (ii) a data structure that supports
Short-Core-Path(K, u, v) queries within each core. For the first component, we exploit the recent fast deterministic algorithm for the Balanced Cut problem of [CGL+19]. For the second component, we use our new deterministic short-path oracle on decremental expanders in order to support Short-Core-Path(K, u, v) queries within the cores. These changes lead to a deterministic algorithm for the
LCD data structure.
Using the LCD Data Structure for SSSP and APSP
With our improved implementation of the
LCD data structure, using the same approach as that of[CK19], we immediately obtain the desired algorithm for
SSSP , proving Theorem 1.1.Our algorithm for
APSP in the large-distances regime exploits the
LCD data structure in a similarway as in the algorithm for
SSSP : We use the
LCD data structure in order to “compress” the denseparts of the graph. In the sparse part, instead of maintaining a single
ES-Tree , as in the algorithmfor
SSSP , we maintain the deterministic tree cover of [GWN20] (which simplifies the moving
ES-Tree data structure of [FHN16]).Our algorithm for
APSP in the small-distances regime uses a tree cover approach, similar to previouswork [BR11, FHN14a, FHN16, Che18]. The key difference is that we root each
ES-Tree at one of thecores maintained by the
LCD data structure (recall that each core is a high-degree expander), instead of rooting it at a random vertex.

The remainder of this section is dedicated to the proof of Theorem 3.4. However, the statement of this theorem is sufficient in order to obtain our results for
SSSP and
APSP, that are discussed in Section 4 and Section 5, respectively.
Implementation of Layered Core Decomposition
In the remainder of this section, we provide the proof of Theorem 3.4 by showing an implementation of the
LCD data structure, which is the central technical tool of this paper. We start by observing that all layers Λ_1, . . . , Λ_{r+1} can be maintained in near-linear time:

Observation 3.5
There is a deterministic algorithm, that, given an n-vertex m-edge graph G undergoing edge deletions and a parameter ∆ ≥ 2, maintains the partition (Λ_1, . . . , Λ_{r+1}) of V(G) into layers. Additionally, for every vertex v ∈ V and index 1 ≤ j ≤ r + 1, the algorithm maintains a list of all neighbors of v that belong to Λ_j. The total update time of the algorithm is Õ(m + n).

Proof:
We maintain the partition (Λ_1, . . . , Λ_{r+1}) of V(G) into layers, as graph G undergoes edge deletions, as follows. We run Alg-Maintain-Pruned-Set(G, h_j) for maintaining the vertex set A_j, for all 1 ≤ j ≤ r, in parallel. Whenever a vertex v ∈ Λ_j is deleted from A_j by this algorithm, we update its layer accordingly. It is easy to verify that the total update time for maintaining the partition of V(G) into layers is O((|E(G)| + |V(G)|)·r) = Õ(m + n).

Additionally, for every vertex v ∈ V(G) and index 1 ≤ j ≤ r + 1, we maintain a list of all neighbors of v that lie in Λ_j. In order to maintain these lists, whenever a vertex u ∈ Λ_i is removed from set A_i (so that u moves to a lower layer), we inspect the lists of all neighbors of u, and, for each such neighbor v, we move u to the list of neighbors of v corresponding to the new layer of u. Therefore, whenever the virtual degree of a vertex u decreases, we spend O(deg_G(u)) time to update the lists of its neighbors. As virtual degrees can decrease at most r times for every vertex, the total update time for initializing and maintaining these lists is O((|E(G)| + |V(G)|)·r) = Õ(m + n).

The remainder of the section is organized as follows. First, we list some known tools related to expanders in Section 3.1, and then, in Section 3.2, we provide a new tool, called a short-path oracle on decremental expanders, that will be useful for Short-Core-Path queries. One of our key ideas is to further partition each layer Λ_j into sublayers Λ_{j,1}, . . . , Λ_{j,L_j}. We describe the structure of the sublayers and the invariants that we maintain for each sublayer in Section 3.3. For every sublayer Λ_{j,ℓ}, the execution of the algorithm is partitioned into phases with respect to that sublayer, that we refer to as (j, ℓ)-phases. At the beginning of each (j, ℓ)-phase, we compute a core decomposition of graph G[Λ_{j,ℓ}], and obtain a collection F_{j,ℓ} of cores for the sublayer Λ_{j,ℓ}.
Section 3.4 describes the algorithm for computing the core decompositions. The description of an algorithm that we use to maintain each core and to support Short-Core-Path queries on each core is shown in Section 3.5. During each (j, ℓ)-phase, vertices can move between the sublayers of layer j in a non-monotone manner (in contrast to the fact that every vertex can only move from higher to lower layers). We describe how vertices are moved between sublayers in Section 3.6, and state the key technical lemma that bounds the total number of times vertices may move between sublayers. We then bound the total number of cores ever created by the algorithm in Section 3.7; this bound is crucial for our LCD data structure. In Section 3.9, we show an algorithm for processing
To-Core-Path queries. We provide additional technical details for maintainingall necessary data structures in Section 3.10 and Section 3.11. Finally, we describe the algorithm forresponding to
Short-Path queries in Section 3.12.

3.1 Known Expander-Related Tools
In this subsection, we describe several expander-related tools that our algorithm uses; they mostly follow from previous work.
Expander Decomposition.
The following theorem can be obtained immediately from the recent deterministic algorithm of [CGL+19] for computing a (standard) expander decomposition in almost-linear time; the proof is deferred to Appendix B.3. As before, we denote γ(n) = exp(O(log^{3/4} n)) = n^{o(1)}.

Theorem 3.6
There is a deterministic algorithm, that, given a connected graph G = (V, E) with n vertices and m edges, and a parameter 0 < ϕ ≤ 1, computes a partition of V into disjoint subsets V_1, . . . , V_k, such that ∑_{i=1}^{k} δ(V_i) ≤ γ(m)·ϕm, and, for all 1 ≤ i ≤ k, G[V_i] is a strong ϕ-expander with respect to G. The running time of the algorithm is Ô(m).

Expander Pruning.
In the following theorem, we consider the setting where we are given as input a graph G = (V, E), with |E| = m. Intuitively, we hope that G is a ϕ-expander, for some 0 < ϕ ≤ 1; we assume that graph G is represented by its adjacency list. We also assume that we are given an input sequence σ = (e_1, e_2, . . . , e_k) of online edge deletions, and we denote by G_i the graph G at time i, that is, G_0 is the original graph G, and, for all 1 ≤ i ≤ k, G_i = G \ {e_1, . . . , e_i}. Our goal is to maintain a set S ⊆ V of vertices, such that, intuitively, if we denote by S_i the set S at time i (that is, after the deletion of the first i edges from G), then G[V \ S_i] is large enough compared to G. Moreover, if G was a ϕ-expander, then, for all i, G[V \ S_i] remains a strong enough expander. We also require that the set S is incremental, that is, S_{i−1} ⊆ S_i for all i. The following theorem, proved in [SW19], allows us to do so.

Theorem 3.7 (Restatement of Theorem 1.3 in [SW19])
There is a deterministic algorithm, that, given access to the adjacency list of a graph G = (V, E) with |E| = m, a parameter 0 < ϕ ≤ 1, and a sequence σ = (e_1, e_2, . . . , e_k) of k ≤ ϕm/10 online edge deletions, maintains a vertex set S ⊆ V with the following properties. Let G_i be the graph G after the edges e_1, . . . , e_i have been deleted from it; let S_0 = ∅ be the set S at the beginning of the algorithm, and, for all 0 < i ≤ k, let S_i be the set S after the deletion of e_1, . . . , e_i. Then, for all 0 < i ≤ k:

• S_{i−1} ⊆ S_i;
• vol_G(S_i) ≤ 8i/ϕ;
• |E(S_i, V \ S_i)| ≤ 4i; and
• if G is a ϕ-expander, then G_i[V \ S_i] is a strong ϕ/6-expander with respect to G.

The total running time of the algorithm is O(k log m/ϕ).

Embeddings.
Let
G, W be two graphs with V(W) ⊆ V(G). A set P of paths in G is called an embedding of W into G if, for every edge e = (u, v) ∈ E(W), there is a path path(u, v) ∈ P, such that path(u, v) is a u-v path in G. We say that the length of the embedding P is l if the length of every path in P is at most l, and we say that the congestion of the embedding is η if every edge of G participates in at most η paths in P. If embedding P has length l and congestion η, then we sometimes call it an (l, η)-embedding, and we say that W (l, η)-embeds into G.

The following algorithm allows us to quickly embed a smaller expander into a given expander; the proof appears in Appendix B.4. Recall that we denoted γ(n) = exp(O(log^{3/4} n)).

Theorem 3.8
There is a deterministic algorithm that, given an n-vertex m-edge graph G which is a ϕ-expander, and a terminal set T ⊆ V(G), computes a graph W with V(W) = T and maximum vertex degree O(log |T|), such that W is a (1/γ(|T|))-expander. The algorithm also computes an (l, η)-embedding P of W into G, with l = O(ϕ^{−1}·log m) and η = O(ϕ^{−1}·log m). The running time of the algorithm is Õ(m·γ(|T|)/ϕ).

Based on known expander-related tools from the previous section, we provide a new tool, that we call a short-path oracle on decremental expanders. This will be a key tool for
Short-Core-Path queries. We believe that the techniques used in this section are of independent interest, as they are quite generic. In fact, they have already been subsequently generalized to directed graphs in [BGS20]. We fix the following parameters that will be used throughout this section:

γ = γ(m) = exp(O(log^{3/4} m)) = n^{o(1)};  (1)

and

ϕ = 1/(c·γ),  (2)

where c is a large enough constant.

Below, we say that a vertex set S is incremental if vertices in S can never leave S as time progresses.

Theorem 3.9
There is a deterministic algorithm that, given an m-edge n-vertex ϕ-expander G undergoing a sequence of at most ϕ|E(G)|/10 edge deletions, and a parameter q ≥ 1, maintains an incremental vertex set S ⊆ V(G), such that, if we denote by G^{(0)} the graph G before any edge deletions, then, for every t > 0, after t edges are deleted from G, vol_{G^{(0)}}(S) ≤ O(t/ϕ) holds, and G \ S is a strong ϕ/6-expander with respect to G^{(0)}. The algorithm also supports the following query: given a pair of vertices u, v ∈ V(G) \ S, return a simple u-v path P in G \ S of length at most (γ(m))^{O(q)}, with query time (γ(m))^{O(q)}. The total update time of the algorithm is O(m^{1+1/q}·(γ(n))^{O(q)}).

The remainder of this section is dedicated to proving Theorem 3.9.

Throughout the algorithm, m denotes the number of edges in the original ϕ-expander graph G, and the parameter ϕ = 1/(c·γ(m)) remains unchanged. As our main tools, we employ the expander pruning algorithm from Theorem 3.7, and the algorithm from Theorem 3.8, that allows us to embed a smaller expander into a given expander. We use parameters l = O(log m/ϕ) and η = O(log m/ϕ). The idea of the algorithm is to maintain a hierarchy of expander graphs G_1, . . . , G_q, where, for all 1 ≤ i < q, graph G_i contains ⌈m^{i/q}⌉ vertices and is a ϕ/6-expander, and G_q = G. We will also maintain an (l, η)-embedding P_i of each such graph G_i into graph G_{i+1}. Initially, for all 1 ≤ i < q, both the graph G_i and its embedding into G_{i+1} are computed using Theorem 3.8. Additionally, we maintain an ES-Tree in graph G_{i+1}, rooted at the vertex set V(G_i), with distance threshold O(log n/ϕ). For every edge e ∈ E(G_i), we will maintain a list J_i(e) of all edges e′ ∈ E(G_{i−1}), such that the embedding of e′ in P_{i−1} contains the edge e; recall that |J_i(e)| ≤ η must hold. Whenever edge e is deleted from graph G_i, this will trigger the deletion of all edges in its list J_i(e) from graph G_{i−1}. Lastly, we use the expander pruning algorithm from Theorem 3.7 in order to maintain, for each graph G_i, the set S_i of "pruned-out" vertices. When set S_i becomes too large, we re-initialize the graphs G_i, G_{i−1}, . . . , G_1.

Algorithm 1 InitializeExpander(i)

Assertion: G_i is a ϕ/6-expander.

1. If i = 1, then initialize an ES-Tree T_1 in G_1, rooted at an arbitrary vertex, with distance threshold O(ϕ^{−1} log n); return.
2. If i = q, then let X_q be an arbitrary subset of the set {v_e | e ∈ E(G)} of vertices of G′_q, of cardinality ⌈m^{(q−1)/q}⌉; otherwise, set G′_i = G_i, and let X_i be any subset of V(G′_i) of cardinality ⌈m^{(i−1)/q}⌉.
3. Using the algorithm from Theorem 3.8, compute an expander G_{i−1} over vertex set X_i, and its (l, η)-embedding P_{i−1} into G′_i.
4. For every edge e ∈ E(G_i), initialize a list J_i(e) of all edges of G_{i−1} whose embedding path in P_{i−1} contains e.
5. Initialize the expander pruning algorithm from Theorem 3.7 on G_{i−1}, that will maintain a pruned vertex set S_{i−1} ⊆ V(G_{i−1}).
6. Initialize an ES-Tree T_i in G′_i, rooted at X_i, with distance threshold O(ϕ^{−1} log n).
7. Call InitializeExpander(i − 1).

The outcome of the algorithm is the vertex set S = S_q, the pruned-out set that we maintain for the expander G_q = G. Recall that G^{(0)} denotes the graph G before any edge deletions. Theorem 3.7 directly guarantees that G \ S is a strong ϕ/6-expander with respect to G^{(0)}, and that vol_{G^{(0)}}(S) ≤ O(t/ϕ) after t edge deletions, as desired.

We note that, since G_q may be a high-degree graph, it is convenient to define a new graph G′_q, that is obtained from G_q by subdividing every edge e of G_q = G with a new vertex v_e. We let X = {v_e | e ∈ E(G)}.
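The edge-deletion cascade driven by the lists J_i(e) can be sketched as follows (a toy model with illustrative names; the pruning of S_i, the re-initialization, and the ES-Trees are all omitted here):

```python
def cascade_delete(J, i, e, deleted):
    """Toy model of the J-list cascade: deleting edge e at level i removes,
    at level i - 1, every edge whose embedding path used e, and so on down
    the hierarchy.

    J[i][e] lists the level-(i-1) edges embedded through the level-i edge e
    (so len(J[i][e]) is at most the congestion eta of the embedding);
    deleted[i] collects the edges removed so far at level i."""
    if e in deleted[i]:
        return
    deleted[i].add(e)
    for e2 in J.get(i, {}).get(e, []):
        cascade_delete(J, i - 1, e2, deleted)
```

For instance, if a level-3 edge 'a' carries the embeddings of level-2 edges 'b1' and 'b2', and 'b1' in turn carries the level-1 edge 'c1', then deleting 'a' removes all four edges across the three levels. In the actual algorithm, each such removal additionally updates the pruned-out set of its level via Theorem 3.7, which may remove further edges.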
It is easy to verify that G′_q remains an Ω(ϕ)-expander. The initialization of the data structure is performed by the recursive procedure InitializeExpander(i) (Algorithm 1); the algorithm initializes the data structures for expander G_{i−1}, assuming that expander G_i is already defined, and then recursively calls InitializeExpander(i − 1). At the beginning of the algorithm, we call InitializeExpander(q). If, over the course of the algorithm, for some 1 ≤ i < q, the number of edges deleted from G_i exceeds ϕ|E(G_i)|/10, we will call InitializeExpander(i + 1), which recomputes the graphs G_i, G_{i−1}, . . . , G_1. Throughout, we denote by G^{(0)}_{i−1} the expander graph created by Procedure InitializeExpander(i). For all d > 0, we denote by G^{(d)}_{i−1} the graph that is obtained from G^{(0)}_{i−1} after d edge deletions. As d increases, our algorithm maintains the graph G_{i−1} = G^{(d)}_{i−1} \ S_{i−1}. By Theorem 3.7, as long as d ≤ ϕ|E(G^{(0)}_{i−1})|/10, graph G_{i−1} remains a strong (ϕ/6)-expander with respect to G^{(0)}_{i−1}.

Whenever an edge e is deleted from graph G, we call Algorithm Delete(q, e). The algorithm may recursively call procedure Delete(i, e′) for other expander graphs G_i and edges e′. The algorithm Delete(i, e′) is shown in Algorithm 2. We bound the total update time of the algorithm in the following lemma.

Algorithm 2 Delete(i, e), where e ∈ E(G_i)

1. If i = 1, delete e from graph G_1. Recompute an ES-Tree T_1 in graph G_1, up to depth O(log n/ϕ), rooted at any vertex; return.
2. Delete e from G_i. Update the pruned-out vertex set S_i using Theorem 3.7.
3. Let D^new_i denote the set of edges that were just removed from G_i. That is, D^new_i contains e and all edges incident to vertices that were newly added into S_i.
4. For each e ∈ D^new_i, for every edge e′ ∈ J_i(e), call Delete(i − 1, e′).
5. If the total number of edge deletions from G^{(0)}_i exceeds ϕ|E(G^{(0)}_i)|/10, call InitializeExpander(i + 1).

Lemma 3.10
The total update time of the algorithm is O(m^{1+1/q}·(γ(n))^{O(q)}).

Proof:
Fix an index 1 ≤ i ≤ q. We partition the execution of the algorithm into level-i stages, where each level-i stage starts when InitializeExpander(i + 1) is called, and terminates just before the subsequent call to InitializeExpander(i + 1). Recall that, over the course of a level-i stage, at most ϕ|E(G^{(0)}_i)|/
10 edges are deleted from the graph G (0) i . We now bound the running time that is neededin order to initialize and maintain the level- i data structure over the course of a single level- i stage.This includes the following: • Computing expander G i and its ( l, η )-embedding P i − into G ′ i +1 using the algorithm from The-orem 3.8; the running time is ˜ O ( | E ( G i +1 ) | · γ ( m ) /ϕ ) = O (cid:0) m ( i +1) /q · ( γ ( n )) O (1) (cid:1) . • Initializing the lists J i +1 ( e ) for edges e ∈ G i +1 : the time to initialize all such lists is bounded bythe time needed to compute the embedding P i , which is in turn bounded by O (cid:0) m ( i +1) /q · ( γ ( n )) O (1) (cid:1) . • Initializing and maintaining the
ES-Tree T i +1 : the running time is O ( | E ( G i +1 ) | · log n/ϕ ) ≤ O (cid:0) m ( i +1) /q · ( γ ( n )) O (1) (cid:1) . • Running the algorithm for expander pruning on the expander G i ; from Theorem 3.7, the runningtime is ˜ O ( | E ( G (0) i ) /ϕ ) ≤ O (cid:0) m i/q · ( γ ( n )) O (1) (cid:1) , since the number of edge deletions is bounded by ϕ | E ( G (0) i ) | / • The remaining work, executed by
Delete ( i, e ), for every edge e that is deleted from graph G i (including edges incident to the vertices of the pruned out set S i ), requires O ( η ) time per edge,with total time O ( | E ( G (0) i ) | · η ) ≤ O (cid:0) m i/q · ( γ ( n )) O (1) (cid:1) .Therefore, the total time that is needed in order to initialize and maintain the level- i data structureover the course of a single level- i stage is O (cid:0) m ( i +1) /q · ( γ ( n )) O (1) (cid:1) Note that we did not include inthis running time the time required for maintaining level-( i −
1) data structures, that is, calls to
InitializeExpander ( i ) and Delete ( i − , e ).Next, we bound the total number of level- i stages. Consider some index 1 < i ′ ≤ q , and considera single level- i ′ stage. Recall that, over the course of this stage, at most d i ′ = ϕ | E ( G (0) i ′ ) | /
10 edgedeletions from graph G (0) i ′ may happen. From Theorem 3.7, over the course of the level- i ′ stage, thetotal volume of edges that are incident to the pruned-out vertices in S i is bounded O ( d i /ϕ ). As16 lgorithm 3 Query ( i, u, v ) where u, v ∈ V ( G i )1. If i = 1, return the unique u - v path in T .2. Compute, in T i , a unique path Q uu ′ connecting u to some vertex u ′ ∈ X i , and a unique path Q v ′ v connecting v to some vertex v ′ ∈ X i to v .3. If v ′ = u ′ , set R u ′ v ′ = ∅ ; otherwise set R u ′ v ′ = Query ( i − , u ′ , v ′ ).4. Let Q u ′ v ′ be a path in G i obtained by concatenating, for all edges e ′ ∈ R u ′ v ′ , the correspondingpath P ( e ′ ) ∈ P i from the embedding of G i − into G ′ i .5. Return Q uv = Q uu ′ ◦ Q u ′ v ′ ◦ Q v ′ v . (Note that for i = q , Q u,v is a path in graph G ′ q , that wasobtained from G q by subdividing its edges; it is immediate to turn it into a corresponding pathin G q .)the embedding P i ′ of G i ′ − into G ′ i has congestion at most η , this can cause at most O ( ηd i /ϕ ) edgedeletions from graph G (0) i ′ − . As a single level-( i ′ −
1) stage requires ϕ | E ( G (0) i ′ − ) | /
10 edge deletions from G (0) i ′ − , the number of level-( i ′ −
1) stages that are contained in a single level- i ′ stage is bounded by: O ( d i ′ · η/ϕ ) ϕ | E ( G (0) i ′ − ) | / ≤ O ( | E ( G (0) i ′ ) | · log m ) ϕ · | E ( G (0) i ′ − ) | ≤ O ( m /q · ( γ ( n )) O (1) ) . Since we only need to support at most ϕ | E ( G ) | /
10 edge deletions from the original graph G , there isonly a single level- q stage. Therefore, for every 1 ≤ i < q , the total number of level- i stages is boundedby: O ( m ( q − i ) /q · ( γ ( n )) O ( q − i ) ). We conclude that the total amount of time required for maintaininglevel- i data structure is bounded by: O (cid:16) m ( i +1) /q · ( γ ( n )) O (1) (cid:17) · O (cid:16) m ( q − i ) /q · ( γ ( n )) O ( q − i ) (cid:17) ≤ O (cid:16) m /q · ( γ ( n )) O ( q − i ) (cid:17) . Summing this up over all 1 ≤ i ≤ q , we get that the total update time of the algorithm is O (cid:0) m /q · ( γ ( n )) O ( q ) (cid:1) ,as required.Next, we provide an algorithm for responding to the short-path query between a pair u, v of ver-tices. The algorithm calls Query ( q, u, v ), that is described in Algorithm 3, which recursively calls Query ( i, u ′ , v ′ ) for i < q . The idea of the algorithm is simple: we use the ES-Tree T q in graph G q inorder to compute two paths: one path connecting u to some vertex u ′ ∈ X q , and one path connecting v to some vertex v ′ ∈ X q , and then recursively call the short-path query for the pair u ′ , v ′ of verticesin the expander G q − ; we then use the embedding P q of G q − into G q to convert the resulting pathinto a u ′ - v ′ path in G q . The final path connecting u to v is obtained by concatenating the resultingthree paths.The following lemma summarizes the guarantees of the algorithm for processing short-path queries. Lemma 3.11
Given any pair of vertices u, v ∈ V(G) ∖ S, algorithm Query(q, u, v) returns a (possibly non-simple) u-v path Q in G ∖ S, of length (γ(n))^{O(q)}, in time O(|Q|).

Proof: Let Len(i) denote the maximum length of a path in G_i returned by Query(i, u, v). As G_i is always an Ω(ϕ)-expander, the diameter of G_i is O(ϕ^{−1} log n), and so the ES-Tree T_i spans graph G′_i. Consider Algorithm 3. Let Q_{uu′} and Q_{v′v} be the path in G′_i from u to u′ ∈ X_{i−1}, and the path in G′_i from v′ ∈ X_{i−1} to v, respectively. As T_i spans G′_i, the paths Q_{uu′} and Q_{v′v} do exist. Let R_{u′v′} = Query(i−1, u′, v′), where |R_{u′v′}| ≤ Len(i−1), and let Q_{u′v′} be obtained by concatenating path(e′) for each e′ ∈ R_{u′v′}, where path(e′) ∈ P_i is the corresponding embedding path of e′. We have |Q_{u′v′}| ≤ ℓ · |R_{u′v′}|. It is clear that the concatenation Q_{uu′} ◦ Q_{u′v′} ◦ Q_{v′v} is indeed a u-v path in G′_i, and hence in G_i. The length of this path is at most:

Len(i) = O(ϕ^{−1} log n) + O(ϕ^{−1} log n) · Len(i − 1).

Solving the recursion gives us Len(i) = (γ(n))^{O(i)}. So Query(q, u, v) returns a u-v path of length (γ(n))^{O(q)}. Observe that the query time is proportional to the number of edges on the returned path. Lastly, observe that the path Q connecting the given pair u, v of vertices, that is returned by algorithm Query(q, u, v), may not be simple. It is easy to turn Q into a simple path Q′, in time O(|Q|), by removing all cycles from Q. The final path Q′ is guaranteed to be simple, of length (γ(n))^{O(q)}, and the query time is bounded by (γ(n))^{O(q)}, as required.

In this subsection we focus on a single layer Λ_j, for some 1 ≤ j ≤ r. Recall that we have denoted by n_{≤j} the number of vertices that belonged to set Λ_{≤j} at the beginning of the algorithm, before any edges were deleted from the input graph. We let L_j be the smallest integer ℓ such that n_{≤j}/2^{ℓ−1} ≤ h_j; clearly, L_j ≤ log n. We further partition the vertex set Λ_j into subsets Λ_{j,1}, Λ_{j,2}, …, Λ_{j,L_j}. We refer to each resulting vertex set Λ_{j,ℓ} as sublayer (j, ℓ). For indices 1 ≤ ℓ < ℓ′ ≤ L_j, we say that sublayer Λ_{j,ℓ} is above sublayer Λ_{j,ℓ′}.
The last sublayer Λ_{j,L_j}, which we also denote for convenience by Λ⁻_j, is called the buffer sublayer. For convenience, we also use the shorthand Λ_{j,≤ℓ} = ⋃_{ℓ′=1}^{ℓ} Λ_{j,ℓ′}, and we define Λ_{j,<ℓ}, Λ_{j,≥ℓ}, Λ_{j,>ℓ} similarly.

We will ensure that, throughout the algorithm, the following invariant always holds:

I1. For all 1 ≤ ℓ ≤ L_j, |Λ_{j,≥ℓ}| ≤ n_{≤j}/2^{ℓ−1}.

At the beginning of the algorithm, we set Λ_{j,1} = Λ_j and Λ_{j,ℓ′} = ∅ for all 1 < ℓ′ ≤ L_j. Consider now some sublayer Λ_{j,ℓ}, for ℓ < L_j. We partition the execution of our algorithm into phases with respect to this sublayer, which we refer to as level-(j, ℓ) phases, or (j, ℓ)-phases. Whenever Invariant I1 for the next sublayer Λ_{j,ℓ+1} is violated (that is, |Λ_{j,≥ℓ+1}| exceeds n_{≤j}/2^ℓ), we terminate the current (j, ℓ)-phase and start a new phase.

Consider now some time t during the execution of the algorithm, when, for some 1 ≤ ℓ < L_j, a (j, ℓ)-phase is terminated. Let ℓ′ be the smallest index for which the (j, ℓ′)-phase is terminated at time t. We then set Λ_{j,ℓ′} = Λ_{j,≥ℓ′}, and, for all ℓ′ < ℓ′′ ≤ L_j, we set Λ_{j,ℓ′′} = ∅.

Throughout the algorithm, for every vertex u, we denote by deg_{≤(j,ℓ)}(u) = |E(u, Λ_{≤j−1} ∪ Λ_{j,≤ℓ})| the number of edges connecting u to the vertices of the first j−1 layers and of the first ℓ sublayers of layer j. The following observation is now immediate.

Observation 3.12 For all 1 ≤ ℓ < L_j, for every vertex u, at the beginning of each (j, ℓ)-phase, deg_{≤(j,ℓ)}(u) = deg_{≤j}(u).

Let H_{j,ℓ} = G[Λ_{j,ℓ}] be the subgraph of G induced by the vertices of the sublayer Λ_{j,ℓ}. We refer to H_{j,ℓ} as the level-(j, ℓ) graph. We will also ensure that throughout the algorithm the following invariant holds:

I2. For all 1 ≤ ℓ < L_j, for each level-(j, ℓ) phase, graph H_{j,ℓ} only undergoes deletions of edges and vertices over the course of the phase.

Therefore, we say that graph H_{j,ℓ} is decremental within each (j, ℓ)-phase. Note that the graph H_{j,L_j} that corresponds to the buffer sublayer Λ⁻_j may undergo both insertions and deletions of edges and vertices. As time progresses, some vertices v whose virtual degree was greater than h_j may have their virtual degree decrease to h_j. In order to preserve the above invariant, we always insert such vertices v into the buffer sublayer Λ⁻_j = Λ_{j,L_j}; additional vertices may also be moved from higher sublayers Λ_{j,ℓ} to the buffer sublayer over the course of a (j, ℓ)-phase.

Consider now some non-buffer sublayer Λ_{j,ℓ}, with ℓ < L_j. At the beginning of every (j, ℓ)-phase, if Λ_{j,ℓ} ≠ ∅, we compute a core decomposition of the graph H_{j,ℓ}. This is one of the key subroutines in our LCD data structure. The following theorem provides the algorithm for computing the core decomposition of a sublayer.
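The proof of the theorem below repeatedly "trims" vertices whose degree inside the current graph falls below a fraction of their original degree. A self-contained toy sketch of that trimming loop follows; the function, the adjacency-list encoding, and the per-vertex thresholds are illustrative stand-ins (threshold[v] plays the role of deg_{≤(j,ℓ)}(v)/12), and the expander decomposition of the surviving graph (Theorem 3.6) is not implemented here:

```python
from collections import deque

def trim(adj, threshold):
    """Iteratively delete every vertex whose degree among the still-active
    vertices falls below threshold[v], recording for each deleted vertex the
    number of active neighbors it had at deletion time. Returns the set of
    surviving vertices and the list of (vertex, count) deletions, in order."""
    active = set(adj)
    deg = {v: len(adj[v]) for v in adj}
    queue = deque(v for v in adj if deg[v] < threshold[v])
    deletions = []  # deletion order; each recorded count is < threshold[v]
    while queue:
        v = queue.popleft()
        if v not in active or deg[v] >= threshold[v]:
            continue
        active.remove(v)
        deletions.append((v, deg[v]))
        for w in adj[v]:
            if w in active:
                deg[w] -= 1
                if deg[w] < threshold[w]:
                    queue.append(w)
    return active, deletions
```

The survivors are the input handed to the expander-decomposition step, while the deletion order corresponds to the topological ordering of the DAG D_{j,ℓ} obtained by orienting every edge, at deletion time, towards its deleted endpoint; the recorded count for each deleted vertex stays below its threshold, matching the in-degree bound in the theorem.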
Theorem 3.13 (Core Decomposition of Sublayer)
There is a deterministic algorithm that, given a level-(j, ℓ) graph H_{j,ℓ} = G[Λ_{j,ℓ}], computes a collection F_{j,ℓ} of vertex-disjoint subgraphs of H_{j,ℓ}, called cores, such that each core K ∈ F_{j,ℓ} is a ϕ-expander, and, for every vertex u ∈ V(K), deg_K(u) ≥ ϕ · deg_{≤(j,ℓ)}(u)/12. Moreover, if we denote by U_{j,ℓ} = Λ_{j,ℓ} ∖ (⋃_{K∈F_{j,ℓ}} V(K)) the set of all vertices of Λ_{j,ℓ} that do not belong to any core, then there is an orientation of the edges of the graph G[U_{j,ℓ}], such that the resulting directed graph D_{j,ℓ} is a directed acyclic graph (DAG), and, for every vertex u ∈ U_{j,ℓ}, in-deg_{D_{j,ℓ}}(u) ≤ deg_{≤(j,ℓ)}(u)/12. The running time of the algorithm is Ô(|E(H_{j,ℓ})|).

Proof:
We use the following lemma, whose proof follows easily from Theorem 3.6.
Lemma 3.14
There is a deterministic algorithm that, given a subgraph H′_{j,ℓ} ⊆ H_{j,ℓ} such that every vertex u ∈ V(H′_{j,ℓ}) has degree at least deg_{≤(j,ℓ)}(u)/12 in H′_{j,ℓ}, computes, in time Ô(|E(H_{j,ℓ})|), a collection F of vertex-disjoint subgraphs of H′_{j,ℓ}, called cores, such that each core K ∈ F is a ϕ-expander and, for every vertex u ∈ V(K), deg_K(u) ≥ ϕ deg_{≤(j,ℓ)}(u)/12. Moreover, Σ_{K∈F} |E(K)| ≥ |E(H′_{j,ℓ})|/4.

Proof: We apply Theorem 3.6 to every connected component of graph H′_{j,ℓ}, with parameter ϕ. Let (V_1, …, V_k) be the resulting partition of V(H′_{j,ℓ}). For each 1 ≤ i ≤ k, we then define a core K_i = H′_{j,ℓ}[V_i]. Observe first that Σ_{i=1}^{k} δ(V_i) ≤ γ · ϕ · |E(H′_{j,ℓ})|, which, by the choice of the parameter ϕ, is at most |E(H′_{j,ℓ})|/2. Therefore, Σ_{i=1}^{k} |E(K_i)| ≥ |E(H′_{j,ℓ})|/4. We are also guaranteed that, for all 1 ≤ i ≤ k, graph K_i is a strong ϕ-expander with respect to H′_{j,ℓ}. Lastly, if K_i contains more than one vertex, then, from Observation 2.1, every vertex u of K_i has degree at least ϕ deg_{H′_{j,ℓ}}(u) ≥ ϕ deg_{≤(j,ℓ)}(u)/12 in K_i. We return the set F containing all graphs K_i with |V(K_i)| > 1. The running time of the algorithm is Ô(|E(H′_{j,ℓ})|), by Theorem 3.6.

We are now ready to complete the proof of Theorem 3.13. Our algorithm is iterative. We start with F_{j,ℓ} ← ∅ and H′_{j,ℓ} ← H_{j,ℓ}. Consider the following trimming process, similar to the one in [KT19]: while there is a vertex u ∈ V(H′_{j,ℓ}) with deg_{H′_{j,ℓ}}(u) < deg_{≤(j,ℓ)}(u)/12, delete u from H′_{j,ℓ}. We say that graph H′_{j,ℓ} is trimmed if, for all u ∈ V(H′_{j,ℓ}), deg_{H′_{j,ℓ}}(u) ≥ deg_{≤(j,ℓ)}(u)/12. While H′_{j,ℓ} ≠ ∅, we perform iterations, each of which consists of the following steps:

1. Trim the current graph H′_{j,ℓ};

2. Apply the algorithm from Lemma 3.14 to graph H′_{j,ℓ}, to obtain a collection F of cores;

3. For all K ∈ F, delete all vertices of K from H′_{j,ℓ};

4. Set F_{j,ℓ} ← F_{j,ℓ} ∪ F.

Note that, throughout the algorithm, every vertex u of the graph H′_{j,ℓ} that serves as input to Lemma 3.14 has degree at least deg_{≤(j,ℓ)}(u)/12, due to the trimming operation, so it is a valid input to the lemma. It is also immediate to see that the number of iterations is bounded by O(log n). Indeed, let H be the graph H′_{j,ℓ} at the beginning of some iteration, and let H′ be the graph H′_{j,ℓ} at the beginning of the next iteration. Note that H′ is a subgraph of H ∖ (⋃_{K∈F} E(K)). As |⋃_{K∈F} E(K)| ≥ |E(H)|/4, we conclude that |E(H′)| ≤ 3|E(H)|/4. Therefore, after O(log n) iterations, H′_{j,ℓ} becomes empty and the algorithm terminates. It is easy to see that the trimming step takes time O(|E(H′_{j,ℓ})|). Therefore, the total running time of the algorithm is Ô(|E(H_{j,ℓ})|).

Consider now the final set F_{j,ℓ} of cores computed by the algorithm. We now show that it has all the required properties. From Lemma 3.14, we are guaranteed that each core K ∈ F_{j,ℓ} is a ϕ-expander, and that, for every vertex u ∈ V(K), deg_K(u) ≥ ϕ · deg_{≤(j,ℓ)}(u)/12. Observe that every vertex u in the set U_{j,ℓ} = V(H_{j,ℓ}) ∖ ⋃_{K∈F_{j,ℓ}} V(K) was deleted from graph H′_{j,ℓ} by the trimming procedure at some time t in the algorithm's execution. Therefore, at time t, deg_{H′_{j,ℓ}}(u) < deg_{≤(j,ℓ)}(u)/12 held. We orient all edges that belonged to graph H′_{j,ℓ} at time t and are incident to u towards u. This provides an orientation of all edges in graph G[U_{j,ℓ}], which in turn defines a directed graph D_{j,ℓ}. From the above discussion, for every vertex u of D_{j,ℓ}, in-deg_{D_{j,ℓ}}(u) < deg_{≤(j,ℓ)}(u)/12 holds. Moreover, it is easy to see that graph D_{j,ℓ} is a DAG, because the order in which the vertices of U_{j,ℓ} were deleted from H′_{j,ℓ} by the trimming procedure defines a valid topological ordering of the vertices of D_{j,ℓ}.

Short-Core-Path Queries
In this subsection, we describe an algorithm for maintaining the cores, and for supporting queries Short-Core-Path(K, u, v): given any pair of vertices u and v, both of which lie in some core K ∈ ⋃_j F_j, return a simple u-v path P in K of length at most (γ(n))^{O(q)} = Ô(1), in time (γ(n))^{O(q)} = Ô(1).

When we invoke the algorithm from Theorem 3.13 for computing a core decomposition of sublayer Λ_{j,ℓ} at the beginning of a (j, ℓ)-phase, we say that the core decomposition creates the cores in the set F_{j,ℓ} that it computes. Our algorithm only creates new cores through the algorithm from Theorem 3.13, which may only be invoked at the beginning of a (j, ℓ)-phase.

Throughout the algorithm, we denote F_j = F_{j,1} ∪ ⋯ ∪ F_{j,L_j−1}, and we refer to the graphs in F_j as cores for layer Λ_j, or cores for graph H_j (recall that we have defined H_j = G[Λ_j]). For convenience, we also use the shorthand notation F_{j,≤ℓ} = F_{j,1} ∪ ⋯ ∪ F_{j,ℓ} and F_{≤j} = F_1 ∪ ⋯ ∪ F_j. We define F_{≥j} and F_{j,≥ℓ} analogously. Let K̂_j = ⋃_{K∈F_j} K, and denote K̂_{≤j} = ⋃_{K∈F_{≤j}} K. We define the notation K̂_{≥j}, K̂_{j,≤ℓ} and K̂_{j,≥ℓ} analogously.

In order to maintain the cores and to support the Short-Core-Path(K, u, v) queries for each such core K, we do the following. Consider a pair 1 ≤ j ≤ r, 1 ≤ ℓ < L_j of indices, and some core K ∈ F_{j,ℓ} that was created when the core decomposition algorithm from Theorem 3.13 was invoked for sublayer Λ_{j,ℓ}, at the beginning of some (j, ℓ)-phase. Let K^{(0)} denote the core K right after it is created, before any edges are deleted from it; recall that K^{(0)} is a ϕ-expander. We use the algorithm from Theorem 3.9 on graph K^{(0)}, as it undergoes edge deletions, with the parameter q that serves as input to Theorem 3.4, to maintain the vertex set S_K ⊆ V(K^{(0)}).
Whenever, over the course of the current (j, ℓ)-phase, an edge that belongs to K^{(0)} is deleted from graph G, we add this edge to the sequence of edge deletions from graph K^{(0)}, and update the set S_K of vertices using the algorithm from Theorem 3.9 accordingly. At any point in the current (j, ℓ)-phase, if A_K ⊆ E(K^{(0)}) is the set of edges of K^{(0)} that were deleted from G so far, and S_K is the current vertex set maintained by the algorithm from Theorem 3.9, then we let the current core corresponding to K^{(0)} be the graph obtained from K^{(0)} by deleting the edges of A_K and the vertices of S_K from it; in other words, K = (K^{(0)} ∖ A_K) ∖ S_K. We refer to the resulting graph K as a core throughout the phase. Whenever the number of deleted edges in A_K exceeds ϕ|E(K^{(0)})|/10, we set S_K = V(K^{(0)}), which effectively sets K = ∅; at this point we say that core K is destroyed. Each destroyed core is removed from F_{j,ℓ}.

From this definition of the core K, from the time it is created and until it is destroyed, it may only undergo deletions of edges and vertices. In addition to the deletions of edges of K caused by the edge deletions from the input graph G, we also delete the vertices of S_K from K. Whenever a vertex v ∈ V(K) is deleted from K (that is, v is added to S_K), we say that v is pruned out of K. When there are more than ϕ|E(K^{(0)})|/10 edge deletions in A_K, all vertices of K are pruned out, and so K is destroyed. Therefore, we can now use Theorem 3.9 in order to support queries Short-Core-Path(K, u, v) for each core K: given a pair u, v ∈ V(K) of vertices of K, return a simple u-v path P in K of length at most (γ(m))^{O(q)}, in time (γ(m))^{O(q)}. We now provide a simple observation about the maintained cores.

Proposition 3.15
For every core K, from the time K is created and until it is destroyed, |V(K)| ≥ Ω(ϕ²h_j) holds.

Proof: By Theorem 3.9, K is a strong Ω(ϕ)-expander with respect to K^{(0)}. Let u ∈ V(K) be a vertex minimizing deg_{K^{(0)}}(u); in particular, deg_K(u) ≥ Ω(ϕ) · deg_{K^{(0)}}(u). Since, at the beginning of the (j, ℓ)-phase,

deg_{K^{(0)}}(u) ≥ ϕ deg_{≤(j,ℓ)}(u)/12 (by Theorem 3.13)
= ϕ deg_{≤j}(u)/12 (by Observation 3.12)
≥ ϕ h_j/12 (by Observation 3.1)

held, we conclude that |V(K)| ≥ deg_K(u) = Ω(ϕ²h_j) (using the fact that the graph is simple).

We use the following observation in order to bound the number of cores at any point during the algorithm's execution. Later, in Section 3.7, we will give another bound on the total number of cores ever created by the algorithm.

Observation 3.16
For all 1 ≤ j ≤ r and 1 ≤ ℓ < L_j, at any time over the course of the algorithm, |F_{j,ℓ}| ≤ O(|Λ_{j,ℓ}|/(ϕ²h_j)) and |F_{≤j}| ≤ O(n_{≤j}/(ϕ²h_j)) must hold. Moreover, if C is a connected component of G[Λ_{≤j}], and F^C_{≤j} = {K ∈ F_{≤j} | K ⊆ C} is the collection of cores in F_{≤j} that are contained in C, then |F^C_{≤j}| ≤ O(|V(C)|/(ϕ²h_j)).

Proof: Consider the set F_{j,ℓ} of the cores of sublayer Λ_{j,ℓ} that have not been destroyed yet. Note again that new cores may only be added to F_{j,ℓ} when the algorithm from Theorem 3.13 is employed on sublayer Λ_{j,ℓ}. At the beginning of a (j, ℓ)-phase, all cores are vertex-disjoint, by Theorem 3.13. Moreover, each core only undergoes deletions, so the cores remain vertex-disjoint, and, by Proposition 3.15, each of them contains Ω(ϕ²h_j) vertices at any point of time before it is destroyed. Therefore, at any point of time, |F_{j,ℓ}| ≤ O(|Λ_{j,ℓ}|/(ϕ²h_j)) holds. By summing up the above bound over all sublayers in layers 1, …, j, and noting that h_1, h_2, …, h_j form a geometrically decreasing sequence, and that |Λ_{≤j}| ≤ n_{≤j} holds at all times, we get that |F_{≤j}| ≤ O(n_{≤j}/(ϕ²h_j)).

Lastly, consider some connected component C of G[Λ_{≤j}], and let F^C_{≤j} = {K ∈ F_{≤j} | K ⊆ C}. For an index 1 ≤ ℓ < L_j, let F^C_{j,ℓ} = {K ∈ F_{j,ℓ} | K ⊆ C}. Using the same argument, |F^C_{j,ℓ}| ≤ O(|Λ_{j,ℓ} ∩ V(C)|/(ϕ²h_j)). By summing up over all sublayers of layers 1, …, j, we conclude that |F^C_{≤j}| ≤ O(|V(C)|/(ϕ²h_j)).

In this subsection we provide additional details regarding the sublayers of each layer Λ_j, and in particular we describe how vertices move between the sublayers. Throughout this subsection, we fix an index 1 ≤ j ≤ r.

Consider an index 1 ≤ ℓ < L_j. Throughout the algorithm, we maintain a partition of the vertices of the (non-buffer) sublayer Λ_{j,ℓ} into two subsets: the set K̂_{j,ℓ}, which contains all vertices currently lying in the cores of F_{j,ℓ}, so K̂_{j,ℓ} = ⋃_{K∈F_{j,ℓ}} V(K), and the set U_{j,ℓ}, which contains all remaining vertices of Λ_{j,ℓ}. (We note that previously we defined K̂_{j,ℓ} to denote the graph ⋃_{K∈F_{j,ℓ}} K; we slightly abuse the notation here by letting K̂_{j,ℓ} denote the set of vertices of this graph.)

For every vertex u ∈ Λ_{j,ℓ}, let deg^{(0)}_{≤(j,ℓ)}(u) and deg^{(0)}_{≤j}(u) denote deg_{≤(j,ℓ)}(u) and deg_{≤j}(u) at the beginning of the current (j, ℓ)-phase, respectively. Recall that, from Observation 3.12, deg^{(0)}_{≤(j,ℓ)}(u) = deg^{(0)}_{≤j}(u). We maintain the following invariant:

I3. For every vertex u ∈ U_{j,ℓ}, deg_{≤(j,ℓ)}(u) ≥ deg^{(0)}_{≤(j,ℓ)}(u)/2 holds throughout the (j, ℓ)-phase.

We now consider the buffer sublayer Λ⁻_j. The vertices of the buffer sublayer are partitioned into three disjoint subsets: K̂⁻_j, U⁻_j, and D⁻_j, which are defined as follows. First, for all 1 ≤ ℓ < L_j, whenever any vertex u is pruned out of any core K ∈ F_{j,ℓ} by the core pruning algorithm from Theorem 3.7 over the course of the current (j, ℓ)-phase, vertex u is deleted from sublayer Λ_{j,ℓ} and is added to the buffer sublayer Λ⁻_j, where it joins the set K̂⁻_j (recall that, once the current (j, ℓ)-phase terminates, we set Λ⁻_j = ∅). Additionally, whenever Invariant I3 is violated for any vertex u ∈ U_{j,ℓ}, we delete u from Λ_{j,ℓ} and add it to Λ⁻_j, where it joins the set U⁻_j. Lastly, for all j′ < j, whenever a vertex u ∈ Λ_{j′} has its virtual degree decrease from h_{j′} to h_j, vertex u is added to layer Λ_j, into the buffer sublayer Λ⁻_j, where it joins the set D⁻_j. Similarly, whenever a vertex u ∈ Λ_j has its virtual degree decrease below h_j, we delete it from Λ_j and move it to the appropriate layer, where it joins the corresponding buffer sublayer.

Whenever a vertex is added to the buffer sublayer Λ⁻_j, we say that a move into Λ⁻_j occurs. The following lemma, which is key to the analysis of the algorithm, bounds the number of such moves. Recall that we used n_{≤j} to denote the number of vertices in Λ_{≤j} at the beginning of the algorithm, before any edges are deleted from G.

Lemma 3.17
For all 1 ≤ j ≤ r, the total number of moves into Λ⁻_j over the course of the entire algorithm is at most O(n_{≤j}∆/ϕ³) = Ô(n_{≤j}∆).

We defer the proof of the lemma to Section 3.8, after we show, in Section 3.7, several immediate useful consequences of the lemma.

3.7 Bounding the Number of Phases and the Number of Cores
In this subsection we use Lemma 3.17 to bound the number of phases and the number of cores for each sublayer Λ_{j,ℓ}. Recall that in Section 3.5 we have described an algorithm for maintaining each core K ∈ F_{j,ℓ} over the course of a (j, ℓ)-phase. The set F_{j,ℓ} of cores is initialized using the Core Decomposition algorithm from Theorem 3.13, at the beginning of a (j, ℓ)-phase. After that, every core K ∈ F_{j,ℓ} only undergoes edge and vertex deletions over the course of the (j, ℓ)-phase. Once K = ∅ holds, or a new (j, ℓ)-phase starts, we say that the core is destroyed. Recall that F_j = F_{j,1} ∪ ⋯ ∪ F_{j,L_j−1} denotes the set of all cores in Λ_j (we do not perform core decomposition in the buffer sublayer Λ_{j,L_j}). In this section, using Lemma 3.17, we bound the total number of cores in Λ_j that are ever created over the course of the algorithm, and also the number of (j, ℓ)-phases, for all 1 ≤ ℓ < L_j, in the next two lemmas.

Lemma 3.18
For all 1 ≤ j ≤ r and 1 ≤ ℓ < L_j, the total number of (j, ℓ)-phases over the course of the algorithm is at most Ô(2^ℓ∆).

Proof:
Fix a pair of indices 1 ≤ j ≤ r, 1 ≤ ℓ < L_j. Recall that we start a new (j, ℓ)-phase only when Invariant I1 is violated for sublayer (j, ℓ+1), that is, when |Λ_{j,≥ℓ+1}| > n_{≤j}/2^ℓ holds. At the beginning of a (j, ℓ)-phase, we set Λ_{j,ℓ′} = ∅ for all ℓ′ > ℓ, and so, in particular, Λ_{j,≥ℓ+1} = ∅. The only way that new vertices are added to the set Λ_{j,≥ℓ+1} is when new vertices join the buffer sublayer Λ⁻_j, that is, via a move into the buffer sublayer. Therefore, at least n_{≤j}/2^ℓ moves into the buffer sublayer Λ⁻_j are required before the current (j, ℓ)-phase terminates. Since, from Lemma 3.17, the total number of moves into Λ⁻_j is bounded by Ô(n_{≤j}∆), the total number of (j, ℓ)-phases is bounded by Ô(2^ℓ∆).

Lastly, we bound the total number of cores in F_j that are ever created over the course of the algorithm in the following lemma.

Lemma 3.19
The total number of cores created in layer Λ_j over the course of the entire algorithm is at most Ô(n_{≤j} · ∆/h_j).

Proof:
Consider an index 1 ≤ ℓ < L_j. By Observation 3.16, at the beginning of every (j, ℓ)-phase, |F_{j,ℓ}| ≤ O(|Λ_{j,ℓ}|/(ϕ²h_j)) = Ô(n_{≤j}/(2^ℓ h_j)) holds (we have used Invariant I1 to bound |Λ_{j,ℓ}| by n_{≤j}/2^{ℓ−1}). Since the total number of (j, ℓ)-phases over the course of the algorithm is bounded by Ô(2^ℓ∆), the total number of cores that are ever created in sublayer Λ_{j,ℓ} is bounded by Ô(n_{≤j}∆/h_j). The claim follows by summing over all L_j − 1 ≤ O(log n) sublayers.

The goal of this subsection is to prove Lemma 3.17. Throughout this subsection, we fix an index 1 ≤ j ≤ r. Our goal is to prove that the total number of moves into the buffer sublayer Λ⁻_j over the course of the entire algorithm is bounded by Ô(n_{≤j}∆). We partition all moves into the buffer sublayer Λ⁻_j into three types. A move of a vertex u into sublayer Λ⁻_j is of type-D if u is added to the set D⁻_j; it is of type-K if u is added to K̂⁻_j; and it is of type-U if u is added to the set U⁻_j. We now bound the number of moves of each type separately.

Type-D Moves.
Recall that a vertex u is added to D⁻_j only if its virtual degree decreases from some value h_{j′}, for j′ < j, to h_j. Since virtual degrees of all vertices only decrease, such a vertex must lie in Λ_{≤j} at the beginning of the algorithm, and each such vertex u may only be added to the set D⁻_j once over the course of the algorithm. Therefore, the total number of type-D moves into Λ⁻_j is bounded by n_{≤j}.

Type-K Moves.
To bound the number of type-K moves, it is convenient to assign types to edge deletions. Consider an index 1 ≤ ℓ < L_j and the corresponding graph H_{j,ℓ} = G[Λ_{j,ℓ}]. Let e be an edge deleted from H_{j,ℓ}. We assign to the edge e one of the following four deletion types.

• If e is deleted by the adversary (that is, e is deleted as part of the deletion sequence of the input graph G), then this deletion is of type-A;

• If e is deleted from H_{j,ℓ} because the virtual degree of one of its endpoints falls below h_j (and so that endpoint is deleted from Λ_j), then this deletion is of type-D;

• If an endpoint of e belonged to some core K ∈ F_{j,ℓ}, and is then pruned out of that core (and so that endpoint is removed from Λ_{j,ℓ} and added to K̂⁻_j), then the deletion of edge e is of type-K;

• Lastly, if an endpoint u of e lies in U_{j,ℓ}, and Invariant I3 stops holding for u, that is, deg_{≤(j,ℓ)}(u) < deg^{(0)}_{≤(j,ℓ)}(u)/2 now holds (and so u is removed from Λ_{j,ℓ} and added to U⁻_j), then the deletion of e is of type-U.

Observe that every edge deletion from a graph H_{j,ℓ} executed over the course of a (j, ℓ)-phase must fall under one of these four categories. As the algorithm progresses, the same vertex may be added to and deleted from sublayer Λ_{j,ℓ} multiple times. Therefore, an edge that is deleted from H_{j,ℓ} may be re-added to the new graph H_{j,ℓ} at the beginning of one of the subsequent (j, ℓ)-phases. Next, we bound the total number of edge deletions from all graphs H_{j,ℓ} over the course of the entire algorithm, for the first three types. Notice that these edge deletions ignore the deletions of edges whose endpoints belong to different sublayers. The following simple observation bounds the number of type-A and type-D deletions.

Observation 3.20
The total number of type-A and type-D deletions from all graphs H_{j,ℓ}, for all 1 ≤ ℓ < L_j, over all (j, ℓ)-phases, is bounded by n_{≤j}h_j∆.

Proof:
Observe that each type-A deletion corresponds to the deletion, from the input graph G, of an edge whose both endpoints are contained in Λ_j; each such edge may only be deleted once over the course of the algorithm. From Observation 3.3, the total number of such edges is bounded by n_{≤j}h_j∆.

If an edge e is deleted in a type-D deletion from some graph H_{j,ℓ}, then both its endpoints lie in Λ_j, and, after the deletion, one of the endpoints of e is removed from layer Λ_j forever. Therefore, every edge may be deleted at most once in a type-D deletion, and the number of all such deletions is bounded by the total number of edges whose both endpoints are contained in Λ_j over the course of the algorithm, which is again bounded by n_{≤j}h_j∆, from Observation 3.3.

We now proceed to bound the total number of type-K edge deletions.

Lemma 3.21
The total number of type-K edge deletions from all graphs H_{j,ℓ}, for all 1 ≤ ℓ < L_j, over all (j, ℓ)-phases, is bounded by O(n_{≤j}h_j∆/ϕ²).

Proof:
Consider an index 1 ≤ ℓ < L_j, and some (j, ℓ)-phase. Let H^{(0)}_{j,ℓ} denote the graph H_{j,ℓ} at the beginning of the (j, ℓ)-phase, and let K^{(0)} denote an arbitrary core K ∈ F_{j,ℓ} at the beginning of that phase. Let S_K be the set of vertices that are pruned out of K^{(0)} by Theorem 3.9. By the definition of type-K deletions, it is enough to bound vol_{H^{(0)}_{j,ℓ}}(S_K), summed over all cores K created in H_{j,ℓ} over the course of the algorithm.

In order to bound vol_{H^{(0)}_{j,ℓ}}(S_K) for a single core K ∈ F_{j,ℓ}, recall that, from Theorem 3.13, for every vertex u ∈ V(K^{(0)}), deg_{K^{(0)}}(u) ≥ ϕ deg_{H^{(0)}_{j,ℓ}}(u)/12 holds. Therefore, vol_{H^{(0)}_{j,ℓ}}(S_K) ≤ 12 · vol_{K^{(0)}}(S_K)/ϕ. Moreover, by Theorem 3.9, after t edge deletions from K^{(0)} (which include type-A and type-D deletions, but exclude type-U deletions, as edges deleted this way must lie outside of the core), we have vol_{K^{(0)}}(S_K) ≤ O(t/ϕ). From Observation 3.20, the total number of type-A and type-D edge deletions, in all graphs H_{j,ℓ}, for all 1 ≤ ℓ < L_j and all (j, ℓ)-phases, is at most n_{≤j}h_j∆. We conclude that the sum of all volumes vol_{H^{(0)}_{j,ℓ}}(S_K), over all 1 ≤ ℓ < L_j, and over all cores ever created in F_{j,ℓ}, is bounded by O(1/ϕ) · O(n_{≤j}h_j∆/ϕ) = O(n_{≤j}h_j∆/ϕ²).

Corollary 3.22
The total number of type-K moves into Λ⁻_j over the course of the algorithm is bounded by O(n_{≤j}∆/ϕ³).

Proof:
Consider some sublayer Λ_{j,ℓ}, and the graph H^{(0)}_{j,ℓ} at the beginning of some (j, ℓ)-phase. Let K^{(0)} ∈ F_{j,ℓ} be some core that was created at the beginning of that phase, and let u ∈ V(K^{(0)}) be any vertex of the core. Recall that, from Theorem 3.13, deg_{K^{(0)}}(u) ≥ ϕ deg^{(0)}_{≤(j,ℓ)}(u)/12, and, from Observation 3.12, deg^{(0)}_{≤(j,ℓ)}(u) = deg^{(0)}_{≤j}(u) ≥ h_j. Therefore, deg_{K^{(0)}}(u) ≥ ϕh_j/12. If vertex u is moved to Λ⁻_j via a type-K move, then each of the Ω(ϕh_j) edges of K^{(0)} incident to u must have been deleted as part of a type-A, type-D, or type-K deletion from H_{j,ℓ}. Since the total number of all deletions of types A, D and K, from all graphs H_{j,ℓ} for 1 ≤ ℓ < L_j, over the course of the whole algorithm, is bounded by O(n_{≤j}h_j∆/ϕ²), we get that the total number of type-K moves into the buffer sublayer Λ⁻_j over the course of the entire algorithm is bounded by O(n_{≤j}∆/ϕ³).

Type-U Moves.
We further partition type-U moves into two subtypes. Consider a time in the algorithm's execution when some vertex u is added to the set U^−_j, so it is added to Λ^−_j via a type-U move, and assume that u was moved from the sublayer Λ_{j,ℓ}. We say that this move is of type-U1 if |E_G(u, Λ_{j,>ℓ})| < 2·deg_{≤(j,ℓ)}(u) held right before u was moved, and we say that it is of type-U2 otherwise. We bound the number of moves of each subtype separately.

We first bound the number of type-U1 moves. Recall that, whenever u is moved via a type-U1 move:

deg_{≤j}(u) = deg_{≤(j,ℓ)}(u) + |E_G(u, Λ_{j,>ℓ})| < 3·deg_{≤(j,ℓ)}(u) < 3·deg^{(0)}_{≤(j,ℓ)}(u)/4 = 3·deg^{(0)}_{≤j}(u)/4.

(We have used the definition of a type-U1 move for the first inequality, the fact that deg_{≤(j,ℓ)}(u) < deg^{(0)}_{≤(j,ℓ)}(u)/4 must hold when u is moved to Λ^−_j for the second inequality, and Observation 3.12 for the last equality.) Therefore, the number of neighbors of u in Λ_{≤j} has decreased by a constant factor compared to the beginning of the current (j,ℓ)-phase. Note that deg_{≤j}(u) may never increase, as virtual degrees of vertices may only decrease. Therefore, a vertex may be moved to Λ^−_j via a type-U1 move at most O(log n) times, and the total number of type-U1 moves into Λ^−_j over the course of the entire algorithm is bounded by O(n_{≤j}·log n).

It remains to bound the total number of type-U2 moves into Λ^−_j. We will show that the total number of such moves is bounded by Õ(n_{≤j}·∆/ϕ²) over the course of the algorithm. For all 1 ≤ ℓ < L_j, we define an edge set Π_{j,ℓ}, that contains all edges e = (u,v) with u ∈ Λ_{j,ℓ} and v ∈ Λ_{j,>ℓ}. Intuitively, the set Π_{j,ℓ} contains all "downward edges" from vertices of Λ_{j,ℓ} that lie within the layer Λ_j. Note that Π_{j,L_j} = ∅. We first bound the total number of edges that ever belonged to each such set Π_{j,ℓ} over the course of the algorithm. (Note that an edge may be added several times to Π_{j,ℓ} over the course of the algorithm; we count these as separate edges.) We will then use this bound in order to bound the total number of type-U2 moves. Note that a new edge e = (u,v), with u ∈ Λ_{j,ℓ}, may only be added to the set Π_{j,ℓ} when its other endpoint v is moved from Λ_{j,ℓ} to a later sublayer of Λ_j; we classify each such addition as a type-D, type-K, type-U1, or type-U2 addition, according to the type of the event that caused the move of v.
For all X ∈ {D, K, U1, U2}, we denote by inc^X_{j,ℓ} the total number of edges added to Π_{j,ℓ} via type-X additions over the course of the algorithm, and we denote by inc_{j,ℓ} the total number of all edge additions to Π_{j,ℓ}. Clearly, inc_{j,ℓ} = Σ_{X∈{D,K,U1,U2}} inc^X_{j,ℓ}. For convenience, for all X ∈ {D, K, U1, U2}, we denote by Π^X_{j,ℓ} the set of edges added to Π_{j,ℓ} via type-X additions, and we let inc^X_j = Σ_{1≤ℓ<L_j} inc^X_{j,ℓ} and inc_j = Σ_{1≤ℓ<L_j} inc_{j,ℓ}.

Lemma 3.23 inc^D_j + inc^K_j + inc^{U1}_j ≤ Õ(n_{≤j}·h_j·∆/ϕ²).

Proof: When a vertex u is moved to Λ^−_j, its contribution to inc_j is at most |E(u, Λ_j)|. By Observation 3.3, the total number of edges e, such that, at any point during the algorithm's execution, both endpoints of e were contained in Λ_j, is at most n_{≤j}·h_j·∆. As each vertex u can be moved to Λ^−_j in a type-D move only once, inc^D_j ≤ O(n_{≤j}·h_j·∆) must hold. Similarly, as each vertex u can be moved to Λ^−_j in a type-U1 move at most O(log n) times, inc^{U1}_j ≤ O(n_{≤j}·h_j·∆·log n) must hold. Next, observe that inc^K_{j,ℓ} is precisely the total number of type-K edge deletions from the graphs H_{j,1}, ..., H_{j,ℓ} over the course of the algorithm. By Lemma 3.21, we can bound inc^K_j ≤ O(n_{≤j}·h_j·∆/ϕ²).

We use the next lemma to bound inc^{U2}_j.

Lemma 3.24 For all 1 ≤ ℓ < L_j, inc^{U2}_{j,ℓ} ≤ inc^D_{j,ℓ} + inc^K_{j,ℓ} + inc^{U1}_{j,ℓ}.

Proof: Fix an index 1 ≤ ℓ < L_j. We denote by ˆΠ_{j,ℓ} the set of all edges that were ever present in the set Π_{j,ℓ} over the course of the algorithm, and by ˆΠ^{U2}_{j,ℓ} the set of all edges that were ever present in the set Π^{U2}_{j,ℓ} over the course of the entire algorithm; recall that ˆΠ^{U2}_{j,ℓ} ⊆ ˆΠ_{j,ℓ}. We next show that |ˆΠ^{U2}_{j,ℓ}| ≤ |ˆΠ_{j,ℓ}|/2. In order to do so, we assign, to every edge e ∈ ˆΠ^{U2}_{j,ℓ}, two edges e1, e2 ∈ ˆΠ_{j,ℓ} that are responsible for e. We will ensure that every edge in ˆΠ_{j,ℓ} is responsible for at most one edge in ˆΠ^{U2}_{j,ℓ}. This will immediately imply that |ˆΠ^{U2}_{j,ℓ}| ≤ |ˆΠ_{j,ℓ}|/2. Consider some (j,ℓ)-phase, and some vertex u ∈ Λ_{j,ℓ}, that is moved to Λ^−_j via a type-U2 move some time during the (j,ℓ)-phase. At the beginning of the (j,ℓ)-phase, Λ_{j,ℓ'} = ∅ held for all ℓ' > ℓ. At the time when u is moved to Λ^−_j, from the definition of a type-U2 move, |E_G(u, Λ_{j,>ℓ})| ≥ 2·deg_{≤(j,ℓ)}(u) held. The edges that are added to ˆΠ^{U2}_{j,ℓ} due to the move of u to Λ^−_j are the edges of E_G(u, Λ_{j,ℓ}).
On the other hand, each edge (u,v) ∈ E_G(u, Λ_{j,>ℓ}) belonged to the set Π_{j,ℓ} before the move of u, and is removed from that set afterwards. Moreover, vertex v must have been moved to Λ^−_j at some time during the course of the current (j,ℓ)-phase. Since |E_G(u, Λ_{j,>ℓ})| ≥ 2·deg_{≤(j,ℓ)}(u) ≥ 2·|E_G(u, Λ_{j,ℓ})|, we can select, for every edge e ∈ E_G(u, Λ_{j,ℓ}), two arbitrary edges e1, e2 ∈ E_G(u, Λ_{j,>ℓ}) that become responsible for e, such that every edge of E_G(u, Λ_{j,>ℓ}) is responsible for at most one edge of E_G(u, Λ_{j,ℓ}). It is clear from this process that every edge in ˆΠ_{j,ℓ} is responsible for at most one edge in ˆΠ^{U2}_{j,ℓ}. We conclude that |ˆΠ^{U2}_{j,ℓ}| ≤ |ˆΠ_{j,ℓ}|/2, and so inc^{U2}_{j,ℓ} ≤ inc_{j,ℓ}/2. Since inc_{j,ℓ} = Σ_{X∈{D,K,U1,U2}} inc^X_{j,ℓ}, we get that inc^{U2}_{j,ℓ} ≤ inc^D_{j,ℓ} + inc^K_{j,ℓ} + inc^{U1}_{j,ℓ}.

Combining Lemma 3.23 and Lemma 3.24, we obtain the following corollary.

Corollary 3.25 inc_j ≤ Õ(n_{≤j}·h_j·∆/ϕ²).

Lastly, the following corollary allows us to bound the total number of type-U2 moves.

Corollary 3.26 The total number of type-U2 moves into the buffer layer Λ^−_j, over the course of the entire algorithm, is bounded by Õ(n_{≤j}·∆/ϕ²).

Proof: Consider some index 1 ≤ ℓ < L_j, some (j,ℓ)-phase, and some vertex u ∈ Λ_{j,ℓ}, that is moved to Λ^−_j via a type-U2 move some time during that (j,ℓ)-phase. From the definition of a type-U2 move, when u was moved to Λ^−_j, |E_G(u, Λ_{j,>ℓ})| ≥ 2·deg_{≤(j,ℓ)}(u) held. Moreover, each edge (u,v) ∈ E_G(u, Λ_{j,>ℓ}) belonged to the set Π_{j,ℓ} before the move of u, and is removed from that set afterwards. Since |E_G(u, Λ_{j,>ℓ})| ≥ 2·deg_{≤(j,ℓ)}(u) and |E_G(u, Λ_{j,>ℓ})| + deg_{≤(j,ℓ)}(u) = deg_{≤j}(u), we get that |E_G(u, Λ_{j,>ℓ})| ≥ 2·deg_{≤j}(u)/3. As deg_{≤j}(u) ≥ h_j must hold, we conclude that |E_G(u, Λ_{j,>ℓ})| ≥ 2h_j/3.
Therefore, for every vertex that is added to Λ^−_j via a type-U2 move, we remove at least 2h_j/3 edges from Π_{j,ℓ}. Since, as shown above, the total number of edges that are ever added to the sets Π_{j,ℓ}, for all 1 ≤ ℓ < L_j, is bounded by Õ(n_{≤j}·h_j·∆/ϕ²), the total number of type-U2 moves into Λ^−_j over the entire course of the algorithm is bounded by Õ(n_{≤j}·∆/ϕ²).

To summarize, we have partitioned all moves into the buffer sublayer Λ^−_j into four types: D, A, K and U, and we showed that the total number of moves of each type, over the course of the algorithm, is bounded by Õ(n_{≤j}·∆·poly(1/ϕ)). Therefore, the total number of moves into Λ^−_j over the course of the algorithm is at most Õ(n_{≤j}·∆·poly(1/ϕ)) ≤ Ô(n_{≤j}·∆).

The main result of this subsection is summarized in the following lemma, which shows that, throughout the algorithm, for every vertex v ∈ V(G), there is a path of length O(log n), connecting v to some vertex that lies in one of the cores of ⋃_j F_j. This fact will be used to process To-Core-Path queries.

Lemma 3.27 Throughout the algorithm, for each vertex u ∈ V(G), there is a path P_u of length at most O(log n), connecting u to a vertex v lying in the set ˆK = ⋃_{j,ℓ} ˆK_{j,ℓ}. Moreover, the path P_u = {u = u_1, u_2, ..., u_k} is non-decreasing with respect to the sublayers: that is, each vertex u_{i+1} lies in the same sublayer as u_i, or in a sublayer that follows it in the ordering of the sublayers.

We use the following two claims.

Claim 3.28 Throughout the algorithm, for all 1 ≤ j ≤ r, every vertex u in the buffer sublayer Λ_{j,L_j} has a neighbor in Λ_{≤j} \ Λ_{j,L_j}.

Proof: Recall that Invariant I1 guarantees that |Λ_{j,L_j}| ≤ n_{≤j}/2^{L_j−1}, and, from the choice of L_j, n_{≤j}/2^{L_j−1} ≤ h_j/2. From the definition of virtual degrees of vertices, for every vertex u ∈ Λ^−_j, |E_G(u, Λ_{≤j})| ≥ h_j must hold.
Therefore, u must have a neighbor in Λ_{≤j} \ Λ_{j,L_j}.

Claim 3.29 Throughout the algorithm, for all 1 ≤ ℓ < L_j, every vertex u ∈ U_{j,ℓ} has a path P_u of length at most O(log n) connecting it to a vertex that lies in Λ_{>(j,ℓ)} or in one of the cores of F_{j,ℓ}.

Proof: Fix an index 1 ≤ ℓ < L_j, and consider the sublayer Λ_{j,ℓ} over the course of some (j,ℓ)-phase. Below, we add a superscript (0) to an object to denote that object at the beginning of the phase. Recall that the algorithm for computing a core decomposition from Theorem 3.13 ensures that there is an orientation of the edges of the graph G[U_{j,ℓ}], such that the resulting directed graph D_{j,ℓ} is a DAG, and, for every vertex u ∈ U_{j,ℓ}, in-deg_{D_{j,ℓ}}(u) ≤ deg^{(0)}_{≤(j,ℓ)}(u)/12. We denote by D^{(0)}_{j,ℓ} the graph D_{j,ℓ} at the beginning of the phase. Let D̃_{j,ℓ} be the directed graph obtained from D^{(0)}_{j,ℓ} by adding the vertices of Λ^{(0)}_{>(j,ℓ)} and the vertices of the cores of F_{j,ℓ}, together with all edges connecting them to the vertices of U_{j,ℓ}, oriented away from U_{j,ℓ}. Since the invariant deg_{≤(j,ℓ)}(u) ≥ deg^{(0)}_{≤(j,ℓ)}(u)/4 holds throughout the phase, we get that in-deg_{D̃_{j,ℓ}}(u) ≤ deg^{(0)}_{≤(j,ℓ)}(u)/12 ≤ deg_{≤(j,ℓ)}(u)/3, and so:

out-deg_{D̃_{j,ℓ}}(u) = deg_{≤(j,ℓ)}(u) − in-deg_{D̃_{j,ℓ}}(u) ≥ 2·in-deg_{D̃_{j,ℓ}}(u).   (3)

For any vertex set S ⊆ V(D̃_{j,ℓ}), let in-vol_{D̃_{j,ℓ}}(S) = Σ_{u∈S} in-deg_{D̃_{j,ℓ}}(u), out-vol_{D̃_{j,ℓ}}(S) = Σ_{u∈S} out-deg_{D̃_{j,ℓ}}(u), and vol_{D̃_{j,ℓ}}(S) = in-vol_{D̃_{j,ℓ}}(S) + out-vol_{D̃_{j,ℓ}}(S). For a vertex set S ⊆ V(D̃_{j,ℓ}), we denote by S' the set of vertices containing all vertices of S, together with all vertices v ∈ V(D̃_{j,ℓ}), such that some edge (u,v) with u ∈ S belongs to the graph D̃_{j,ℓ}. In other words, S' is an "out-ball" around S of radius 1. Next, we show that, for any vertex set S ⊆ U_{j,ℓ}, vol_{D̃_{j,ℓ}}(S') ≥ (4/3)·vol_{D̃_{j,ℓ}}(S). Indeed:

vol(S') = vol(S) + vol(S' \ S) ≥ vol(S) + |E(S, S' \ S)| = vol(S) + |E(S, S')| − |E(S, S)| ≥ vol(S) + out-vol(S) − in-vol(S),

where all volumes are with respect to D̃_{j,ℓ}, and the last inequality follows from the fact that |E(S, S')| = out-vol(S) and |E(S, S)| ≤ in-vol(S). From Equation (3), out-vol(S) ≥ 2·in-vol(S). Therefore, out-vol(S) − in-vol(S) ≥ vol(S)/3. We conclude that:

vol(S') ≥ vol(S) + vol(S)/3 = (4/3)·vol(S).

It is now easy to see that, for any vertex u ∈ U_{j,ℓ}, the distance from u to Λ_{>(j,ℓ)} and the cores of F_{j,ℓ} in D̃_{j,ℓ} is at most O(log n): starting from S = {u} and repeatedly replacing S with S', the volume of S grows by a factor of at least 4/3 with each step, as long as S ⊆ U_{j,ℓ}, and so, after O(log n) steps, S must contain a vertex lying outside of U_{j,ℓ}.

To-Core-Path Queries

In this subsection, we provide some additional data structures that are needed to support Short-Core-Path and To-Core-Path queries, and analyze the total update time of the main algorithm for Theorem 3.4. We start by analyzing the total update time required for maintaining all data structures that we have described so far. Then, we describe additional data structures that we maintain for supporting Short-Core-Path, To-Core-Path, and Short-Path queries, and analyze their update time.

Maintaining the Sublayers. Recall that, from Observation 3.5, the total update time that is needed to maintain the partition of V(G) into Λ_1, ..., Λ_r is Õ(m). As shown in Section 3.10, the incident-edge data structures for all vertices require total update time Ô(m·∆). Consider now some index 1 ≤ j ≤ r. For the buffer sublayer Λ^−_j, we do not need to maintain any additional data structures. Consider now some non-buffer sublayer Λ_{j,ℓ}, for ℓ < L_j. At the beginning of a (j,ℓ)-phase, we construct the graph H_{j,ℓ} by setting E(H_{j,ℓ}) ← ⋃_{u∈Λ_{j,ℓ}} Edges_{j,ℓ}(u), using the incident-edge data structure. This takes O(|E(H_{j,ℓ})|) time. Note that, without the incident-edge data structure, it is not immediately clear how to construct the graph H_{j,ℓ} in this time. The resulting graph H_{j,ℓ} is precisely G[Λ_{j,ℓ}], as desired. Next, we perform the core decomposition in graph H_{j,ℓ}, using the algorithm from Theorem 3.13; the running time of that algorithm is Ô(|E(H_{j,ℓ})|). Recall that, from Observation 3.3, the edge set E_G(Λ_j) has an (h_j·∆)-orientation. Moreover, from Invariant I1, |Λ_{j,ℓ}| ≤ n_{≤j}/2^{ℓ−1}. Therefore, |E(H_{j,ℓ})| ≤ |Λ_{j,ℓ}|·h_j·∆ ≤ n_{≤j}·h_j·∆/2^{ℓ−1}.
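Observation 3.3 enters the analysis above only through the existence of a low out-degree orientation. As an illustration of how such an orientation can be obtained, the following sketch computes an orientation whose maximum out-degree is bounded by the degeneracy of the graph, by repeatedly peeling a minimum-degree vertex and orienting its remaining edges away from it. This is a standard textbook procedure, not the paper's construction; the function name and the example graph are illustrative only.

```python
from collections import defaultdict

def degeneracy_orientation(n, edges):
    """Orient each edge away from the endpoint that is peeled first.

    Repeatedly removes a minimum-degree vertex; since the peeled vertex
    has degree at most the degeneracy at removal time, every vertex ends
    up with out-degree at most the degeneracy of the graph.
    """
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    remaining = set(range(n))
    orientation = []
    while remaining:
        u = min(remaining, key=lambda x: len(adj[x]))
        for v in adj[u]:
            orientation.append((u, v))   # u is peeled first: orient u -> v
            adj[v].discard(u)
        adj[u].clear()
        remaining.remove(u)
    return orientation

# A 4-cycle with one chord has degeneracy 2.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
out_deg = defaultdict(int)
for u, v in degeneracy_orientation(4, edges):
    out_deg[u] += 1
print(max(out_deg.values()))  # -> 2
```

Every edge is oriented exactly once, so the output always lists five oriented edges, and no vertex exceeds out-degree 2 regardless of tie-breaking.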
By Lemma 3.18, the total number of (j,ℓ)-phases over the course of the algorithm is bounded by Ô(2^ℓ·∆). Therefore, the total time that is needed to construct the graphs H_{j,ℓ}, and to compute core decompositions of such graphs, over the course of the entire algorithm, is bounded by Ô(n_{≤j}·h_j·∆/2^ℓ)·Ô(2^ℓ·∆) = Ô(n_{≤j}·h_j·∆²) ≤ Ô(m·∆²). Note that it is straightforward to check that Invariants I1 and I3 hold over the course of the algorithm. For each 1 ≤ j ≤ r and 1 ≤ ℓ ≤ L_j, we need to ensure that |Λ_{j,ℓ}| ≤ n_{≤j}/2^{ℓ−1} always holds; this can be checked in constant time by keeping track of |Λ_{j,ℓ}|. For each vertex u ∈ U_{j,ℓ}, we need to ensure the invariant that deg_{≤(j,ℓ)}(u) ≥ deg^{(0)}_{≤(j,ℓ)}(u)/4 holds; this can be checked in O(log n) time, by maintaining prefix sums of |Edges_{j'}(u)| and |Edges_{j,ℓ'}(u)| for all j' < j and ℓ' ≤ ℓ. As there are O(log² n) sublayers, the total cost for maintaining all sublayers Λ_{j,ℓ} and their corresponding graphs H_{j,ℓ} = G[Λ_{j,ℓ}], together with computing the initial core decompositions F_{j,ℓ}, is at most Ô(m·∆²).

Oracles for Short-Core-Path queries. Whenever a core K is created, we maintain the data structure from Theorem 3.9 that allows us to maintain the core K under the deletion of edges from G, and to support Short-Core-Path(K,u,v) queries within the core K. Consider some index 1 ≤ j ≤ r and a non-buffer sublayer Λ_{j,ℓ} of Λ_j. At the beginning of each (j,ℓ)-phase, let F_{j,ℓ} denote the collection of cores constructed by the algorithm from Theorem 3.13. The total update time needed to maintain the data structures for all cores K ∈ F_{j,ℓ} throughout a single (j,ℓ)-phase is Σ_{K∈F_{j,ℓ}} O(|E(K)|^{1+1/q}·(γ(n))^{O(q)}) ≤ O(|E(H_{j,ℓ})|^{1+1/q}·(γ(n))^{O(q)}).
As observed already, |E(H_{j,ℓ})| ≤ |Λ_{j,ℓ}|·h_j·∆ ≤ n_{≤j}·h_j·∆/2^{ℓ−1} ≤ O(m·∆/2^ℓ). Since, from Lemma 3.18, the total number of (j,ℓ)-phases over the course of the algorithm is bounded by Ô(2^ℓ·∆), the total time needed to maintain all cores within the sublayer Λ_{j,ℓ} over the course of the algorithm is bounded by O(m^{1+1/q}·∆^{2+1/q}·(γ(n))^{O(q)}). Summing this up over all O(log² n) non-buffer sublayers Λ_{j,ℓ}, we get that the total time that is needed to maintain all cores that are ever present in F_j is bounded by Ô(m^{1+1/q}·∆^{2+1/q}·(γ(n))^{O(q)}). Note that the algorithm from Theorem 3.9 directly supports Short-Core-Path(K,u,v) queries: given any pair of vertices u and v that lie in the same core K, it returns a simple u-v path P in K connecting u to v, of length at most (γ(n))^{O(q)}, in time (γ(n))^{O(q)}.

ES-trees for To-Core-Path queries. At the beginning of a (j,ℓ)-phase of a non-buffer sublayer Λ_{j,ℓ}, we construct an auxiliary graph C_{j,ℓ} for maintaining short paths from vertices of U_{j,ℓ} to vertices of Λ_{>(j,ℓ)} and the cores, in order to support To-Core-Path queries. For convenience, we will slightly abuse notation: for each non-buffer sublayer Λ_{j,ℓ}, the ES-tree T_{j,ℓ} is formally a subgraph of C_{j,ℓ}, but we will treat T_{j,ℓ} as a subgraph of G[Λ_{≤j}], as follows. Each edge (s_{j,ℓ}, u) ∈ T_{j,ℓ}, where u ∈ U_{j,ℓ}, corresponds to some edge (u,w) ∈ E(U_{j,ℓ}, Λ_{>(j,ℓ)}).

Additional data structures for Short-Path queries. For an index 1 ≤ j ≤ r, we denote T_{≤j} = ⋃_{j'≤j, ℓ≥1} T_{j',ℓ}. Recall that F_{≤j} is the collection of all cores for the layers Λ_1, ..., Λ_j. Let ˆK_{≤j} = ⋃_{K∈F_{≤j}} K denote the union of all the cores in F_{≤j}. Note that ˆK_{≤j} and T_{≤j} are subgraphs of G[Λ_{≤j}], and they do not share any edges. We maintain a fully dynamic minimum spanning forest M_j for the graph G[Λ_{≤j}], with the following edge lengths: we assign weight 0 to all edges in ˆK_{≤j}, weight 1 to all edges of T_{≤j}, and weight 2 to all remaining edges of G[Λ_{≤j}].
The spanning forest M_j can be maintained using the algorithm of [HdLT01], that has O(log⁴ n) amortized update time. Additionally, we use the top tree data structure due to [AHdLT05], whose properties are summarized in the following lemma.

Lemma 3.30 (Top Tree Data Structure from [AHdLT05]) The top tree data structure T is given as input a forest F with weights on edges, that undergoes edge insertions and edge deletions (but we are guaranteed that F remains a forest throughout the algorithm), and supports the following queries, in O(log n) time per query:
• connect(x,y): given two vertices x and y, determine whether x and y are in the same connected component of F (see Section 2.4 of [AHdLT05]);
• minedge(x,y): given two vertices x and y lying in the same connected component of F, return a minimum-weight edge on the unique path connecting x to y in F (a variation of Theorem 4 of [AHdLT05]);
• weight(x,y): given two vertices x and y lying in the same connected component of F, return the total weight of all edges lying on the unique path connecting x to y in F (Lemma 5 of [AHdLT05]);
• jump(x,y,d): given two vertices x and y lying in the same connected component of F, return the d-th vertex on the unique path connecting x to y in F; if the path connecting x to y contains fewer than d vertices, return ∅ (Theorem 15 of [AHdLT05]).
The data structure has O(log n) worst-case update time.

For all 1 ≤ j ≤ r, we maintain the top tree data structure T_{M_j} for the forest M_j. It is easy to see that the total time that is required for maintaining the minimum spanning forests {M_j}_{j=1}^r and their corresponding top tree data structures T_{M_j} is subsumed by other components of the algorithm. To conclude, the total update time of the LCD data structure for Theorem 3.4 is Ô(m^{1+1/q}·∆^{2+1/q}·(γ(n))^{O(q)}).

Short-Path Queries

In this section, we fix an index 1 ≤ j ≤ r, and describe an algorithm for processing Short-Path(j,·,·) queries.
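The query-processing algorithm below relies only on the four top-tree operations of Lemma 3.30. As a purely illustrative stand-in (not the dynamic structure of [AHdLT05]), the same interface over a single fixed weighted path can be realized with plain arrays and prefix sums; here minedge returns only the weight of a minimum-weight edge, and all names are hypothetical:

```python
class StaticPathOracle:
    """connect/weight/jump/minedge over one fixed weighted path.

    vertices: the vertices along the path, in order; weights[i] is the
    weight of the edge between vertices[i] and vertices[i+1].
    """
    def __init__(self, vertices, weights):
        self.pos = {v: i for i, v in enumerate(vertices)}
        self.vertices = vertices
        self.weights = weights
        self.prefix = [0]
        for w in weights:
            self.prefix.append(self.prefix[-1] + w)

    def connect(self, x, y):
        return x in self.pos and y in self.pos

    def weight(self, x, y):
        i, j = sorted((self.pos[x], self.pos[y]))
        return self.prefix[j] - self.prefix[i]

    def jump(self, x, y, d):
        # d-th vertex on the path from x to y (1-indexed, as in Lemma 3.30)
        i, j = self.pos[x], self.pos[y]
        step = 1 if j >= i else -1
        k = i + step * (d - 1)
        if (j - k) * step < 0 or (k - i) * step < 0:
            return None
        return self.vertices[k]

    def minedge(self, x, y):
        i, j = sorted((self.pos[x], self.pos[y]))
        return min(self.weights[i:j])

# Path a-b-c-d with edge weights 0, 1, 2.
T = StaticPathOracle(["a", "b", "c", "d"], [0, 1, 2])
print(T.weight("a", "c"), T.minedge("b", "d"), T.jump("d", "a", 2))
# -> 1 1 c
```

The real top tree supports the same queries on a fully dynamic forest in O(log n) time each; this sketch only fixes the semantics that the query algorithm below depends on.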
Recall that we have denoted ˆK_{≤j} = ⋃_{K∈F_{≤j}} K and T_{≤j} = ⋃_{j'≤j, ℓ≥1} T_{j',ℓ}. We start by analyzing the structure of the spanning forest M_j. Recall that T_{≤j} is a forest, and every tree in this forest is rooted at a vertex of ˆK_{≤j}. Moreover, if a vertex of T_{≤j} does not serve as a tree root, then it does not lie in ˆK_{≤j}, and every vertex in Λ_{≤j} \ ˆK_{≤j} must lie in some tree in T_{≤j}. Recall also that, from Lemma 3.27, the depth of every tree in T_{≤j} is bounded by O(log n). Consider now some connected component C of graph G[Λ_{≤j}]. Let F_C denote the collection of all cores K ∈ F_{≤j} with K ⊆ C, and let k_C = |F_C|. Recall that, from Observation 3.16, k_C ≤ O(|V(C)|/(ϕ·h_j)) = O(|V(C)|·γ(n)/h_j). The following two observations easily follow from the properties of a minimum spanning tree.

Observation 3.31 Let C be a connected component of G[Λ_{≤j}], and let K ∈ F_C be a core that currently lies in F_{≤j} and is contained in C. Then there is some connected sub-tree T* of the forest M_j that contains every vertex of K.

Proof: Assume otherwise. Consider the sub-graph of M_j induced by the edges of E(K). Then this graph is not connected, and, moreover, there must be two connected components X and Y of this graph, such that core K contains some edge e connecting a vertex of X to a vertex of Y. Adding the edge e to M_j must create some cycle R. We claim that at least one edge on this cycle must have weight greater than 0. Indeed, otherwise, every edge on cycle R lies in the core K, and so X and Y cannot be two connected components of the subgraph of M_j induced by E(K). Since edge e has weight 0, we have reached a contradiction to the minimality of the forest M_j.

Observation 3.32 Every edge of the forest T_{≤j} belongs to M_j.
Proof: Assume for contradiction that this is not the case, and let T' be a tree of T_{≤j} with E(T') ⊄ E(M_j). As before, consider the sub-graph of M_j induced by the edges of E(T'). This graph is not connected, and so there must be two connected components X and Y of this graph, such that the tree T' contains some edge e connecting a vertex of X to a vertex of Y. Adding the edge e to M_j must create some cycle R. We claim that at least one edge on this cycle must have weight 2. Indeed, otherwise, every edge on cycle R has weight 0 or 1. This is impossible, because tree T' contains exactly one vertex that lies in a core of F_{≤j}, and the only edges whose weight is 0 are edges that are contained in the cores. Therefore, there must be some edge e' on the cycle R that is incident to some vertex of T', is not contained in T', and is not contained in any core of F_{≤j}. The weight of such an edge must then be 2. But, since the weight of the edge e is 1, this contradicts the minimality of M_j.

Consider again some connected component C of the graph G[Λ_{≤j}], and recall that M_j is a minimum spanning forest of G[Λ_{≤j}]. Let M^C_j ⊆ M_j be the unique tree in the forest M_j that spans C. From the above two observations it is easy to see that, if we delete all weight-2 edges from M^C_j, then we obtain k_C connected components. Therefore, we obtain the following immediate corollary.

Corollary 3.33 For every connected component C of G[Λ_{≤j}], the tree M^C_j contains at most k_C − 1 edges of weight 2.

Consider now any path P in the forest M_j. Recall that all edges of P have weights in {0, 1, 2}. For x ∈ {0, 1, 2}, an x-block of the path P is a maximal subpath of P such that every edge on the subpath has weight exactly x. We need the following observation on the structure of such paths.

Observation 3.34 Let P be any path in the spanning forest M_j, and let C be the connected component of G[Λ_{≤j}] containing P. Then:
1. there are at most k_C − 1 edges of weight 2 in P;
2. the number of 0-blocks in P is at most k_C; and
3. the number of 1-blocks in P is at most 2·k_C, with each 1-block having length at most O(log n).

Proof: The first assertion follows immediately from Corollary 3.33, and the second assertion follows immediately from Observation 3.31 and the fact that at most k_C cores of F_{≤j} are contained in C. In order to prove the third assertion, let Σ be the collection of paths that is obtained by removing all weight-2 edges from P. Then, from Corollary 3.33, |Σ| ≤ k_C. Moreover, since every tree in T_{≤j} contains exactly one vertex of ˆK_{≤j}, for each such path σ ∈ Σ, there is at most one core K ∈ F_{≤j}, such that the edges of K lie on σ, and, if such a core exists, then, from Observation 3.31, the edges of K appear contiguously on σ. Therefore, every path σ contains at most two 1-blocks, and the total number of 1-blocks on P is at most 2·k_C. Since every tree in T_{≤j} has depth at most O(log n), the length of each such 1-block is at most O(log n).

The following corollary follows immediately from Observation 3.34 and the fact that, for every connected component C of G[Λ_{≤j}], k_C = O(|V(C)|·γ(n)/h_j).

Corollary 3.35 Let P be any path contained in the forest M_j, and let C be the connected component of G[Λ_{≤j}] containing P. Then the total number of edges of P that have non-zero weight is at most Õ(k_C) ≤ Õ(|V(C)|·γ(n)/h_j).

We are now ready to describe an algorithm for processing a query Short-Path(j,u,v). Our first step is to check whether u and v are connected in M_j. This can be done in time O(log n), using the connect(u,v) query in the top tree T_{M_j} data structure. If u and v are not connected in M_j, then we terminate the algorithm and report that u and v are not connected in G[Λ_{≤j}]. We assume from now on that u and v are connected in M_j. We denote by P the unique path connecting u and v in M_j.
Note that our algorithm does not compute the path P explicitly, as it may be too long. We think of the path P as being oriented from u to v. Let B_1, ..., B_z be the sequence of all maximal 0-blocks on path P; we assume that the blocks are indexed in the order of their appearance on P. For 1 ≤ i ≤ z, we denote by b_i and by b'_i the first and the last endpoint of B_i, respectively. For 1 ≤ i < z, let A_i be the sub-path of P connecting b'_i to b_{i+1}; let A_0 be the sub-path of P connecting u to b_1, and let A_z be the sub-path of P connecting b'_z to v. The next step in our algorithm is to identify all endpoints of the 0-blocks on path P: that is, the algorithm will find the parameter z (the number of the maximal 0-blocks on P), and, for all 1 ≤ i ≤ z, it will compute the endpoints b_i, b'_i of block B_i. We do so using queries minedge, weight, and jump to the top tree T_{M_j} data structure. Specifically, we start by running query minedge(u,v) on the top tree T_{M_j}. Let e = (x,y) be the returned edge. If the weight of the edge is greater than 0, then there are no 0-weight edges on path P, and so we can skip the current step. Assume therefore that the weight of the edge e is 0. Let B_i be the 0-block containing e (note that we do not know the index i). In order to find the first endpoint b_i of the 0-block B_i, we perform a binary search, using queries jump(x,u,d) for various values of d. If a_d is the vertex returned in response to query jump(x,u,d), then we use query weight(x,a_d) to find the total weight of all edges on the sub-path of P connecting x to a_d. If the returned weight is 0, then we know that a_d ∈ B_i, and we increase the value of d; otherwise, we know that the sub-path of P between b_i and x contains fewer than d edges, and we decrease d accordingly.
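The search just described, with its recursion on the two remaining sub-paths, can be sketched as follows. The top-tree queries are replaced by direct array access over the edge weights of P (the helper names weight and min_edge_pos are illustrative stand-ins, not the paper's notation):

```python
def zero_blocks(weights):
    """Return the maximal 0-blocks of a path, as (start, end) vertex indices.

    weights[i] is the weight of the edge between path vertices i and i+1.
    Mirrors the minedge/weight/jump scheme described above.
    """
    def weight(i, j):                     # stand-in for the weight(x, y) query
        return sum(weights[min(i, j):max(i, j)])

    def min_edge_pos(i, j):               # stand-in for the minedge(x, y) query
        return min(range(i, j), key=lambda k: weights[k])

    def search(lo, hi, blocks):
        if lo >= hi:
            return
        k = min_edge_pos(lo, hi)
        if weights[k] > 0:                # no 0-weight edge on this sub-path
            return
        # binary search for the first endpoint b of the 0-block containing edge k
        b_lo, b_hi = lo, k
        while b_lo < b_hi:
            mid = (b_lo + b_hi) // 2
            if weight(mid, k) == 0:
                b_hi = mid
            else:
                b_lo = mid + 1
        # symmetric binary search for the last endpoint b'
        e_lo, e_hi = k + 1, hi
        while e_lo < e_hi:
            mid = (e_lo + e_hi + 1) // 2
            if weight(k + 1, mid) == 0:
                e_lo = mid
            else:
                e_hi = mid - 1
        search(lo, b_lo, blocks)          # recurse on the two remaining sub-paths
        blocks.append((b_lo, e_lo))
        search(e_lo, hi, blocks)

    blocks = []
    search(0, len(weights), blocks)
    return blocks

print(zero_blocks([2, 0, 0, 1, 0, 2]))  # -> [(1, 3), (4, 5)]
```

With the real top tree, each weight and jump call costs O(log n), which is where the O(log n)-per-iteration overhead in the running-time analysis comes from.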
Therefore, using binary search, in O(log n) iterations, we can compute the endpoint b_i of the block B_i, and we can compute the other endpoint b'_i of the block similarly. The total time needed to compute both endpoints is therefore O(log² n). Once we compute the endpoints b_i, b'_i, we recursively apply the same algorithm to the sub-path of P connecting u to b_i, and to the sub-path of P connecting b'_i to v. We conclude that we can compute the number z of the maximal 0-blocks on path P, and the endpoints of these blocks, in time O(z·log² n). Once the endpoints of all 0-blocks are computed, we compute the paths A_0, ..., A_z, using jump queries to the top tree. Next, for all 1 ≤ i ≤ z, we run query Short-Core-Path(K_i, b_i, b'_i), where K_i is the core of F_{≤j} containing b_i and b'_i, to compute a path B'_i in core K_i that connects b_i to b'_i and has length at most (γ(n))^{O(q)}. We then return a u-v path that is obtained by concatenating the paths A_0, B'_1, A_1, ..., B'_z, A_z. We use the following lemma to bound the length of the resulting path.

Lemma 3.36 Given a query Short-Path(j,u,v), the above algorithm either correctly reports that u and v are not connected in G[Λ_{≤j}], in time O(log n), or it returns a simple u-v path P' of length at most O(|V(C)|·(γ(n))^{O(q)}/h_j), in time O(|P'|·(γ(n))^{O(q)}), where C is the connected component of G[Λ_{≤j}] containing u and v.

Proof: It is immediate to see that, if u and v are not connected in G[Λ_{≤j}], then the algorithm reports this correctly in time O(log n). Therefore, we assume from now on that some connected component C of G[Λ_{≤j}] contains u and v. As before, we denote by P the unique u-v path in the forest M_j. Note that, from Corollary 3.35, the total number of edges on P with non-zero weight is at most Õ(|V(C)|·γ(n)/h_j).
In particular, the number of maximal 0-blocks on P must be bounded by z ≤ Õ(|V(C)|·γ(n)/h_j). Since we are guaranteed that, for all 1 ≤ i ≤ z, the length of the path B'_i is bounded by (γ(n))^{O(q)}, we conclude that the length of the returned path P' is at most O(|V(C)|·(γ(n))^{O(q)}/h_j). In order to bound the running time, recall that detecting the endpoints of the 0-blocks takes time O(z·log² n). Computing all vertices on the paths A_0, ..., A_z takes time O(log n) per vertex. Lastly, computing the paths B'_1, ..., B'_z takes total time at most z·(γ(n))^{O(q)}. Altogether, the running time is bounded by O(|P'|·(γ(n))^{O(q)}).

This section is dedicated to the proof of Theorem 1.1. The main idea is identical to that of [CK19], who use the framework of [Ber17], combined with a weaker version of the LCD data structure. The improvements in the guarantees that we obtain follow immediately by plugging the new LCD data structure from Section 3 into their algorithm. We still include a proof for completeness, since our LCD data structure is defined somewhat differently. As is the standard practice in such algorithms, we treat each distance scale separately. We prove the following theorem that allows us to handle a single distance scale.

Theorem 4.1 There is a deterministic algorithm, that, given a simple undirected n-vertex graph G with weights on edges that undergoes edge deletions, together with a source vertex s ∈ V(G) and parameters ǫ ∈ (1/n, 1) and D > 0, supports the following queries:
• dist-query_D(s,v): in time O(1), either correctly report that dist_G(s,v) > 2D, or return an estimate ˜dist(s,v). Moreover, if D ≤ dist_G(s,v) ≤ 2D, then dist_G(s,v) ≤ ˜dist(s,v) ≤ (1+ǫ)·dist_G(s,v) must hold.
• path-query_D(s,v): either correctly report that dist_G(s,v) > 2D in time O(1), or return an s-v path P in time Ô(|P|).
Moreover, if D ≤ dist_G(s,v) ≤ 2D, then the length of P must be bounded by (1+ǫ)·dist_G(s,v). Path P may not be simple, but an edge may appear at most once on P.
The total update time of the algorithm is Ô(n²/ǫ²).

We provide a proof of Theorem 4.1 below, after we complete the proof of Theorem 1.1 using it, via standard arguments. We will sometimes refer to edge weights as edge lengths, and we denote the length of an edge e ∈ E(G) by ℓ(e). We assume that the minimum edge weight is 1 by scaling, so the maximum edge weight is L. For all 0 ≤ i ≤ ⌈log(Ln)⌉, we maintain a data structure from Theorem 4.1 with the distance parameter D_i = 2^i. Therefore, the total update time of our algorithm is bounded by Ô(n²·(log L)/ǫ²), as required. In order to respond to a query dist-query(s,v), we perform a binary search on the values D_i, and run queries dist-query_{D_i}(s,v) in the corresponding data structures. Clearly, we only need to perform at most O(log log(Ln)) such queries in order to respond to query dist-query(s,v). In order to respond to path-query(s,v), we first run the algorithm for dist-query(s,v), in order to identify a distance scale D_i for which D_i ≤ dist_G(s,v) ≤ 2D_i holds. We then run query path-query_{D_i}(s,v) in the corresponding data structure. In order to complete the proof of Theorem 1.1, it now remains to prove Theorem 4.1, which we do in the remainder of this section.

Recall that we have denoted by ℓ(e) the length/weight of the edge e of G. We use standard edge-weight rounding to show that we can assume that D = ⌈n/ǫ⌉ and that all edge lengths are integers between 1 and 4D. In order to achieve this, we discard all edges whose length is greater than 2D, and we set the length of each remaining edge e to be ℓ'(e) = ⌈nℓ(e)/(ǫD)⌉. For every pair u, v of vertices, let dist'(u,v) denote the distance between u and v with respect to the new edge length values.
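Before continuing, the rounding step can be sanity-checked numerically on a single path. This is only a sketch: the values of n, ǫ, D and the edge lengths below are arbitrary illustrative choices, and the assertion checks the per-path inequality used next (the shortest-path distance dist' can only improve on the upper bound, since the rounded shortest path need not follow the original one).

```python
import math

def round_lengths(lengths, n, eps, D):
    """Drop edges longer than 2D, rescale the rest to integer lengths."""
    return [math.ceil(n * l / (eps * D)) for l in lengths if l <= 2 * D]

# One fixed s-v path: its length under the new weights sits between
# (n/(eps*D)) * dist and (n/(eps*D)) * dist + n, since each of the at
# most n edges is rounded up by less than 1.
n, eps, D = 100, 0.5, 200
lengths = [3.7, 12.2, 1.0, 55.9]
dist = sum(lengths)
dist_rounded = sum(round_lengths(lengths, n, eps, D))
scale = n / (eps * D)
assert scale * dist <= dist_rounded <= scale * dist + n
print(dist, dist_rounded)  # -> 72.8 74
```

With these particular values the scale factor is exactly 1, so the rounded length 74 exceeds the true length 72.8 by less than the number of edges, as the analysis requires.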
Notice that, for all u, v:

(n/(ǫD))·dist(u,v) ≤ dist'(u,v) ≤ (n/(ǫD))·dist(u,v) + n,

since the shortest u-v path contains at most n edges. Moreover, if dist(u,v) ≥ D, then n ≤ dist(u,v)·n/D, so:

dist'(u,v) ≤ (n/(ǫD))·dist(u,v) + (n/D)·dist(u,v) ≤ (n/(ǫD))·dist(u,v)·(1+ǫ).

In particular, if D ≤ dist(u,v) ≤ 2D, then ⌈n/ǫ⌉ ≤ dist'(u,v) ≤ 4⌈n/ǫ⌉. Therefore, from now on, we can assume that D = ⌈n/ǫ⌉, and, for simplicity, we will denote the new edge lengths by ℓ(e) and the corresponding distances between vertices by dist(u,v). From the above discussion, all edge lengths are integers between 1 and 4D. It is now enough to prove Theorem 4.1 for this setting, provided that we ensure that, whenever D ≤ dist(s,v) ≤ 2D holds, we return a path of length at most (1+ǫ/3)·dist(s,v) in response to query path-query(v).

The Algorithm. Let m denote the initial number of edges in the input graph G. We partition the edges of G into λ = ⌊log(4D)⌋ + 1 classes, where, for 0 ≤ i ≤ λ, edge e belongs to class i iff 2^i ≤ ℓ(e) < 2^{i+1}. We denote the set of all edges of G that belong to class i by E_i. Fix an index 0 ≤ i ≤ λ, and let G_i be the sub-graph of G induced by the edges in E_i. We view G_i as an unweighted graph, and maintain the LCD data structure from Theorem 3.4 on G_i, with parameters ∆ = 2 and q = ⌈log^{1/3} n⌉, using total update time Ô(m^{1+1/q}·∆^{2+1/q}) = Ô(m). Recall that γ(n) = exp(O(log^{1/3} n)). We let α = (γ(n))^{O(q)} = Ô(1) be chosen such that, in response to query Short-Path(j,u,v), the LCD data structure must return a path of length at most |V(C)|·α/h_j, where C denotes the connected component of graph G[Λ_{≤j}] containing u and v. We use the parameter τ_i = (n·λ·α·2^i)/(ǫD), that is associated with the graph G_i. This parameter is used to partition the vertices of G into a set of vertices that are heavy with respect to class i, and a set of vertices that are light with respect to class i.
Specifically, we let U_i = {v ∈ V(G_i) : d̃eg_{G_i}(v) ≥ τ_i} be the set of vertices that are heavy for class i, and we let Ū_i = V(G_i) \ U_i be the set of vertices that are light for class i. Next, we define the heavy and the light graph for class i. The heavy graph for class i, denoted by G^H_i, is defined as G_i[U_i]. In other words, its vertex set is the set of all vertices that are heavy for class i, and its edge set is the set of all class-i edges whose both endpoints are heavy for class i. The light graph for class i, denoted by G^L_i, is defined as follows. Its vertex set is V(G_i), and its edge set contains all edges e ∈ E_i such that at least one endpoint of e lies in Ū_i. Notice that we can exploit the LCD data structure to compute the initial graphs G^H_i and G^L_i, and to maintain them, as edges are deleted from G.

Our algorithm exploits the LCD data structure in two ways. First, observe that, from Observation 3.3, for all 1 ≤ i ≤ λ, the total number of edges that ever belong to the light graph G^L_i over the course of the algorithm is bounded by O(n·τ_i). Additionally, we will exploit the Short-Path queries that the LCD data structure supports. Let j_i be the largest integer, such that h_{j_i} ≥ τ_i (recall that h_j is the virtual degree of vertices in layer Λ_j). Given a query Short-Path(j_i, u, v) to the LCD data structure on G_i, where u and v lie in the same connected component C of G^H_i, the data structure must return a simple u-v path in C, containing at most |V(C)|·α/τ_i edges. Abusing the notation, we denote this query by Short-Path(C, u, v) instead.

Let G^L = ∪_{i=1}^λ G^L_i be the light graph for the graph G. Next, we define an extended light graph Ĝ^L, as follows. We start with Ĝ^L = G^L; the vertices of G^L are called regular vertices.
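The heavy/light split for a single class can be sketched as follows. This is an illustrative, static version under assumed names: the paper classifies vertices by the virtual degrees maintained by the LCD data structure, while this sketch simply uses actual degrees in G_i.

```python
def split_heavy_light(vertices, class_edges, tau):
    """Split one weight class into its heavy and light graphs.

    A vertex is heavy when its degree in G_i is at least tau; the heavy
    graph keeps the class-i edges with both endpoints heavy, and the
    light graph keeps those with at least one light endpoint."""
    deg = {v: 0 for v in vertices}
    for (u, v) in class_edges:
        deg[u] += 1
        deg[v] += 1
    heavy = {v for v in vertices if deg[v] >= tau}
    heavy_edges = [(u, v) for (u, v) in class_edges
                   if u in heavy and v in heavy]
    light_edges = [(u, v) for (u, v) in class_edges
                   if u not in heavy or v not in heavy]
    return heavy, heavy_edges, light_edges
```

Note that every class edge lands in exactly one of the two graphs, mirroring the disjointness of E(G^H_i) and E(G^L_i) used later in the analysis.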
Next, for every 1 ≤ i ≤ λ, for every connected component C of G^H_i, we add a vertex v_C to Ĝ^L, that we call a special vertex, or a supernode, and connect it to every regular vertex u ∈ V(C) with an edge of length 1/2. For all 1 ≤ i ≤ λ, we use the CONN-SF data structure on graph G^H_i, in order to maintain its connected components. The total update time of these connectivity data structures is bounded by Õ(mλ) ≤ Õ(m log D). The following observation follows immediately from the assumption that all edge lengths in G are at least 1.

Observation 4.2 Throughout the algorithm, for every vertex v ∈ V(G), dist_{Ĝ^L}(s,v) ≤ dist_G(s,v).

The following theorem was proved in [CK19]; the proof follows the arguments from [Ber17] almost exactly.

Theorem 4.3 (Theorem 4.4 in [CK19]) There is a deterministic algorithm that maintains an approximate single-source shortest-path tree T of graph Ĝ^L from the source s, up to distance D. Tree T is a sub-graph of Ĝ^L, and for every vertex v ∈ V(Ĝ^L) with dist_{Ĝ^L}(s,v) ≤ D, the distance from s to v in T is at most (1+ε/4)·dist_{Ĝ^L}(s,v). The total update time of the algorithm is Õ(nD/ε + |E(Ĝ^L)| + Σ_{e∈E′} D/(ε·ℓ(e))), where E(Ĝ^L) is the set of edges that belong to Ĝ^L at the beginning of the algorithm, and E′ is the set of all edges that are ever present in the graph Ĝ^L.

Recall that D = Θ(n/ε). Since, for all 1 ≤ i ≤ λ, the total number of edges of E_i ever present in Ĝ^L is bounded by O(n·τ_i) = O(n · 8nλα·2^i/(εD)) = Ô(n·2^i) from Observation 3.3, and since the total number of edges incident to the special vertices that are ever present in Ĝ^L is bounded by O(nλ log n) = Õ(n), we get that the running time of the algorithm from Theorem 4.3 is bounded by:

Õ( n^2/ε^2 + Σ_{i=1}^λ |E′_i|·D/(ε·2^i) ) = Ô(n^2/ε^2)

(here E′_i ⊆ E_i denotes the set of class-i edges that are ever present in Ĝ^L). As other components take Ô(m) time, the total update time of the algorithm for Theorem 4.1 is Ô(n^2/ε^2), as required.
It remains to show how the algorithm responds to queries path-query_D(s,v) and dist-query_D(s,v).

Responding to path-query_D(s,v). Given a query path-query_D(s,v), we start by computing the unique simple s-v path P in the tree T given by Theorem 4.3. If vertex v does not lie in T, then dist_{Ĝ^L}(s,v) > D, and so, from Observation 4.2, dist_G(s,v) > D; we report this and terminate. From now on, we assume v ∈ T. Next, we transform the path P in Ĝ^L into an s-v path P* in the original graph G, as follows. Let v_{C_1},...,v_{C_z} be all special vertices that appear on the path P. For 1 ≤ k ≤ z, let u_k be the regular vertex preceding v_{C_k} on P, and let u′_k be the regular vertex following v_{C_k} on P. If C_k is a connected component of a heavy graph G^H_i of class i, we use the query Short-Path(C_k, u_k, u′_k) in the LCD data structure for graph G_i in order to obtain a simple u_k-u′_k path Q_k contained in C_k, that contains at most |V(C_k)|·α/τ_i (unweighted) edges. Then, we replace the vertex v_{C_k} with the path Q_k on path P. (We note that our setting is slightly different from that of [Ber17], who used actual vertex degrees, and not their virtual degrees, in the definitions of the light and the heavy graphs. Our definition is identical to that of [CK19], though they did not define the virtual degrees explicitly. However, they used Procedure Proc-Degree-Pruning in order to define the heavy and the light graphs, and so their definition of both graphs is identical to ours, except for the specific choice of the thresholds τ_i.) As we can find the path P in time O(|P|), by following the tree T, and since the query time to compute each path Q_k is bounded by |Q_k|·(γ(n))^{O(q)} = Ô(|Q_k|), the total time to compute path P* is bounded by Ô(|E(P*)|).

We now bound the length of the path P*. Recall that, from Theorem 4.3 and Observation 4.2, path P has length at most (1+ε/4)·dist_{Ĝ^L}(s,v) ≤ (1+ε/4)·dist_G(s,v).
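The supernode-replacement step that turns P into P* can be sketched as follows. This is a simplified illustration with assumed names: `short_path(C, u, u2)` stands in for the Short-Path query of the LCD data structure, and vertices are compared by identity.

```python
def splice_supernodes(path, is_supernode, short_path):
    """Turn a tree path P in the extended light graph into a path P* in G:
    every supernode v_C, with regular neighbors u and u2 on P, is replaced
    by a short u-u2 path inside the component C."""
    out = []
    i = 0
    while i < len(path):
        v = path[i]
        if is_supernode(v):
            u, u2 = path[i - 1], path[i + 1]
            # short_path returns a u-u2 path inside C; drop its endpoints,
            # which already appear on the spliced path, then advance past u2.
            out.extend(short_path(v, u, u2)[1:-1])
            out.append(u2)
            i += 2
        else:
            out.append(v)
            i += 1
    return out
```

For example, splicing the path s-u-v_C-w-t with a component path u-x-y-w yields s-u-x-y-w-t, exactly as in the construction of P*.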
For each 1 ≤ i ≤ λ, let C_i = {C_k : v_{C_k} ∈ P and C_k is a connected component of G^H_i}. Let Q_i be the set of all corresponding paths Q_k of C_k ∈ C_i. We can bound the total length of all paths in Q_i as follows:

Σ_{Q∈Q_i} ℓ(Q) ≤ Σ_{C_k∈C_i} |Q_k|·2^{i+1} ≤ Σ_{C_k∈C_i} |V(C_k)|·(α/τ_i)·2^{i+1} ≤ Σ_{C_k∈C_i} |V(C_k)|·εD/(4nλ) ≤ εD/(4λ)

(we have used the fact that τ_i = 8nλα·2^i/(εD), and that all components in C_i are vertex-disjoint). Summing up over all λ classes, the total length of all paths Q_k corresponding to the supernodes on path P is at most εD/4. We conclude that ℓ(P*) ≤ ℓ(P) + εD/4. If dist_G(s,v) ≥ D, we have that ℓ(P*) ≤ (1+ε/4)·dist_G(s,v) + ε·dist_G(s,v)/4 ≤ (1+ε/2)·dist_G(s,v). Notice that path P* may not be simple, since a vertex may belong to several heavy graphs G^H_i. However, for every edge e ∈ E(G), there is a unique index i such that e ∈ G_i, and the sets of edges of the heavy graph G^H_i and the light graph G^L_i are disjoint from each other. In particular, if e ∈ E(G^H_i), then e ∉ Ĝ^L. Since path P is simple, all graphs C_1,...,C_z are edge-disjoint from each other, and their edges are also disjoint from E(Ĝ^L). We conclude that an edge may appear at most once on P*.

Responding to dist-query_D(s,v). Given a query dist-query_D(s,v), we simply return dist′(s,v) = dist_T(s,v) + εD/4, in time O(1). Recall that dist′(s,v) = dist_T(s,v) + εD/4 ≥ ℓ(P*) ≥ dist_G(s,v) (here, P* is the path that we would have returned in response to query path-query_D(s,v), though we only use this path for the analysis and do not compute it explicitly). As before, if dist_G(s,v) ≥ D, then dist′(s,v) ≤ (1+ε/2)·dist_G(s,v).

5 Decremental All-Pairs Shortest Paths

In this section, we prove Theorem 1.2 by combining two algorithms.
We use the function γ(n) = exp(O(log^{3/4} n)) from Theorem 3.4. The first algorithm, summarized in the next theorem, is faster in the large-distance regime:

Theorem 5.1 (APSP for large distances) There is a deterministic algorithm that, given parameters 0 < ε < 1/4 and D > 0, and a simple unweighted undirected n-vertex graph G that undergoes edge deletions, maintains a data structure using total update time Ô(n^3/(ε^3·D)), and supports the following queries:

• dist-query_D(u,v): either correctly declare that dist_G(u,v) > D in O(log n) time, or return an estimate dist′(u,v) in O(log n) time. If D ≤ dist_G(u,v) ≤ 2D, then dist_G(u,v) ≤ dist′(u,v) ≤ (1+ε)·dist_G(u,v) must hold.

• path-query_D(u,v): either correctly declare that dist_G(u,v) > D in O(log n) time, or return a u-v path P of length at most 9D in Ô(|P|) time. If D ≤ dist_G(u,v) ≤ 2D, then |P| ≤ (1+ε)·dist_G(u,v) must hold.

The second algorithm is faster for the short-distance regime.

Theorem 5.2 (APSP for small distances) There is a deterministic algorithm that, given parameters 1 ≤ k ≤ o(log^{1/4} n) and D > 0, and a simple unweighted undirected n-vertex graph G that undergoes edge deletions, maintains a data structure using total update time Ô(n^{2+4/k}·D), and supports the following queries:

• dist-query_D(u,v): in time O(1), either correctly establish that dist_G(u,v) > D, or correctly establish that dist_G(u,v) ≤ 3·2^k·D + (γ(n))^{O(k)}.

• path-query_D(u,v): either correctly establish that dist_G(u,v) > D in O(1) time, or return a u-v path P of length at most 3·2^k·D + (γ(n))^{O(k)}, in time O(|P|) + (γ(n))^{O(k)}.

We prove Theorems 5.1 and 5.2 below, after we complete the proof of Theorem 1.2 using them. Let ε = 1/4, and D* = n^{1/2 − 2/k}. For 1 ≤ i ≤ ⌈log_{1+ε} n⌉, let D_i = (1+ε)^i.
For all 1 ≤ i ≤ ⌈log_{1+ε} n⌉, if D_i ≤ D*, then we maintain the data structure from Theorem 5.2 with the value D = D_i and the input parameter k, and otherwise we maintain the data structure from Theorem 5.1 with the bound D = D_i and the parameter ε. Since, from the statement of Theorem 1.2, k ≤ o(log^{1/4} n) holds, it is easy to verify that the total update time for maintaining these data structures is bounded by Ô(n^{2.5 + 2/k}).

Given a query dist-query(u,v), we perform a binary search on indices i, in order to find an index i for which dist_G(u,v) > D_i and dist_G(u,v) ≤ 3·2^k·D_{i+1} + (γ(n))^{O(k)} hold, by querying the data structures from Theorems 5.2 and 5.1. We then return d̃ist(u,v) = 3·2^k·D_{i+1} + (γ(n))^{O(k)} as a response to the query. Notice that we are guaranteed that d̃ist(u,v) ≤ 4·2^k·dist_G(u,v) + Ô(1), as required. As there are O(log n) possible values of D_i, the query time is O(log n · log log n).

Given a query path-query(u,v), we start by checking whether u and v are connected, for example by running a dist-query_D(u,v) query with the largest distance scale D = (1+ε)^{⌈log_{1+ε} n⌉} ≥ n on the data structure from Theorem 5.1. If u and v are not connected, then we can report this in time O(log n). Otherwise, we perform a binary search on indices i exactly as before, to find an index i for which dist_G(u,v) > D_i and dist_G(u,v) ≤ 3·2^k·D_{i+1} + (γ(n))^{O(k)} hold. Then, we run query path-query_{D_{i+1}}(u,v) in the appropriate data structure, and obtain a u-v path P of length at most 3·2^k·D_{i+1} + (γ(n))^{O(k)} ≤ 4·2^k·dist_G(u,v) + Ô(1), in time Ô(|P|).

5.1 Proof of Theorem 5.1

The goal of this section is to prove Theorem 5.1. The algorithm easily follows by combining our algorithm for SSSP with the algorithm of [GWN20] for APSP (that simplifies the algorithm of [FHN16] for the same problem).

Data Structures and Update Time. Our starting point is an observation of [GWN20], that we can assume w.l.o.g.
that throughout the edge-deletion sequence, the graph G remains connected. Specifically, we will maintain a graph G*, starting with G* = G. Whenever an edge e is deleted from G, as part of the input update sequence, if the removal of e does not disconnect the graph G, then we delete e from G* as well. Otherwise, we ignore this edge-deletion operation, and edge e remains in G*. It is easy to see that, in the latter case, edge e is a bridge in G*, and will remain so until the end of the algorithm. It is also immediate to verify that, if u,v are two vertices that lie in the same connected component of G, then dist_G(u,v) = dist_{G*}(u,v). Moreover, if P is any (not necessarily simple) path connecting u to v in graph G*, such that an edge may appear at most once on P, then P is also a u-v path in graph G.

Throughout the algorithm, we use two parameters: R_c = 2εD and R_d = 4D. We maintain the following data structures.

• Data structure CONN-SF(G) for dynamic connectivity. Recall that the data structure has total update time Õ(m), and it supports connectivity queries conn(G,u,v): given a pair u,v of vertices of G, return “yes” if u and v are connected in G, and “no” otherwise. The running time to respond to each such query is O(log n/log log n).

• A collection S ⊆ V(G) of source vertices, with |S| ≤ O(n/R_c) ≤ O(n/(εD)).

• For every source vertex s ∈ S, the data structure from Theorem 4.1, in graph G*, with source vertex s, distance bound R_d, and accuracy parameter ε; its total update time is Ô(n^2/ε^2). Since we will maintain O(n/(εD)) such data structures, the total update time for maintaining them is Ô(n^3/(ε^3·D)).

Consider now some source vertex s ∈ S, and the data structure from Theorem 4.1 that we maintain for it.
Since graph G is unweighted, all edges of G belong to a single class, and so the algorithm will only maintain a single heavy graph (instead of maintaining a separate heavy graph for every edge class), and a single light graph. In particular, this ensures that, at any time during the algorithm's execution, all cores in ∪_j F_j are vertex-disjoint. In order to simplify the notation, we denote the extended light graph that is associated with graph G* by Ĝ^L; recall that this graph does not depend on the choice of the vertex s. Recall that, from Observation 4.2, throughout the algorithm, for every vertex v ∈ V(G*), dist_{Ĝ^L}(s,v) ≤ dist_{G*}(s,v) holds. Additionally, the data structure maintains an ES-Tree, that we denote by τ(s), in graph Ĝ^L, that is rooted at the vertex s, and has depth R_d. We say that the source s covers a vertex v ∈ V(G) iff the distance from v to s in the tree τ(s) is at most R_c.

Our algorithm will maintain, together with each vertex v ∈ V(G), a list of all source vertices s ∈ S that cover v, together with a pointer to the location of v in the tree τ(s). We also maintain a list of all source vertices s′ ∈ S with v ∈ τ(s′), together with a pointer to the location of v in τ(s′). These data structures can be easily maintained along with the trees τ(s) for s ∈ S. The total update time for maintaining the ES-Trees subsumes the additional required update time.

We now describe an algorithm for maintaining the set S of source vertices. We start with S = ∅. Throughout the algorithm, vertices may only be added to S, but they may never be deleted from S.
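The maintenance rule for the source set S can be sketched as follows. This is a static snapshot under assumed names: `covered_by(v, s)` abstracts the test dist_{τ(s)}(s,v) ≤ R_c, and in the dynamic setting the scan is triggered whenever a vertex stops being covered.

```python
def maintain_sources(vertices, covered_by, sources):
    """Greedy covering rule: whenever some vertex is covered by no current
    source, that vertex is itself added as a new source."""
    for v in vertices:
        if not any(covered_by(v, s) for s in sources):
            sources.append(v)  # v now covers itself at distance 0
    return sources
```

On a path with 10 vertices and covering radius 2, the rule picks every third vertex as a source, illustrating why |S| ≤ O(n/R_c).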
At the beginning, before any edge is deleted from G, we initialize the data structure as follows. As long as some vertex v ∈ V(G) is not covered by any source, we select any such vertex v, add it to the set S of source vertices, and initialize the data structure τ(v) for the new source vertex v. This initialization algorithm terminates once every vertex of G is covered by some source vertex in S. As edges are deleted from G and distances between vertices increase, it is possible that some vertex v ∈ V(G) stops being covered by the vertices of S. Whenever this happens, we add such a vertex v to the set S of source vertices, and initialize the corresponding data structure τ(v). We need the following claim.

Claim 5.3 Throughout the algorithm, |S| ≤ O(n/R_c) holds.

Proof: For a source vertex s ∈ S, let C(s) be the set of all vertices at distance at most R_c/2 from s in graph Ĝ^L. From the algorithm's description, and since the distances between regular vertices in the graph Ĝ^L may only grow over the course of the algorithm, for every pair s,s′ ∈ S of source vertices, dist_{Ĝ^L}(s,s′) ≥ R_c holds throughout the algorithm, and so C(s) ∩ C(s′) = ∅. Since graph G* is a connected graph throughout the algorithm, so is graph Ĝ^L. It is then easy to verify that, if |S| ≥ 2, then for every source vertex s ∈ S, |C(s)| ≥ Ω(R_c) (we have used the fact that graph G is unweighted, and so, in graph Ĝ^L, all edges have lengths in {1/2, 1}). It follows that |S| ≤ O(n/R_c).

Responding to path-query_D(x,y) queries. Suppose we are given a query path-query_D(x,y), where x,y are two vertices of G. Recall that our goal is to either correctly establish that dist_G(x,y) > D, or to return an x-y path P in G, of length at most 9D.
We also need to ensure that, if D ≤ dist_G(x,y) ≤ 2D, then |P| ≤ (1+8ε)·dist_G(x,y) (at the end of this subsection, we scale the accuracy parameter down by a constant factor in order to obtain the guarantees of Theorem 5.1). Our first step is to use query conn(G,x,y) in data structure CONN-SF(G), in order to check whether x and y lie in the same connected component of G. If this is not the case, then we report that x and y are not connected in G. Therefore, we assume from now on that x and y are connected in G. Recall that the running time for query conn(G,x,y) is O(log n/log log n).

Recall that our algorithm ensures that there is some source vertex s ∈ S that covers x. Therefore, dist_{Ĝ^L}(s,x) ≤ R_c. It is also easy to verify that dist_{Ĝ^L}(x,y) ≤ dist_{G*}(x,y) must hold. Therefore, if dist_G(x,y) ≤ D, then y ∈ τ(s) must hold. We can find the source vertex s that covers x, and check whether y ∈ τ(s), in time O(1), using the data structures that we maintain. If y ∉ τ(s), then we are guaranteed that dist_G(x,y) > D. We terminate the algorithm and report this fact.

Therefore, we assume from now on that y ∈ τ(s). We compute the unique simple x-y path P in the tree τ(s), by retracing the tree from x and y until we find their lowest common ancestor; this can be done in time O(|P|). The remainder of the algorithm is similar to that for responding to queries in the SSSP data structure. We denote by v_{C_1},...,v_{C_z} the sequence of all special vertices that appear on the path P. For 1 ≤ k ≤ z, let u_k be the regular vertex preceding v_{C_k} on P, and let u′_k be the regular vertex following v_{C_k} on P. We then use queries Short-Path(C_k, u_k, u′_k) to the LCD data structure, in order to obtain a simple u_k-u′_k path Q_k contained in C_k. Then, we replace the vertex v_{C_k} with the path Q_k on path P.
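The retracing procedure that recovers the unique tree path between two vertices can be sketched as follows, assuming parent pointers and depth labels of the kind a shortest-path tree maintains (all names are illustrative):

```python
def tree_path(parent, depth, u, v):
    """Recover the unique u-v path in a rooted tree: repeatedly lift the
    deeper of the two endpoints to its parent until the walks meet at the
    lowest common ancestor, then join the two halves."""
    left, right = [u], [v]
    while left[-1] != right[-1]:
        if depth[left[-1]] >= depth[right[-1]]:
            left.append(parent[left[-1]])
        else:
            right.append(parent[right[-1]])
    # Reverse the second half and drop the duplicated meeting vertex.
    return left + right[-2::-1]
```

The number of parent steps equals the length of the returned path, so the running time is O(|P|), as stated above.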
As in the analysis of the algorithm for SSSP, the running time of this algorithm is bounded by Ô(|E(P*)|), and the length of the path P* is bounded by |P| + εR_d ≤ (dist_G(x,y) + 2R_c) + 4εD = dist_G(x,y) + 8εD. Since |P| ≤ 2R_d ≤ 8D, this is bounded by 9D. Moreover, if D ≤ dist_G(x,y) ≤ 2D, then we are guaranteed that the length of P* is at most (1+8ε)·dist_G(x,y). The running time of the algorithm is O(log n) if it declares that dist_G(x,y) > D, and it is bounded by Ô(|P*|) if a path P* is returned.

We note that every edge may appear at most once on path P*. Indeed, an edge of G* may belong to the heavy graph, or to the extended light graph Ĝ^L, but not to both of them. Therefore, an edge of P may not lie on any of the paths in {Q_1,...,Q_z}. Moreover, since path P is simple, the connected components C_1,...,C_z of the heavy graph are all distinct, and so the paths Q_1,...,Q_z must be disjoint from each other. Therefore, every edge may appear at most once on path P*. As observed before, this means that P* is contained in the graph G.

Responding to dist-query_D(x,y). The algorithm is similar to that for path-query_D(x,y). As before, our first step is to use query conn(G,x,y) in data structure CONN-SF(G), in order to check whether x and y lie in the same connected component of G. If this is not the case, then we report that x and y are not connected in G. Therefore, we assume from now on that x and y are connected in G. Recall that the running time for query conn(G,x,y) is O(log n/log log n).

As before, we find a source s that covers vertex x, and check whether y ∈ τ(s), in time O(1). If this is not the case, then we correctly report that dist_G(x,y) > D, and terminate the algorithm. Otherwise, we return an estimate dist′(x,y) = dist_{Ĝ^L}(x,s) + dist_{Ĝ^L}(y,s) + 4εD. This can be done in time O(1), by reading the distance labels of x and y in the tree τ(s).
From the above arguments, we are guaranteed that there is an x-y path P* in G whose length is at most dist′(x,y), so dist_G(x,y) ≤ dist′(x,y) must hold. Notice that dist_{Ĝ^L}(y,s) ≤ dist_{Ĝ^L}(x,s) + dist_{Ĝ^L}(x,y) ≤ R_c + dist_G(x,y). Therefore, dist′(x,y) ≤ 2R_c + 4εD + dist_G(x,y) ≤ 8εD + dist_G(x,y). Therefore, if dist_G(x,y) ≥ D, then dist′(x,y) ≤ (1+8ε)·dist_G(x,y) must hold. In order to obtain the guarantees required in Theorem 5.1, we use the parameter ε′ = ε/8, and run the algorithm described above while using ε′ instead of ε. It is easy to verify that the resulting algorithm provides the desired guarantees.

5.2 Proof of Theorem 5.2

In this section, we prove Theorem 5.2. Recall that we are given a simple unweighted graph G undergoing edge deletions, a parameter k ≥ 1, and a distance bound D. We set ∆ = n^{4/k} and q = 10k. Our data structure is based on the LCD data structure from Theorem 3.4. We invoke the algorithm from Theorem 3.4 on the input graph G, with parameters ∆ and q. Recall that the algorithm maintains a partition of the vertices of G into layers Λ_1,...,Λ_{r+1}, and notice that r ≤ k+1. Let α = (γ(n))^{O(q)} be chosen such that, in response to the Short-Core-Path and To-Core-Path queries, the length of the path returned by the LCD data structure is guaranteed to be at most α. For every index 1 < j ≤ r, we define two distance parameters: R^d_j, called a distance radius, and R^c_j, called a covering radius, as follows:

R^d_j = 2^{r−j}·(3D + 2αk) and R^c_j = R^d_j − 2D.

Note that R^d_j ≤ 3·2^{k−1}·D + 2^k·αk = O(D·(γ(n))^{O(k)}) for all j > 1. (As Λ_1 = ∅, we only give the bound for j > 1.) Recall that the LCD data structure maintains a collection F_j of cores for each level j > 1. We need the following key concept:

Definition. A vertex v ∈ Λ_j is a far vertex iff dist_G(v, Λ_{<j}) > 2D. A core K ∈ F_j is a far core iff every vertex of K is a far vertex.

ES-Trees Rooted at Far Cores. In this section, we define additional data structures that maintain ES-Trees that are rooted at the far cores, and analyze their total update time.
Fix a layer 1 < j ≤ r, and let K ∈ F_j be a core in layer j that is a far core. Let Z^K_j be the graph obtained from Z_j by adding a source vertex s_K, and adding, for every vertex v ∈ V(K), an edge (s_K, v). Whenever a core K is created in layer j, we check whether K is a far core. If this is the case, then we initialize an ES-Tree T_K in graph Z^K_j, with source s_K, and distance bound (R^d_j + 1). We maintain this data structure until core K is destroyed. Additionally, whenever an existing core K becomes a far core for the first time, we initialize the data structure T_K, and maintain it until K is destroyed. Observe that graph Z^K_j may undergo both edge insertions and deletions. As before, an edge may be inserted into Z^K_j only when some vertex x is moved from a layer Λ_{j′} with j′ < j to a layer with index at least j. Recall that the total update time of the LCD data structure is bounded by Ô(m^{1+1/q}·∆^{1+1/q}) ≤ Ô(m·n^{4/k}), as q = 10k and ∆ = n^{4/k}. The remaining data structures take total update time at most Ô(n^{2+4/k}·D). Therefore, the total update time of the algorithm is bounded by Ô(n^{2+4/k}·D).

For any vertex v ∈ Λ_{≥j}, we say that v is covered by an ES-Tree T_K iff dist_{Z_j}(V(K), v) ≤ R^c_j (i.e., dist_{Z^K_j}(s_K, v) ≤ R^c_j + 1). For each v ∈ Λ_{≥j}, we maintain a list of all ES-Trees T_K that cover it. Within the list of v, we maintain the core K ∈ F_{j_v} from the smallest layer index j_v such that T_K covers v. These indices can be explicitly maintained using a standard dictionary data structure, such as a balanced binary search tree. The time for maintaining such lists for all vertices is clearly subsumed by the time for maintaining the ES-Trees.

Responding to path-query_D(u,v). Given a pair of vertices u and v, let K_u be the core from the smallest level j_u such that T_{K_u} covers u, and let K_v be the core from the smallest level j_v such that T_{K_v} covers v. Assume w.l.o.g. that j_u ≤ j_v. If v ∉ T_{K_u}, then we report that dist_G(u,v) > D. Otherwise, we compute the unique u-v path P in the tree T_{K_u}.
This can be done in time O(|P|·log n), as follows. We maintain two current vertices u′,v′, starting with u′ = u and v′ = v. In every iteration, if the distance of u′ from the root of T_{K_u} is less than the distance of v′ from the root, we move v′ to its parent in the tree; otherwise, we move u′ to its parent. We continue this process until we reach a vertex z that is a common ancestor of both u and v. We denote the resulting u-v path by P. Notice that so far the running time of the algorithm is O(|E(P)|). Next, we consider two cases. First, if z is not the root of the tree T_{K_u}, then P is a path in graph G, and we return P. Otherwise, the root s_{K_u} of the tree lies on path P. We let a and b be the vertices lying immediately before and immediately after s_{K_u} on P. We compute Q = Short-Core-Path(K_u, a, b) in time (γ(n))^{O(q)}. Finally, we modify the path P by replacing vertex s_{K_u} with the path Q, and merging the endpoints a, b of Q with the copies of these vertices on path P. The resulting path, that we denote by P′, is a u-v path in graph G. We return this path as the response to the query. It is immediate to verify that the query time is O(|E(P)|·log n) + (γ(n))^{O(q)} = Ô(|P|).

We now argue that the response of the algorithm to the query is correct. Let P* be the shortest path between u and v in graph G. Let x be a vertex of P* that minimizes the index j* for which x ∈ Λ_{j*}; therefore, V(P*) ⊆ Λ_{≥j*}. We start with the following crucial observation.

Lemma 5.4 There is a far core K′ in some level Λ_{j′}, with 1 < j′ ≤ j*, such that dist_{Z_{j′}}(V(K′), x) ≤ R^c_{j′} − 2D.

Proof: Let x_1 = x. We gradually construct a path connecting x to a vertex in a far core K′, as follows.
First, using query To-Core-Path(x_1) of the LCD data structure, we can obtain a path of length at most α, connecting x_1 to a vertex a_1 lying in some core K_1, such that, if K_1 ∈ F_{j_1}, then j_1 ≤ j*. If K_1 is a far core, then we are done. Otherwise, there is a vertex b_1 in K_1 which is not a far vertex. By using a query Short-Core-Path(K_1, a_1, b_1) of the LCD data structure, we obtain a path of length at most α connecting a_1 to b_1 inside the core K_1. As b_1 is not a far vertex, there must be some vertex x_2 ∈ Λ_{<j_1} with dist_G(b_1, x_2) ≤ 2D, and we continue the same process from x_2. Since the layer index strictly decreases in each iteration, the process terminates after at most k iterations, with a far core K′ lying in some level j′ ≤ j*. A simple calculation, using the definition of the radii R^d_j, shows that the total length of the path that we have constructed is at most (2D + 2α)·k + 2αk ≤ R^d_{j′} − 4D = R^c_{j′} − 2D. We conclude that dist_{Z_{j′}}(V(K′), x) ≤ R^c_{j′} − 2D.

We assume w.l.o.g. that x is closer to u than to v, that is, dist_G(u,x) ≤ dist_G(v,x). Assume that P* has length at most 2D. As x lies on P* and V(P*) ⊆ Λ_{≥j*}, we get that dist_{Z_{j*}}(u,x) ≤ 2D. As Z_{j*} is a subgraph of Z_{j′}, we conclude that dist_{Z_{j′}}(u,x) ≤ dist_{Z_{j*}}(u,x) ≤ 2D. Using the triangle inequality together with Lemma 5.4, we get that dist_{Z_{j′}}(u, V(K′)) ≤ dist_{Z_{j′}}(u,x) + dist_{Z_{j′}}(x, V(K′)) ≤ R^c_{j′}. In other words, tree T_{K′} must cover u. Recall that we have let K_u be the core lying in the smallest level j_u, such that T_{K_u} covers u. Therefore, j_u ≤ j′, which implies that V(P*) ⊆ Λ_{≥j_u}. Therefore, path P* is contained in Z_{j_u}. Moreover, as R^d_{j_u} = R^c_{j_u} + 2D and |P*| ≤ 2D, vertex v must be contained in T_{K_u} as well. If this is not the case, then we can conclude that |P*| > 2D. The same argument applies if the index j_v of the layer Λ_{j_v} to which the core K_v belongs is smaller than j_u.

Let P be the unique u-v path in the tree T_{K_u}. Clearly, |P| ≤ dist_{T_{K_u}}(s_{K_u}, u) + dist_{T_{K_u}}(s_{K_u}, v) ≤ 2·R^d_{j_u} + 2 ≤ 3·2^k·D + (γ(n))^{O(k)}. If the root vertex s_{K_u} of the tree does not lie on the path P, then path P is a u-v path in graph G, whose length is bounded by 3·2^k·D + (γ(n))^{O(k)}; the algorithm then returns P.
Otherwise, the algorithm replaces the vertex s_{K_u} with the path Q returned by the query Short-Core-Path(K_u, a, b) to the LCD data structure, where a and b are the vertices of P appearing immediately before and after s_{K_u} on it. As |Q| ≤ α, the length of the returned path is bounded by 2·R^d_{j_u} + α ≤ 3·2^k·D + (γ(n))^{O(k)}.

Responding to dist-query_D(u,v). The algorithm for responding to dist-query_D(u,v) is similar. As before, we let K_u be the core from the smallest level j_u such that T_{K_u} covers u, and we let K_v be the core from the smallest level j_v such that T_{K_v} covers v. Assume w.l.o.g. that j_u ≤ j_v. If v ∉ T_{K_u}, then we report that dist_G(u,v) > D. Otherwise, we declare that dist_G(u,v) ≤ 3·2^k·D + (γ(n))^{O(k)}. The correctness of this algorithm follows immediately from the analysis of the algorithm for responding to path-query_D(u,v). The algorithm can be implemented to run in time O(1) if we store, together with every vertex v ∈ V(G), the list of the cores that cover v, sorted by the index j of the set F_j to which the core belongs. It is easy to see that the time that is required to maintain this data structure is subsumed by the total update time of the algorithm that was analyzed previously.

A Proofs Omitted from Section 2

A.1 Proof of Observation 2.3: Degree Pruning

It is immediate that the degree of every vertex in graph H[A] is at least d. We now prove that A is the unique maximal set with this property at any time. Assume for contradiction that, at some time, there is a subset A′ ⊆ V(H), such that every vertex in H[A′] has degree at least d, but A′ ⊄ A. Denote {v_1,...,v_r} = V(H)\A, where the vertices are indexed in the order in which they were removed from A. Then there must be some vertex v ∈ A′\A. Let v_i be such a vertex with the smallest index i. But then v_1,...,v_{i−1} ∉ A′, so v_i must have fewer than d neighbors in A′, a contradiction.
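The degree-pruning procedure referred to above can be sketched as follows, for a static graph (an illustrative version under our own names; the procedure used in the paper additionally supports edge deletions):

```python
from collections import deque

def degree_prune(adj, d):
    """Iteratively remove vertices of degree < d.  The surviving set A is
    the unique maximal vertex set inducing a subgraph of minimum degree d,
    exactly the property argued in the proof above."""
    deg = {v: len(nbrs) for v, nbrs in adj.items()}
    queue = deque(v for v in adj if deg[v] < d)
    removed = set()
    while queue:
        v = queue.popleft()
        if v in removed:
            continue
        removed.add(v)
        for u in adj[v]:
            if u not in removed:
                deg[u] -= 1
                if deg[u] < d:  # u just lost its last spare neighbor
                    queue.append(u)
    return set(adj) - removed
```

For a triangle with one pendant vertex attached, pruning with d = 2 removes only the pendant, while d = 3 empties the graph, matching the cascading-removal argument in the proof.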
B Proofs Omitted from Section 3

B.1 Proof of Observation 3.3: Bounding the Number of Edges Incident to Layers

Fix some index 1 ≤ j ≤ r. In order to define an (h_j·∆)-orientation of E_{≥j}, we first define an ordering ρ of the vertices of V(G). Consider the following experiment. We run Alg-Maintain-Pruned-Set(G, h_{j−1}) in order to maintain the vertex set A_{j−1}, as G undergoes edge deletions. For a vertex v ∈ V(G), we define its drop time to be the first time in the execution of this algorithm when v did not belong to set A_{j−1}; if no such time exists, then the drop time of v is infinite. Recall that, from Observation 2.4, if the drop time of v is finite and equal to t, then at time t, v had fewer than h_{j−1} = ∆·h_j neighbors in A_{j−1}. We let ρ be the ordering of the vertices of V(G) by their drop time, from smallest to largest, breaking ties arbitrarily. Notice that every edge in E_{≥j} must have an endpoint with a finite drop time. Consider now some edge e = (u,v) ∈ E_{≥j}. If u appears before v in the ordering ρ, then we assign the direction of the edge e to be from u to v; note that, from the definition of E_{≥j}, the drop time of u must be finite. Since, at the drop time of u, every vertex appearing after u in ρ still belonged to A_{j−1}, vertex u obtains fewer than ∆·h_j out-going edges. This gives an (h_j·∆)-orientation for E_{≥j}. It now follows immediately that |E_{≥j}| ≤ ∆·h_j·n.

Next, let S_j be the set of vertices that join the layer Λ_j at any time of the algorithm's execution. Observe that |S_j| ≤ n_{≤j} must hold, because virtual degrees may only decrease, and so |E_{≥j}(S_j)| ≤ n_{≤j}·h_j·∆. As the edges whose both endpoints are contained in Λ_j at any point of time must belong to E_{≥j}(S_j), the number of such edges is at most n_{≤j}·h_j·∆. We conclude that the number of edges e, such that, at any time during the algorithm's execution, both endpoints of e are contained in Λ_j, is at most n_{≤j}·h_j·∆.

B.2 Existence of Expanding Core Decomposition

The goal of this section is to prove the following theorem about the existence of a core decomposition in a high-degree graph.
We note that a theorem that is very similar in spirit (but different in the exact definitions and parameters) was shown in [CK19], and the proof that we provide uses similar ideas.

Theorem B.1 (Expanding Core Decomposition) Let H be an n-vertex simple graph with minimum degree at least h. There exists a collection F = {K_1, ..., K_t} of vertex-disjoint induced subgraphs, called expanding cores or just cores, where t = O((n log n)/h), such that:

• Each core K ∈ F is a ϕ-expander and deg_K(u) ≥ ϕh/3 for all u ∈ V(K), where ϕ = Ω(1/log n). Moreover, K has diameter O((log n)/ϕ) and is (ϕh/3)-edge-connected.

• For each vertex u ∉ ∪_{K∈F} V(K), there are at least 2h/3 edge-disjoint paths of length O(log n) from u to vertices in ∪_{K∈F} V(K).

Proof: We start with the following two propositions.

Proposition B.2 Let G = (V, E) be an n-vertex m-edge graph. Then there is a partition V_1, ..., V_k of V into disjoint sets, such that Σ_{i=1}^k δ(V_i) ≤ m/2, and for all 1 ≤ i ≤ k, G[V_i] is a strong ϕ-expander w.r.t. G, where ϕ = Ω(1/log n).

Proof: The well-known ϕ-expander decomposition (e.g. Observation 1.1 of [SW19]) says that, given any graph G = (V, E) with m edges (possibly with self-loops and multi-edges) and a parameter ϕ, there exists a partition V_1, ..., V_k of V such that Σ_{i=1}^k δ_G(V_i) ≤ O(ϕm log m), and each G[V_i] is a ϕ-expander. Let G′ be obtained from G by adding, for each vertex v, deg_G(v) self-loops at v. We claim that a ϕ-expander decomposition V′_1, ..., V′_k of G′, where ϕ = Ω(1/log m), is indeed the desired strong expander decomposition for G. This is because, for any set ∅ ≠ S ⊊ V′_i, we have vol_{G′[V′_i]}(S) ≥ vol_G(S) because of the self-loops, and δ_{G′[V′_i]}(S) = δ_{G[V′_i]}(S). So we have that

δ_{G[V′_i]}(S) / min{vol_G(S), vol_G(V′_i \ S)} ≥ δ_{G′[V′_i]}(S) / min{vol_{G′[V′_i]}(S), vol_{G′[V′_i]}(V′_i \ S)} ≥ ϕ.
That is, G[V′_i] is indeed a strong ϕ-expander with respect to G. Also, for each i, δ_G(V′_i) = δ_{G′}(V′_i). So we have Σ_{i=1}^k δ_G(V′_i) = Σ_{i=1}^k δ_{G′}(V′_i) ≤ O(ϕ·(2m)·log(2m)) ≤ m/2, as ϕ = Ω(1/log m) is chosen small enough.

Proposition B.3 Let H′ be an n-vertex graph with minimum degree h′. Then there is a collection F′ of vertex-disjoint induced subgraphs of H′ that we call cores, such that:

• Each core K ∈ F′ is a ϕ-expander and, for all u ∈ V(K), deg_K(u) ≥ ϕh′, where ϕ = Ω(1/log n); and

• Σ_{K∈F′} |E(K)| ≥ |E(H′)|/2.

Proof: We apply Proposition B.2 to graph H′ to obtain a partition (V_1, ..., V_k) of V(H′). We then let F′ contain all graphs H′[V_i] with |V_i| ≥ 2. Notice that, from Proposition B.2, each such graph H′[V_i] is a ϕ-expander. Moreover, from Observation 2.1, for all u ∈ V_i, deg_{H′[V_i]}(u) ≥ ϕh′. Lastly, observe that Σ_{K∈F′} |E(K)| = |E(H′)| − (Σ_{i=1}^k δ_{H′}(V_i))/2 ≥ |E(H′)|/2.

The algorithm. We start with F ← ∅, H′ ← H, and h′ ← h/3. Let A = Proc-Degree-Pruning(H′, h′). We set H′ ← H′[A], so that H′ has minimum degree at least h′. Then, we apply Proposition B.3 to H′ and obtain the collection F′ of cores. We set F ← F ∪ F′ and delete all vertices in ∪_{K∈F′} V(K) from H′. Then, we again set A = Proc-Degree-Pruning(H′, h′) and repeat this process until H′ = ∅. Let F be the final collection of cores that the algorithm computes. We now prove that it has all required properties.

The first guarantee. Proposition B.3 directly guarantees that each core K ∈ F is a ϕ-expander, and moreover, for all u ∈ V(K), deg_K(u) ≥ ϕh′ = ϕh/3. By the standard ball-growing argument, any ϕ-expander has diameter at most O(log(n)/ϕ) = O(log² n).
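The alternation between degree pruning and decomposition in the algorithm above is easy to sketch as a loop. In the Python sketch below (our own illustration), the ϕ-expander decomposition of Proposition B.3 is replaced by a trivial stand-in that returns connected components, so the sketch demonstrates only the control flow (vertex-disjoint cores, repeated pruning), not the expansion guarantees:

```python
from collections import deque

def prune(adj, vertices, d):
    """Repeatedly remove vertices with fewer than d neighbors inside `vertices`."""
    alive = set(vertices)
    deg = {v: len(adj[v] & alive) for v in alive}
    q = deque(v for v in alive if deg[v] < d)
    while q:
        v = q.popleft()
        if v not in alive:
            continue
        alive.discard(v)
        for u in adj[v]:
            if u in alive:
                deg[u] -= 1
                if deg[u] < d:
                    q.append(u)
    return alive

def components(adj, vertices):
    """Connected components of the subgraph induced by `vertices` (stand-in)."""
    comps, seen = [], set()
    for s in vertices:
        if s in seen:
            continue
        comp, stack = set(), [s]
        while stack:
            v = stack.pop()
            if v not in comp:
                comp.add(v)
                stack.extend(u for u in adj[v] if u in vertices and u not in comp)
        seen |= comp
        comps.append(comp)
    return comps

def core_decomposition(adj, d):
    """Alternate pruning with a decomposition step until nothing is left."""
    remaining, cores = set(adj), []
    while True:
        alive = prune(adj, remaining, d)
        if not alive:
            break
        new_cores = [c for c in components(adj, alive) if len(c) >= 2]
        cores.extend(new_cores)
        covered = set().union(*new_cores) if new_cores else set()
        remaining = alive - covered
    return cores
```

In the real algorithm the decomposition step returns expanders covering at least half the edges, and pruned vertices become the set U handled by the second guarantee.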
Next, to prove that K is (ϕh′)-edge-connected, it is enough to show that, for any vertex set S ⊆ V(K) with vol_K(S) ≤ vol(K)/2, δ_K(S) ≥ ϕh′ holds. Observe that, since K is a ϕ-expander, δ_K(S) ≥ ϕ·vol_K(S) ≥ ϕ²h′|S| must hold. At the same time, since the minimum degree in K is at least ϕh′ and K is a simple graph, δ_K(S) ≥ ϕh′|S| − 2·(|S| choose 2) must hold. We now consider two cases. First, if |S| ≥ 1/ϕ, then ϕ²h′|S| ≥ ϕh′. Otherwise, it can be verified that ϕh′|S| − 2·(|S| choose 2) ≥ ϕh′ for all 1 ≤ |S| < 1/ϕ. In either case, δ_K(S) ≥ ϕh′.

The second guarantee. We denote U = V(H) \ ∪_{K∈F} V(K). Note that v ∈ U only if, for some graph H′ that arose over the course of the algorithm, v ∉ A, where A = Proc-Degree-Pruning(H′, h′). We say that vertex v was removed when procedure Proc-Degree-Pruning was applied to that graph H′. By orienting the edges incident to v towards v whenever v is removed, we can orient all edges of H incident to the vertex set U so that in-deg_H(v) ≤ h′ for each v ∈ U. Let →H be the directed graph obtained from H by contracting all vertices of ∪_{K∈F} V(K) into a single vertex t, while keeping the orientation of the edges incident to U. Observe that V(→H) = U ∪ {t} and →H is a DAG with t as its unique sink. It is now enough to show that, for every vertex u ∈ U, there are 2h/3 edge-disjoint paths of length O(log n) in →H from u to t.

For any S ⊆ V(→H), let in-vol_{→H}(S) = Σ_{u∈S} in-deg_{→H}(u), out-vol_{→H}(S) = Σ_{u∈S} out-deg_{→H}(u), and vol_{→H}(S) = in-vol_{→H}(S) + out-vol_{→H}(S). Observe that, for v ∈ U, out-deg_{→H}(v) ≥ 2·in-deg_{→H}(v), because in-deg_{→H}(v) ≤ h′ = h/3 and deg_H(v) ≥ h. So, for any S ⊆ U, out-vol_{→H}(S) ≥ 2·in-vol_{→H}(S).

Fix a vertex u ∈ U. Let B_d = {v | dist_{→H}(u, v) ≤ d}.
Suppose that B_d ⊆ U. Then we have:

vol_{→H}(B_{d+1}) = vol_{→H}(B_d) + vol_{→H}(B_{d+1} \ B_d)
≥ vol_{→H}(B_d) + |E_{→H}(B_d, B_{d+1} \ B_d)|
≥ vol_{→H}(B_d) + out-vol_{→H}(B_d) − in-vol_{→H}(B_d)
≥ vol_{→H}(B_d) + vol_{→H}(B_d)/3 = (4/3)·vol_{→H}(B_d),

where the next-to-last inequality holds because every out-edge of B_d either leaves B_d (and then ends in B_{d+1} \ B_d) or is an in-edge of B_d, and the last inequality holds because out-vol_{→H}(B_d) ≥ 2·in-vol_{→H}(B_d). This proves that t ∈ B_{d*} for d* = log_{4/3}(n²) = O(log n): otherwise, vol_{→H}(B_{d*}) ≥ (4/3)^{d*} ≥ n², which is a contradiction. This implies that there is a directed u-t path P of length O(log n) in →H, but we want to show that there are many such edge-disjoint paths.

Observe that the argument above only exploits the fact that out-deg_{→H}(v) ≥ 2·in-deg_{→H}(v) for all v ∈ U. So even if we remove the edges of a u-t path P from →H, this inequality still holds for all v ∈ U \ {u}. As we can assume that in-deg_{→H}(u) = 0, because incoming edges of u play no role in finding u-t paths, we also have out-deg_{→H}(u) ≥ 2·in-deg_{→H}(u) = 0. Therefore, we can repeat the argument for as long as out-deg_{→H}(u) > 0; since out-deg_{→H}(u) ≥ h − h′ ≥ 2h/3, this yields 2h/3 edge-disjoint u-t paths in →H. So we conclude that, for each vertex u ∈ U = V(H) \ ∪_{K∈F} V(K), there are 2h/3 edge-disjoint paths of length O(log n) from u to vertices in ∪_{K∈F} V(K).

B.3 Proof of Theorem 3.6: Strong Expander Decomposition

We will use the recent almost-linear time deterministic algorithm for computing a (standard) expander decomposition by Chuzhoy et al. [CGL+19].

Theorem B.4 (Restatement of Corollary 7.7 from [CGL+19]) There is a deterministic algorithm that, given a graph G = (V, E) with m edges (possibly with self-loops and parallel edges), a parameter ϕ ∈ (0, 1), and a number r ≥ 1, computes a partition of V into disjoint subsets V_1, ..., V_k such that Σ_{i=1}^k δ_G(V_i) ≤ ϕm·(log m)^{O(r)}, and for all 1 ≤ i ≤ k, G[V_i] is a ϕ-expander. The running time of the algorithm is O(m^{1+O(1/r)+o(1)}·(log m)^{O(r)}).
We can now complete the proof of Theorem 3.6 using Theorem B.4. Given an input graph G = (V, E) for Theorem 3.6, we construct a graph G′ as follows. We start by setting G′ ← G and, for each vertex v ∈ V, we add deg_G(v) self-loops to it in G′. We then apply Theorem B.4 to graph G′, with parameters ϕ and r = ⌈log^{1/4} m⌉, to obtain a partition of V(G′) into disjoint subsets V_1, ..., V_k such that Σ_{i=1}^k δ_{G′}(V_i) ≤ (log |E(G′)|)^{O(r)}·ϕ·|E(G′)|, and for all 1 ≤ i ≤ k, G′[V_i] is a ϕ-expander.

First, observe that for each i, δ_G(V_i) = δ_{G′}(V_i), because G and G′ differ only by self-loops. So we have Σ_{i=1}^k δ_G(V_i) = Σ_{i=1}^k δ_{G′}(V_i) ≤ (log |E(G′)|)^{O(r)}·ϕ·|E(G′)| ≤ γ(m)·ϕ·m.

Second, observe that, for any set ∅ ≠ S ⊊ V_i, vol_{G′[V_i]}(S) ≥ vol_G(S) because of the self-loops in G′, and δ_{G′[V_i]}(S) = δ_{G[V_i]}(S). So we have that

δ_{G[V_i]}(S) / min{vol_G(S), vol_G(V_i \ S)} ≥ δ_{G′[V_i]}(S) / min{vol_{G′[V_i]}(S), vol_{G′[V_i]}(V_i \ S)} ≥ ϕ.

That is, for each i, G[V_i] is indeed a strong ϕ-expander with respect to G. Therefore, we can simply return the partition {V_1, ..., V_k} as the output for Theorem 3.6. The running time is O(m^{1+O(1/r)+o(1)}·(log m)^{O(r)}) = Ô(m) by Theorem B.4.

B.4 Proof of Theorem 3.8: Embedding Small Expanders

In this section we prove Theorem 3.8. The proof uses the cut-matching game, introduced by Khandekar, Rao, and Vazirani [KRV09] as part of their fast randomized algorithm for the Sparsest Cut and Balanced Cut problems. Chuzhoy et al. [CGL+19] provided an efficient deterministic implementation of this game (albeit with weaker parameters), based on a variation of the game due to Khandekar et al. [KKOV07].
We start by describing the variant of the cut-matching game that we use, which is based on the results of [CGL+19].

B.4.1 Deterministic Cut-Matching Game

The cut-matching game is a game that is played between two players, called the cut player and the matching player. The game starts with a graph W whose vertex set V has cardinality n, and E(W) = ∅. The game is played in rounds; in each round i, the cut player chooses a partition (A_i, B_i) of V with |A_i| ≤ |B_i|. The matching player then chooses an arbitrary matching M_i that matches every vertex of A_i to some vertex of B_i. The edges of M_i are then added to W, completing the current round. Intuitively, the game terminates once graph W becomes a ψ-expander, for some given parameter ψ. It is convenient to think of the cut player's goal as minimizing the number of rounds, and of the matching player's goal as making the number of rounds as large as possible. We prove the following theorem, which easily follows from [CGL+19].

Theorem B.5 (Deterministic Algorithm for the Cut Player) There is a deterministic algorithm that, for every round i ≥ 1, given the graph W that serves as input to the i-th round of the cut-matching game, produces, in time O(nγ(n)), a partition (A_i, B_i) of V with |A_i| ≤ |B_i|, such that, no matter how the matching player plays, after R = O(log n) rounds, the resulting graph W is a (1/γ(n))-expander.

Proof: For the sake of the proof, it is more convenient to work with the notion of sparsity instead of conductance.

Definition (Sparsity). The sparsity Ψ(G) of a graph G = (V, E) is the minimum, over all vertex sets S ⊆ V with 1 ≤ |S| ≤ |V \ S|, of δ(S)/|S|.

From the definition, it is immediate to see that, if a graph G has maximum degree d, then Φ(G) ≤ Ψ(G) ≤ d·Φ(G). In particular, if Ψ(G) ≥ ϕ′ for any parameter ϕ′, then G is a (ϕ′/d)-expander. Clearly, for any subgraph H ⊆ G with V(H) = V(G), Ψ(H) ≤ Ψ(G) must hold.
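The relation Φ(G) ≤ Ψ(G) ≤ d·Φ(G) can be verified exhaustively on a small graph. The brute-force sketch below (our own illustration) computes both quantities for the 6-cycle, where the maximum degree is d = 2 and the two notions differ by exactly that factor:

```python
from itertools import combinations

def cut_size(edges, S):
    # number of edges with exactly one endpoint in S
    return sum(1 for u, v in edges if (u in S) != (v in S))

def sparsity(n, edges):
    # min over 1 <= |S| <= n/2 of delta(S)/|S|
    best = float("inf")
    for k in range(1, n // 2 + 1):
        for S in combinations(range(n), k):
            best = min(best, cut_size(edges, set(S)) / k)
    return best

def conductance(n, edges):
    # min over S of delta(S) / min(vol(S), vol(V \ S))
    deg = {v: 0 for v in range(n)}
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    total = sum(deg.values())
    best = float("inf")
    for k in range(1, n):
        for S in combinations(range(n), k):
            vol_s = sum(deg[v] for v in S)
            denom = min(vol_s, total - vol_s)
            if denom > 0:
                best = min(best, cut_size(edges, set(S)) / denom)
    return best

edges = [(i, (i + 1) % 6) for i in range(6)]  # the 6-cycle, d = 2
psi, phi = sparsity(6, edges), conductance(6, edges)
assert abs(psi - 2 / 3) < 1e-9   # best S: three consecutive vertices
assert abs(phi - 1 / 3) < 1e-9
assert phi <= psi <= 2 * phi + 1e-9  # the claimed sandwich with d = 2
```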
We need the following observation:

Observation B.6 (Observation 2.3 of [CGL+19]) Let G = (V, E) be an n-vertex graph with Ψ(G) ≥ ψ, and let G′ be another graph obtained from G by adding to it a new set V′ of at most n vertices, and a matching M connecting every vertex of V′ to a distinct vertex of G. Then Ψ(G′) = Ω(ψ).

In order to implement the algorithm of the cut player, we will employ the following algorithm of [CGL+19].

Theorem B.7 (Theorem 1.6 of [CGL+19]) There is a deterministic algorithm, called CutOrCertify, that, given an n-vertex graph G = (V, E) with maximum vertex degree O(log n), and a parameter r ≥ 1, returns one of the following:

• either a cut (A, B) in G with n/4 ≤ |A| ≤ |B| and |E_G(A, B)| ≤ n/100; or

• a subset S ⊆ V of at least n/2 vertices, such that Ψ(G[S]) ≥ 1/log^{O(r)} n.

The running time of the algorithm is O(n^{1+O(1/r)}·(log n)^{O(r)}).

The following lemma (first proved by [KKOV07]) shows that the cut player cannot keep returning a balanced sparse cut for more than O(log n) iterations of the cut-matching game.

Lemma B.8 (Restatement of Theorem 2.5 of [CGL+19]) There is a constant c for which the following holds. Consider the cut-matching game where in each iteration 1 ≤ i ≤ c log n, we use Algorithm CutOrCertify in order to implement the cut player; specifically, if the algorithm returns a partition (A_i, B_i) of V with |E_W(A_i, B_i)| ≤ n/100 and n/4 ≤ |A_i| ≤ |B_i|, then we let (A′_i, B′_i) be any partition of V with A′_i ⊆ A_i and |A′_i| = |B′_i|, and we use the partition (A′_i, B′_i) as the response of the cut player in round i. Otherwise, we terminate the cut-matching game. Then, no matter how the matching player plays in each iteration, the game terminates before reaching iteration ⌊c log n⌋.

We are now ready to complete the proof of Theorem B.5.
In each iteration 1 ≤ i ≤ O(log n) of the cut-matching game, we apply Algorithm CutOrCertify to graph W_{i−1} (where W_0 is the initial graph with E(W_0) = ∅), with the parameter r = O(log^{1/4} n). Since the edge set E(W_{i−1}) is the union of i−1 matchings, W_{i−1} has maximum degree at most i−1. We will ensure that the number of rounds is bounded by O(log n), so graph W_{i−1} is a valid input for CutOrCertify.

If CutOrCertify returns a cut (A_i, B_i) with |E_{W_{i−1}}(A_i, B_i)| ≤ n/100 and n/4 ≤ |A_i| ≤ |B_i|, then we output an arbitrary partition (A′_i, B′_i) of V with A′_i ⊆ A_i and |A′_i| = |B′_i|. By Lemma B.8, this can happen for at most O(log n) iterations, regardless of the responses of the matching player. Otherwise, if CutOrCertify returns a subset S ⊆ V of at least n/2 vertices with Ψ(W_{i−1}[S]) ≥ 1/log^{O(r)} n, we output the partition (A_i, B_i), where A_i = V \ S and B_i = S. Let M_i be the matching returned by the matching player, that matches every vertex of V \ S to a distinct vertex of S. By Observation B.6, we are then guaranteed that the graph W_i = W_{i−1} ∪ M_i satisfies Ψ(W_i) ≥ 1/log^{O(r)} n = 1/(log n)^{O(log^{1/4} n)} ≥ 1/γ(n) (recall that γ(n) = exp(log^{3/4} n)). Since the maximum vertex degree in W_i is at most O(log n), we conclude that W_i is a (1/γ(n))-expander.

The running time of the algorithm is dominated by Algorithm CutOrCertify, whose running time is O(n^{1+O(1/r)}·(log n)^{O(r)}) = O(nγ(n)). This completes the proof of Theorem B.5.

In order to complete the proof of Theorem 3.8, we next provide an efficient deterministic algorithm for the matching player.
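Before turning to the matching player, it is worth seeing the overall loop of the cut-matching game in code. In the sketch below (our own illustration), the CutOrCertify-based cut player is replaced by a seeded-random balanced partition, and the matching player pairs the two sides arbitrarily; the sketch therefore demonstrates only the bookkeeping (W is a union of matchings, so its maximum degree is bounded by the number of rounds), not the expansion guarantee:

```python
import random

def cut_matching_game(n, rounds, seed=0):
    """Play `rounds` rounds of the game; return W as a list of matchings.

    Stand-in players: the cut player picks a random balanced partition,
    the matching player pairs the two sides in order. Illustration only.
    """
    assert n % 2 == 0
    rng = random.Random(seed)
    W = []  # each matching is a list of (a, b) pairs added to W in that round
    for _ in range(rounds):
        verts = list(range(n))
        rng.shuffle(verts)
        A, B = verts[: n // 2], verts[n // 2 :]  # stand-in cut player
        M = list(zip(A, B))                      # stand-in matching player
        W.append(M)
    return W

W = cut_matching_game(16, 10)
deg = {v: 0 for v in range(16)}
for M in W:
    for a, b in M:
        deg[a] += 1
        deg[b] += 1
# W is a union of 10 perfect matchings, so every degree equals the round count
assert all(d == 10 for d in deg.values())
```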
The idea (which is quite standard) is that, in addition to producing the required matching M_i in each round of the game, the matching player will also embed the edges of M_i into the graph G, where the embedding paths have relatively short length and cause relatively small congestion.

B.4.2 Implementing the Matching Player

There are well-known efficient algorithms that, given a ϕ-expander G = (V, E) and any two vertex subsets A, B ⊆ V, compute a large collection of paths between vertices of A and vertices of B that cause congestion ˜O(1/poly(ϕ)), such that every path has length ˜O(1/poly(ϕ)). We will use such an algorithm in order to implement the matching player. The algorithm is summarized in the following theorem, which uses the approach of [CK19, CGL+19].

Theorem B.9 There is a deterministic algorithm, that we call TerminalMatching(G, A, B, ϕ), that receives as input a parameter ϕ ∈ (0, 1), a ϕ-expander G = (V, E) with m edges, and two subsets A, B ⊆ V of vertices of G, called terminals, with |A| ≤ |B|. The algorithm returns a matching M between vertices of A and vertices of B of cardinality |A|, and a set P of paths of length at most O(log n/ϕ) each, embedding the matching M into G with edge-congestion O(log² n/ϕ²). The running time of the algorithm is ˜O(m/ϕ³).

The remainder of this subsection is dedicated to proving Theorem B.9. The proof is almost identical to that in [CGL+19].

Lemma B.10 There is a deterministic algorithm that, given an m-edge graph G = (V, E), two disjoint subsets A′, B′ of its vertices with |A′| ≤ |B′|, and an integer ℓ ≥ 32 log m, computes one of the following:

• either a matching M′ between vertices of A′ and vertices of B′ with |M′| ≥ 8|A′| log m/ℓ², together with a collection P′ of paths of length at most ℓ each that embed M′ into G with edge-congestion 1; or

• a cut (X, Y) in G with Φ_G(X, Y) ≤ 24 log m/ℓ.

The running time of the algorithm is ˜O(ℓ·|E(G)|).
Proof: We can assume w.l.o.g. that the graph G is connected, as otherwise we can compute a cut (X, Y) with conductance zero in time O(m). Next, we create an auxiliary graph G_{st} as follows. We start with graph G, then add a source vertex s that connects with an edge to every vertex of A′, and a sink vertex t that connects with an edge to every vertex of B′. We then initialize an ES-Tree data structure on graph G_{st}, with source vertex s and distance threshold ℓ + 2. We denote by T the shortest-path tree rooted at s that the data structure maintains. We also initialize P′ = ∅ and M′ = ∅.

The algorithm performs iterations, as long as dist_{G_{st}}(s, t) ≤ ℓ + 2 and |P′| < 8|A′| log m/ℓ² hold (here |A′| denotes the original cardinality of A′). In order to execute an iteration, let P_{st} be the shortest s-t path in T. Observe that the path P = P_{st} \ {s, t} is a simple path in graph G, of length at most ℓ, connecting some vertex a′ ∈ A′ to some vertex b′ ∈ B′. We delete the edges of P_{st} from G_{st} and update the ES-Tree data structure accordingly. Also, we add the path P to set P′, set A′ ← A′ \ {a′} and B′ ← B′ \ {b′}, add (a′, b′) to M′, and continue to the next iteration.

Notice that the total running time of the algorithm so far is ˜O(mℓ), by the guarantees of the ES-Tree data structure.

We now consider two cases. First, if, when the above algorithm terminates, |P′| = 8|A′| log m/ℓ² holds, then we return the matching M′ and the path set P′. Clearly, |M′| ≥ 8|A′| log m/ℓ² holds, the paths in P′ are edge-disjoint (so they cause edge-congestion 1), and the length of every path is at most ℓ.

Therefore, we assume from now on that, when the above algorithm terminates, |P′| < 8|A′| log m/ℓ² holds, and so, from the algorithm description, dist_{G_{st}}(s, t) > ℓ + 2. In particular, in the current graph G_{st} \ {s, t}, dist(A′, B′) > ℓ now holds, where dist(A′, B′) = min_{a′∈A′, b′∈B′} dist(a′, b′).
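The path-peeling loop in the proof above is easy to prototype. The sketch below (our own illustration) recomputes a BFS tree from scratch in every iteration instead of maintaining an ES-Tree, which is slower but behaviorally identical: it repeatedly extracts a short A′-B′ path, deletes its edges, and retires the matched endpoints, so the returned paths are edge-disjoint:

```python
from collections import deque

def peel_short_paths(adj, A, B, ell, max_paths):
    """Repeatedly extract edge-disjoint A-B paths of length <= ell.

    adj: dict vertex -> set of neighbors (edges are deleted as paths are found).
    Returns a list of paths, each a list of vertices from some a in A to b in B.
    """
    adj = {v: set(ns) for v, ns in adj.items()}  # local, mutable copy
    A, B = set(A), set(B)
    paths = []
    while len(paths) < max_paths and A and B:
        # BFS from the whole set A (equivalent to a super-source s)
        parent = {a: a for a in A}
        order = deque((a, 0) for a in A)
        hit = None
        while order and hit is None:
            v, d = order.popleft()
            if d >= ell:
                break
            for u in adj[v]:
                if u not in parent:
                    parent[u] = v
                    if u in B:
                        hit = u
                        break
                    order.append((u, d + 1))
        if hit is None:
            break
        path = [hit]                       # reconstruct the path back to A
        while parent[path[-1]] != path[-1]:
            path.append(parent[path[-1]])
        path.reverse()
        for x, y in zip(path, path[1:]):   # delete the path's edges
            adj[x].discard(y)
            adj[y].discard(x)
        A.discard(path[0])
        B.discard(path[-1])
        paths.append(path)
    return paths
```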
We use the following standard claim:

Claim B.11 There is a deterministic algorithm that, given an m-edge graph H with two sets S, T of its vertices, such that dist_H(S, T) > ℓ for some parameter ℓ, computes a vertex set Z with Φ_H(Z) < 8 log m/ℓ and vol_H(Z) ≤ vol(H)/2, such that either S ⊆ Z or T ⊆ Z holds. The running time of the algorithm is O(m).

Proof: For any vertex set X ⊆ V(H) and a parameter d, let Ball_H(X, d) = {u | dist_H(X, u) ≤ d}. Note that we are guaranteed that Ball_H(S, ℓ/3) ∩ Ball_H(T, ℓ/3) = ∅. Therefore, we can assume w.l.o.g. that vol(Ball_H(S, ℓ/3)) ≤ vol(H)/2. We claim that there must be an index 0 ≤ i ≤ ℓ/3 with δ(Ball_H(S, i)) ≤ (8 log m/ℓ)·vol(Ball_H(S, i)). Indeed, assume otherwise. Then vol(Ball_H(S, ℓ/3)) ≥ (1 + 8 log m/ℓ)^{ℓ/3} > vol(H)/2, a contradiction. We can compute such an index i, and the vertex set Z = Ball_H(S, i), by performing breadth-first search from the vertices of S and the vertices of T, in time O(m). It is now easy to verify that Φ_H(Z) < 8 log m/ℓ, vol_H(Z) ≤ vol(H)/2, and either S ⊆ Z or T ⊆ Z holds.

We are now ready to complete the proof of Lemma B.10. Let H = G_{st} \ {s, t}. We invoke Claim B.11 on graph H, with the sets A′ and B′ of vertices, and obtain a cut Z. We claim that Φ_G(Z) < 24 log m/ℓ. Indeed, let E_1 denote the set of all edges lying on the paths in P′, and let E_2 denote the set of all edges in δ_G(Z) \ E_1. From the guarantees of Claim B.11, |E_2| ≤ (8 log m/ℓ)·vol_H(Z) ≤ (8 log m/ℓ)·vol_G(Z). Let k denote the cardinality of the set A′ at the beginning of the algorithm. Since |P′| < 8k log m/ℓ² < k/2 (as we have assumed that ℓ ≥ 32 log m), we get that |A′| ≥ k/2 at the end of the algorithm, and in particular, vol_G(Z) ≥ |A′| ≥ k/2. In order to show that Φ_G(Z) < 24 log m/ℓ, it is now enough to show that |E_1| ≤ (16 log m/ℓ)·vol_G(Z). Indeed, recall that the length of every path in P′ is bounded by ℓ, and |P′| < 8k log m/ℓ². Therefore, |E_1| ≤ ℓ·|P′| < 8k log m/ℓ. Since we have established that vol_G(Z) ≥ k/2, we get that |E_1| < (16 log m/ℓ)·vol_G(Z).
We conclude that δ_G(Z) ≤ (24 log m/ℓ)·vol_G(Z), as required. As the total running time of the algorithm is bounded by ˜O(mℓ), this concludes the proof of Lemma B.10.

We obtain the following corollary of Lemma B.10.

Corollary B.12 There is a deterministic algorithm that, given an m-edge graph G = (V, E), two disjoint subsets A, B of its vertices with |A| ≤ |B|, and a parameter ℓ ≥ 32 log m, computes one of the following:

• either a matching M between vertices of A and vertices of B with |M| = |A|, and a collection P of paths of length at most ℓ each, that embeds M into G with congestion at most ℓ²; or

• a cut (X, Y) in G with Φ_G(X, Y) ≤ 24 log m/ℓ.

The running time of the algorithm is ˜O(ℓ³m).

Proof: We start with M = ∅ and P = ∅, and then iterate. In every iteration, we let A′ ⊆ A and B′ ⊆ B be the subsets of vertices that do not yet participate in the matching M; since |A| ≤ |B|, we are guaranteed that |A′| ≤ |B′|. If A′ = ∅, then we terminate the algorithm, and return the current matching M and its embedding P. Otherwise, we apply the algorithm from Lemma B.10 to graph G and vertex sets A′, B′. If the outcome is a cut (X, Y) in G with Φ_G(X, Y) ≤ 24 log m/ℓ, then we terminate the algorithm and return the cut (X, Y). Therefore, we assume from now on that, whenever Lemma B.10 is called, it returns a matching M′ between A′ and B′ with |M′| ≥ 8|A′| log m/ℓ², and its corresponding embedding P′ with congestion 1 and length at most ℓ. We then add the paths of P′ to P, add the matching M′ to M, and continue to the next iteration. As |M′| ≥ 8|A′| log m/ℓ² in every iteration, after at most ℓ² iterations we must have A′ = ∅, and the algorithm terminates. Notice that the congestion of the final path set P is bounded by the number of iterations, ℓ².
Moreover, since the running time of each iteration is ˜O(ℓm), the total running time of the algorithm is ˜O(ℓ³m).

We are now ready to complete the proof of Theorem B.9. We set ℓ = 32 log m/ϕ, and then run the algorithm from Corollary B.12 on graph G, with the vertex sets A and B. As graph G is a ϕ-expander, the algorithm from Corollary B.12 may never return a cut (X, Y) with Φ_G(X, Y) ≤ 24 log m/ℓ < ϕ. Therefore, the algorithm must return a matching M between the vertices of A and the vertices of B of cardinality |A|, together with its embedding P, whose congestion is ℓ² = O(log² m/ϕ²), such that the length of every path in P is bounded by ℓ = O(log m/ϕ). The total running time of the algorithm is ˜O(ℓ³m) = ˜O(m/ϕ³).

B.4.3 Completing the Proof of Theorem 3.8

We are now ready to complete the proof of Theorem 3.8. We run the cut-matching game on a graph W whose vertex set is the set T of terminals, and whose edge set is initially empty. In every round i of the game, we use the algorithm from Theorem B.5 to compute a partition (A_i, B_i) of T with |A_i| ≤ |B_i|, which we treat as the move of the cut player. Then we apply Algorithm TerminalMatching from Theorem B.9 to graph G and the sets A_i and B_i of vertices. We denote by M_i the matching returned by the algorithm, and by P_i its embedding. We then add the edges of M_i to graph W and continue to the next iteration. From Theorem B.5, after O(log |T|) iterations, graph W is guaranteed to be a (1/γ(|T|))-expander. Since the edge set E(W) is a union of O(log |T|) matchings, every vertex of W has degree at most O(log |T|). We also compute an embedding P = ∪_i P_i of W into G, where every path in P has length O(log(n)/ϕ). Moreover, since each path set P_i causes edge-congestion at most O(log²(n)/ϕ²), the congestion of the embedding P is at most O(log³(n)/ϕ²). Lastly, it remains to bound the running time of the algorithm.
Recall that the algorithm consists of O(log n) rounds. In every round we apply the algorithm from Theorem B.5, whose running time is O(nγ(n)), and Algorithm TerminalMatching from Theorem B.9, whose running time is ˜O(m/ϕ³). Therefore, the total running time of the algorithm is bounded by ˜O(m/ϕ³ + |T|γ(|T|)) = ˜O(mγ(|T|)/ϕ³). This concludes the proof of Theorem 3.8.

C Application: Maximum Bounded-Cost Flow

In this section, we provide an algorithm for the Maximum Bounded-Cost Flow problem, as the main application of our algorithm for decremental SSSP from Theorem 1.1. The technique is a standard application of the multiplicative weight update framework [GK98, Fle00, AHK12]. We provide the proofs for completeness. In [CK19], the same technique was used to provide algorithms for Maximum s-t Flow in vertex-capacitated graphs. We note that the Maximum Bounded-Cost Flow problem is somewhat more general, and it is a useful subroutine for a large number of applications; we discuss some of these applications in Appendix D. We start with some basic definitions.

Definitions. Given a directed graph G = (V, E) and a pair s, t ∈ V of its vertices, an s-t flow is a function f ∈ ℝ^E_{≥0} such that, for every vertex v ∈ V \ {s, t}, the amount of flow entering v equals the amount of flow leaving v, that is, Σ_{(u,v)∈E} f(u, v) = Σ_{(v,u)∈E} f(v, u). Let f(v) = Σ_{(u,v)∈E} f(u, v) be the amount of flow at v. The value of the flow f is Σ_{(s,v)∈E} f(s, v) − Σ_{(v,s)∈E} f(v, s). Assume further that we are given capacities c ∈ (ℝ_{>0} ∪ {∞})^E and costs b ∈ ℝ^E_{≥0} on the edges. A flow f is edge-capacity-feasible if f(e) ≤ c(e) for all e ∈ E. The cost of the flow f is Σ_{e∈E} b(e)f(e). If we are given a cost bound b ≥ 0, then we say that f is edge-cost-feasible iff Σ_{e∈E} b(e)f(e) ≤ b. We can define capacities and costs on the vertices of G similarly.
Let c ∈ (ℝ_{>0} ∪ {∞})^V be vertex capacities and let b ∈ ℝ^V_{≥0} be vertex costs. As before, we say that f is vertex-capacity-feasible if f(v) ≤ c(v) for all v ∈ V \ {s, t}, and it is vertex-cost-feasible if Σ_{v∈V\{s,t}} b(v)f(v) ≤ b. We may write simply capacity-feasible and cost-feasible when clear from context. If G is undirected, one way to define an s-t flow is by treating G as a directed graph, where we replace each undirected edge {u, v} with a pair (u, v), (v, u) of bi-directed edges. We will assume w.l.o.g. that the flow only traverses each edge in one direction, that is, for each edge {u, v} ∈ E, either f(u, v) = 0 or f(v, u) = 0.

Next, we define the Maximum Bounded-Cost Flow problem (MBCF). In the edge-capacitated version, we are given a graph G = (V, E) with edge capacities c ∈ (ℝ_{>0} ∪ {∞})^E and edge costs b ∈ ℝ^E_{≥0}, together with two special vertices s and t, and a cost bound b. The goal is to compute an s-t flow f of maximum value, such that f is both capacity-feasible and cost-feasible. The vertex-capacitated version is defined similarly, except that we are given vertex capacities c ∈ (ℝ_{>0} ∪ {∞})^V and vertex costs b ∈ ℝ^V_{≥0} instead. A (1 + ǫ)-approximate solution for this problem is a flow f which is both capacity-feasible and cost-feasible, such that the value of the flow is at least OPT(b)/(1 + ǫ), where OPT(b) is the maximum value of a capacity-feasible flow of cost at most b.

Connection to the Min-Cost Flow Problem. The classical Min-Cost Flow problem is defined exactly like MBCF, except that, instead of the cost bound, we are given a target flow value τ. The goal is to either (i) compute an s-t flow f of value at least τ, such that f is capacity-feasible and has the smallest cost among all flows satisfying these requirements, or (ii) certify that there is no capacity-feasible flow of value at least τ. Let OPT_cost(τ) be the minimum cost of any capacity-feasible flow of value at least τ.
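The feasibility notions above are purely local, and are straightforward to check given a candidate flow. A small sketch (our own illustration) that verifies conservation, capacity-feasibility, and cost-feasibility of a directed flow, and returns its value:

```python
def check_flow(edges, f, s, t, cap, cost, cost_bound):
    """Verify that f is an s-t flow that is capacity- and cost-feasible.

    edges: list of directed edges (u, v); f, cap, cost: dicts keyed by edge.
    Returns the flow value; raises AssertionError if any check fails.
    """
    net = {}
    for e in edges:
        u, v = e
        assert 0 <= f[e] <= cap[e], "capacity violated on %s" % (e,)
        net[u] = net.get(u, 0.0) - f[e]
        net[v] = net.get(v, 0.0) + f[e]
    for v, bal in net.items():
        if v not in (s, t):
            assert abs(bal) < 1e-9, "conservation violated at %s" % v
    assert sum(cost[e] * f[e] for e in edges) <= cost_bound + 1e-9
    return -net.get(s, 0.0)  # value = net flow out of s

edges = [("s", "a"), ("a", "t"), ("s", "b"), ("b", "t")]
f = {e: 1.0 for e in edges}
value = check_flow(edges, f, "s", "t",
                   {e: 1.0 for e in edges}, {e: 1.0 for e in edges},
                   cost_bound=4.0)
assert value == 2.0
```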
Observe that an exact algorithm for MBCF implies an exact algorithm for the Min-Cost Flow problem, and vice versa, via binary search. Moreover, a (1 + ǫ)-approximation algorithm for MBCF gives a (1 + ǫ)-factor pseudo-approximation for the Min-Cost Flow problem; that is, we either find a flow of cost at most OPT_cost(τ) and value at least τ/(1 + ǫ), or certify that there is no capacity-feasible flow of value at least τ. Note that, if we insist that the value of the flow that we obtain in the Min-Cost Flow problem is at least τ, then the problem is at least as difficult as exact maximum s-t flow. From now on we focus on the MBCF problem.

Our results. We show approximation algorithms for the MBCF problem in undirected graphs in both the edge-capacitated and the vertex-capacitated settings, though in the former scenario we only consider unit edge-capacities.

Theorem C.1 (Unit edge-capacities) There is a deterministic algorithm that, given a simple undirected n-vertex m-edge graph G = (V, E) with unit edge capacities c(e) = 1 and edge costs b(e) > 0 for e ∈ E, together with a source s, a sink t, a cost bound b, and an accuracy parameter 0 < ǫ < 1, computes a (1 + ǫ)-approximate solution for MBCF in time Ô(n²·log(B)/ǫ^{O(1)}), where B is the ratio of largest to smallest edge cost.

Previously, Cohen et al. [CMSV17] gave an exact algorithm for MBCF with running time ˜O(m^{10/7}·log B) when the input graph G has unit edge-capacities as well, but G can be directed and is not necessarily simple. Lee and Sidford [LS14] showed an exact algorithm with running time ˜O(m√n·log^{O(1)} B) on directed graphs with general edge-capacities. To the best of our knowledge, no faster algorithms for MBCF are currently known, even when approximation is allowed. Our algorithm provides a (1 + ǫ)-approximate solution, and only works in simple, undirected graphs with unit edge-capacities. It is faster than these previously known algorithms when m = ω(n^{1.4+o(1)}), and it implies a number of applications, as shown in Appendix D.

We also show a deterministic algorithm for graphs with (arbitrary) vertex capacities and costs.

Theorem C.2 (Vertex capacities) There is a deterministic algorithm that, given an undirected n-vertex graph G = (V, E) with vertex capacities c(v) > 0 and vertex costs b(v) > 0 for all v ∈ V, a source s, a sink t, a cost bound b, and an accuracy parameter 0 < ǫ < 1, computes a (1 + ǫ)-approximate MBCF in time Ô(n²·log(BC)/ǫ^{O(1)}), where B is the ratio of largest to smallest vertex cost, and C is the ratio of largest to smallest vertex capacity.

We note that a randomized algorithm with similar guarantees can be obtained from the algorithm of Chuzhoy and Khanna [CK19] for SSSP, though this was not explicitly noted in their paper (they only explicitly provide an algorithm for approximate max flow). We obtain a deterministic algorithm by using our deterministic algorithm for SSSP instead of the randomized algorithm of [CK19]. The best previous algorithm for vertex-capacitated MBCF, with running time ˜O(m√n·log^{O(1)}(BC)), follows from the work of Lee and Sidford [LS14]; that algorithm solves the problem exactly. Our algorithm has a faster running time when m = ω(n^{1.5+o(1)}).

The remainder of this section is dedicated to proving Theorem C.1 and Theorem C.2. We start by describing, in Appendix C.1, an algorithm for MBCF in general edge-capacitated graphs, based on the multiplicative weight update (MWU) framework, and we bound the number of "augmentations" in the algorithm.
Then, in Appendix C.2, we show how to perform the "augmentations" efficiently when the input graph is as in Theorem C.1 and Theorem C.2, using our algorithm for decremental SSSP. We will use the following observation:

Remark C.3 It is easy to extend Theorem 1.1 so that the algorithm can handle edges of length $0$, if we have a promise that in every query dist-query$(s,v)$ or path-query$(s,v)$, the distance from the source $s$ to the query vertex $v$ is non-zero. To do this, let $\ell_{\min}$ and $\ell_{\max}$ be the minimum and the maximum non-zero edge lengths in the graph, respectively. For each edge of length $0$, we set the length to be $\epsilon\ell_{\min}/n$ instead. This will not increase the length of any non-zero length path by more than a factor $(1+\epsilon)$. Let $L'=\ell_{\max}/\ell_{\min}$ be the ratio between the original largest to smallest non-zero length. We can now use Theorem 1.1 with the new bound $L=L'n/\epsilon$.

C.1 A Multiplicative Weight Update-Based Flow Algorithm

We describe an algorithm for computing approximate MBCF in edge-capacitated graphs in Algorithm 4. The algorithm is based on the multiplicative weight update (MWU) framework, and it is a straightforward adaptation of the algorithms of Garg and Könemann [GK98], Fleischer [Fle00], and Madry [Mad10].

Algorithm 4 is stated for both undirected and directed graphs. Let $G=(V,E)$ be the input graph and let $\mathcal{P}_{s,t}$ be the set of all $s$-$t$ paths. If $G$ is (un)directed, then $\mathcal{P}_{s,t}$ contains all (un)directed $s$-$t$ paths. We always augment a flow along some $s$-$t$ path $P\in\mathcal{P}_{s,t}$. Let $f(P)$ denote the amount of flow through path $P$. We use the shorthand $f(P)\leftarrow f(P)+c$ to indicate that we increase the flow value $f(e)$ for all $e\in E(P)$ by $c$, that is, $f(e)\leftarrow f(e)+c$. The algorithm maintains lengths $\ell\in\mathbb{R}^{E}_{\ge 0}$ on edges and a parameter $\varphi\ge 0$.
For any path $P$, let $\ell(P)=\sum_{e\in P}\ell(e)$ be the length of $P$, and similarly let $b(P)=\sum_{e\in P}b(e)$ be the cost of $P$. In general, for any function $d\in\mathbb{R}^E_{\ge 0}$, we let $d(P)=\sum_{e\in P}d(e)$. We use the shorthand $d=\ell+\varphi b$ to indicate the edge-length function $d(e)=\ell(e)+\varphi b(e)$ for all $e\in E$. A $d$-shortest $s$-$t$ path is an $s$-$t$ path $P^*$ that minimizes $d(P^*)$ among all paths in $\mathcal{P}_{s,t}$. An $\alpha$-approximate $d$-shortest path is a path $\tilde{P}\in\mathcal{P}_{s,t}$ with $d(\tilde{P})\le\alpha\cdot d(P^*)$.

Algorithm 4: An approximation algorithm for maximum bounded-cost $s$-$t$ flow in edge-capacitated graphs.

Input: An undirected or a directed graph $G=(V,E)$ with edge capacities $c\in(\mathbb{R}_{>0}\cup\{\infty\})^E$ and edge costs $b\in\mathbb{R}^E_{\ge 0}$, a source $s$, a sink $t$, a cost bound $b$, and an accuracy parameter $0<\epsilon<1$.
Output: An $s$-$t$ flow which is capacity-feasible and cost-feasible.

1. Set $\delta=(2m)^{-1/\epsilon}$.
2. Set $\ell(e)=\delta/c(e)$ if $c(e)$ is finite; otherwise $\ell(e)=0$. Set $\varphi=\delta/b$. Set $f\equiv 0$.
3. While $\sum_{e\in E}c(e)\ell(e)+b\varphi<1$ do:
(a) $P\leftarrow$ a $(1+\epsilon)$-approximate $(\ell+\varphi b)$-shortest $s$-$t$ path;
(b) $c\leftarrow\min\{\min_{e\in P}c(e),\ b/b(P)\}$;
(c) $f(P)\leftarrow f(P)+c$;
(d) for every edge $e\in E(P)$, set $\ell(e)\leftarrow\ell(e)\left(1+\frac{\epsilon c}{c(e)}\right)$;
(e) $\varphi\leftarrow\varphi\left(1+\frac{\epsilon c\cdot b(P)}{b}\right)$.
4. Return the scaled-down flow $f/\log_{1+\epsilon}\frac{1+\epsilon}{\delta}$.

Lemma C.4 The flow $f/\log_{1+\epsilon}\frac{1+\epsilon}{\delta}$ computed by Algorithm 4 is capacity-feasible and cost-feasible.

Proof: When the flow on an edge $e$ is increased by an additive amount of $a\cdot c(e)$, where $0\le a\le 1$, $\ell(e)$ is multiplicatively increased by factor $(1+a\epsilon)\ge(1+\epsilon)^a$. As $\ell(e)=\delta/c(e)$ initially and $\ell(e)<(1+\epsilon)/c(e)$ at the end, it grows by a multiplicative factor of at most $(1+\epsilon)/\delta=(1+\epsilon)^{\log_{1+\epsilon}((1+\epsilon)/\delta)}$ over the course of the algorithm. Therefore, the flow on $e$ is at most $c(e)\cdot\log_{1+\epsilon}\frac{1+\epsilon}{\delta}$ before scaling down, and so $f/\log_{1+\epsilon}\frac{1+\epsilon}{\delta}$ is capacity-feasible.
Similarly, every time the cost of the flow increases by an additive amount $ab$, where $0\le a\le 1$, the value of $\varphi$ is multiplicatively increased by factor $(1+a\epsilon)\ge(1+\epsilon)^a$. As $\varphi=\delta/b$ initially and $\varphi<(1+\epsilon)/b$ at the end, the value of $\varphi$ grows by at most factor $(1+\epsilon)/\delta$ over the course of the algorithm. Therefore, the cost of the final flow $f$ before the scaling down is at most $b\cdot\log_{1+\epsilon}\frac{1+\epsilon}{\delta}$, and so the flow $f/\log_{1+\epsilon}\frac{1+\epsilon}{\delta}$ is cost-feasible.

Next, we bound the number of augmentations in Algorithm 4, that is, the number of times that the "while" loop is executed.

Lemma C.5
1. If graph $G$ has unit edge capacities, and there is an $s$-$t$ cut of capacity $k$, then there are at most $\tilde{O}(k/\epsilon^2)$ augmentations. In particular, if $G$ is a simple graph with unit edge-capacities, then there are at most $\tilde{O}(n/\epsilon^2)$ augmentations.
2. If graph $G$ has at most $k$ edges with finite capacity, then there are at most $\tilde{O}(k/\epsilon^2)$ augmentations.

(If $G$ is undirected, it can be the case that $f(e)$ is decreased even while $\ell(e)$ is increased. This gives even more slack to the analysis.)

Proof: (1) By assumption, there is an $s$-$t$ cut $(S,\bar{S})$ with $|E(S,\bar{S})|=k$. In each augmentation, either $\varphi$ is increased by factor $(1+\epsilon)$, or there is some edge $e\in E(S,\bar{S})$ for which $\ell(e)$ is increased by factor $(1+\epsilon)$. Again, we have $\ell(e)=\delta/c(e)$ initially and $\ell(e)<(1+\epsilon)/c(e)$ at the end. Also, $\varphi=\delta/b$ initially and $\varphi<(1+\epsilon)/b$ at the end. So there can be at most $(k+1)\log_{1+\epsilon}\frac{1+\epsilon}{\delta}=\tilde{O}(k/\epsilon^2)$ augmentations.

(2) Let $E'$ be the set of edges with finite capacity. In each augmentation, either $\varphi$ is increased by factor $(1+\epsilon)$, or there is some edge $e\in E'$ for which $\ell(e)$ is increased by factor $(1+\epsilon)$. As before, we start with $\ell(e)=\delta/c(e)$, and at the end, $\ell(e)<(1+\epsilon)/c(e)$ holds. Similarly, $\varphi=\delta/b$ initially and $\varphi<(1+\epsilon)/b$ holds at the end.
Since the total number of edges with finite capacity is at most $k$, the total number of augmentations is bounded by $(k+1)\log_{1+\epsilon}\frac{1+\epsilon}{\delta}=\tilde{O}(k/\epsilon^2)$.

Lemma C.6 Flow $f/\log_{1+\epsilon}\frac{1+\epsilon}{\delta}$ is a $(1+O(\epsilon))$-approximate solution to the MBCF problem.

Proof: For each edge $e$, let $\mathcal{P}_e\subseteq\mathcal{P}_{s,t}$ be the set of all paths containing $e$. We first write the standard LP relaxation for MBCF and its dual LP (we can use the same relaxation for both undirected and directed graphs, except that the set $\mathcal{P}_{s,t}$ of paths is defined differently).

(P-LP): maximize $\sum_{P\in\mathcal{P}_{s,t}}f(P)$
subject to: $\sum_{P\in\mathcal{P}_e}f(P)\le c(e)$ for all $e\in E$; $\sum_{P\in\mathcal{P}_{s,t}}b(P)\cdot f(P)\le b$; and $f\ge 0$.

(D-LP): minimize $\sum_{e\in E}c(e)\ell(e)+b\varphi$
subject to: $\ell(P)+\varphi b(P)\ge 1$ for all $P\in\mathcal{P}_{s,t}$; and $\ell,\varphi\ge 0$.

Let $D(\ell,\varphi)=\sum_{e\in E}c(e)\ell(e)+b\varphi$, and let $\alpha(\ell,\varphi)$ be the length of the $(\ell+\varphi b)$-shortest $s$-$t$ path. Let $\ell_i$ be the edge-length function $\ell$ after the $i$-th execution of the while loop, and let $\varphi_i$ be defined similarly for $\varphi$. We denote $D(i)=D(\ell_i,\varphi_i)$ and $\alpha(i)=\alpha(\ell_i,\varphi_i)$ for convenience. We also denote by $P_i$ the path found in the $i$-th iteration and by $c_i$ the amount by which the flow on $P_i$ is augmented. Observe that:

$D(i)=\sum_{e\in E}c(e)\ell_{i-1}(e)+b\varphi_{i-1}+\sum_{e\in P_i}c(e)\cdot\left(\frac{\epsilon c_i}{c(e)}\cdot\ell_{i-1}(e)\right)+b\varphi_{i-1}\cdot\frac{\epsilon c_i\cdot b(P_i)}{b}=D(i-1)+\epsilon c_i\left(\ell_{i-1}(P_i)+\varphi_{i-1}b(P_i)\right)$.

Since $P_i$ is a $(1+\epsilon)$-approximate shortest path with respect to $(\ell_{i-1}+\varphi_{i-1}b)$:

$D(i)\le D(i-1)+\epsilon(1+\epsilon)c_i\alpha(i-1)$.

Let $\beta=\min_{\ell,\varphi}D(\ell,\varphi)/\alpha(\ell,\varphi)$ be the optimal value of the dual LP (D-LP). Then:

$D(i)\le D(i-1)+\epsilon(1+\epsilon)c_i D(i-1)/\beta\le D(i-1)\cdot e^{\epsilon(1+\epsilon)c_i/\beta}$.

If $t$ is the index of the last iteration, then $D(t)\ge 1$. Since $D(0)\le 2\delta m$:

$1\le D(t)\le 2\delta m\cdot e^{\epsilon(1+\epsilon)\sum_{i=1}^t c_i/\beta}$, and therefore $0\le\ln(2\delta m)+\epsilon(1+\epsilon)\sum_{i=1}^t c_i/\beta$. (4)

Let $F=\sum_{i=1}^t c_i$. Note that $F$ is exactly the total amount of flow by which we augment over all iterations.
Therefore, inequality (4) can be rewritten as: $\frac{F\epsilon(1+\epsilon)}{\beta}\ge\ln\frac{1}{2\delta m}$.

From Lemma C.4, since the scaled-down flow is a feasible solution for (P-LP), $F/\log_{1+\epsilon}\frac{1+\epsilon}{\delta}\le\beta$ must hold. It remains to show that $F/\log_{1+\epsilon}\frac{1+\epsilon}{\delta}\ge(1-O(\epsilon))\beta$:

$\frac{F/\log_{1+\epsilon}\frac{1+\epsilon}{\delta}}{\beta}\ge\frac{\ln(1/(2\delta m))}{\epsilon(1+\epsilon)}\cdot\frac{\ln(1+\epsilon)}{\ln\frac{1+\epsilon}{\delta}}=\frac{\ln(1/\delta)-\ln(2m)}{\epsilon(1+\epsilon)}\cdot\frac{\ln(1+\epsilon)}{\ln\frac{1+\epsilon}{\delta}}\ge\frac{(1-\epsilon)\ln(1/\delta)}{\epsilon(1+\epsilon)}\cdot\frac{\ln(1+\epsilon)}{\ln\frac{1+\epsilon}{\delta}}\ge 1-O(\epsilon)$,

where the penultimate inequality uses the fact that $\delta=(2m)^{-1/\epsilon}$, so $2m=(1/\delta)^{\epsilon}$ and $\ln(2m)=\epsilon\ln(1/\delta)$, and the last inequality holds because $\ln(1+\epsilon)\ge\epsilon-\epsilon^2/2$ and $\ln\frac{1+\epsilon}{\delta}\le(1+\epsilon)\ln(1/\delta)$.

C.2 Efficient Implementation Using Decremental SSSP

In this section, we complete the proofs of Theorem C.1 and Theorem C.2 by providing an efficient implementation of Algorithm 4 from Appendix C.1. The algorithm exploits the algorithm for decremental SSSP from Theorem 1.1, that we denote by $\mathcal{A}$. A similar technique was used in [Mad10] and in [CK19]. Our proofs are almost the same as the ones in [CK19], except that we need to take care of the cost function $b$.

C.2.1 Simple Graphs with Unit Edge Capacities

We start with the proof of Theorem C.1. Let $G=(V,E)$ be the input undirected simple graph with $n$ nodes and $m$ edges. We assume that all edge capacities are unit. Let $b\in\mathbb{R}^E_{>0}$ be the edge costs, with $b_{\max}=\max_e b(e)$, $b_{\min}=\min_e b(e)$, and $B=b_{\max}/b_{\min}$. Let $b$ be the cost bound. Let $\delta=(2m)^{-1/\epsilon}$ be the same as in Algorithm 4. For every edge $e\in E$, we let its weight be $w(e)=\ell(e)+\varphi b(e)$. We run Algorithm 4, but we will employ the algorithm $\mathcal{A}$ in order to compute $(1+\epsilon)$-approximate shortest $s$-$t$ paths in $G$, with respect to the edge weights $w(e)$. In order to do so, we construct another simple undirected graph $G'=(V',E')$ as follows. Let $K=\left\lceil\log_{1+\epsilon/3}\frac{1+\epsilon}{\delta}\right\rceil=O\left(\frac{\log m}{\epsilon^2}\right)$. Recall that, at the beginning of the algorithm, for every edge $e\in E$, we set $\ell(e)=\delta$, and we set $\varphi=\delta/b$.
Therefore, the initial weight of edge $e$ is $w(e)=\ell(e)+\varphi b(e)=\delta(1+b(e)/b)$. As long as the algorithm does not terminate, $\ell(e)<1$ and $\varphi<1/b$ must hold, so $w(e)<1+b(e)/b$. Therefore, over the course of the algorithm, $w(e)$ may grow from $\delta(1+b(e)/b)$ to at most $1+b(e)/b$. The idea is to discretize all possible values that the weight $w(e)$ may attain by powers of $(1+\epsilon/3)$.

Graph $G'=(V',E')$ is constructed as follows. For every vertex $v\in V$, we add $(K+1)$ vertices $v_0,\ldots,v_K$ to $V'$. For every edge $e=(u,v)\in E$, we add $(K+1)$ edges $e_0,\ldots,e_K$ to $E'$, where for each $0\le i\le K$, $e_i=(u_i,v_i)$, and the weight is $w'(e_i)=\delta(1+b(e)/b)(1+\epsilon/3)^i$. Additionally, for each original vertex $v\in V$ and index $i\in\{1,\ldots,K\}$, we add an edge $(v_0,v_i)$ of weight $w'(v_0,v_i)=0$ to $E'$. Note that $|V'|=O(nK)=O\left(\frac{n\log m}{\epsilon^2}\right)$.

We run the algorithm $\mathcal{A}$ from Theorem 1.1 on graph $G'$, where the length of each edge $e'$ is $w'(e')$, and the error parameter is $\epsilon/3$. Note that the ratio $L$ of largest to smallest non-zero edge length is $\frac{1+b_{\max}/b}{\delta(1+b_{\min}/b)}\le B/\delta$. Note that some edges of $G'$ have length $0$. However, as we show later, we will never ask a query between a pair of vertices that lie within distance $0$ from each other, and so, using Remark C.3, we can use algorithm $\mathcal{A}$, except that we need to replace the $\log L$ factor in its running time by factor $\log(Ln/\epsilon)=\log\frac{Bn}{\epsilon\delta}=O(\log(Bn)/\epsilon^{O(1)})$. Therefore, the total update time of the algorithm $\mathcal{A}$ is $\widehat{O}((|V'|^2\log B)/\epsilon^{O(1)})=\widehat{O}((n^2\log B)/\epsilon^{O(1)})$.

Next, we need to describe the sequence of edge deletions in graph $G'$. The edges are deleted according to the following rule. For every edge $e=(u,v)\in E$, we delete an edge $e_i=(u_i,v_i)\in E'$ when $w'(e_i)$ becomes smaller than $\ell(e)+\varphi b(e)$. These are the only edge deletions in $G'$.
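To make the MWU loop concrete, the following is a slow reference implementation of Algorithm 4 in Python. It substitutes a plain Dijkstra computation (an exact, and hence trivially $(1+\epsilon)$-approximate, shortest-path oracle) for the decremental SSSP data structure, so it does not achieve the running time of Theorem C.1; the function and variable names are ours, and all capacities are assumed finite for simplicity.

```python
import heapq
import math

def shortest_path(n, adj, w, s, t):
    """Dijkstra from s; returns the list of edge ids on a shortest s-t path
    with respect to the edge weights w, or None if t is unreachable."""
    dist = [math.inf] * n
    prev = [None] * n              # prev[v] = (edge id, predecessor)
    dist[s] = 0.0
    pq = [(0.0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for eid, v in adj[u]:
            nd = d + w[eid]
            if nd < dist[v]:
                dist[v] = nd
                prev[v] = (eid, u)
                heapq.heappush(pq, (nd, v))
    if math.isinf(dist[t]):
        return None
    path, v = [], t
    while v != s:                  # walk predecessors back to s
        eid, u = prev[v]
        path.append(eid)
        v = u
    return path

def mbcf_mwu(n, edges, cap, cost, s, t, budget, eps):
    """Algorithm 4 with exact Dijkstra as the shortest-path oracle.
    Returns the scaled-down flow, which is capacity- and cost-feasible
    (Lemma C.4).  edges is a list of undirected pairs (u, v)."""
    m = len(edges)
    adj = [[] for _ in range(n)]
    for i, (u, v) in enumerate(edges):
        adj[u].append((i, v))
        adj[v].append((i, u))
    delta = (2 * m) ** (-1.0 / eps)
    ell = [delta / cap[i] for i in range(m)]   # edge lengths
    phi = delta / budget                       # cost variable
    f = [0.0] * m
    while sum(cap[i] * ell[i] for i in range(m)) + budget * phi < 1.0:
        w = [ell[i] + phi * cost[i] for i in range(m)]
        path = shortest_path(n, adj, w, s, t)
        if path is None:
            break
        bP = sum(cost[i] for i in path)
        c = min(min(cap[i] for i in path),
                budget / bP if bP > 0 else math.inf)
        for i in path:                         # augment; update lengths
            f[i] += c
            ell[i] *= 1.0 + eps * c / cap[i]
        phi *= 1.0 + eps * c * bP / budget
    scale = math.log((1.0 + eps) / delta) / math.log(1.0 + eps)
    return [fi / scale for fi in f]
```

By Lemma C.4, the returned scaled-down flow is capacity- and cost-feasible no matter which $(1+\epsilon)$-approximate shortest-path oracle is used; the role of the graph $G'$ above is precisely to answer these path queries quickly as the weights grow monotonically.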
Lastly, we show that, given a $(1+\epsilon/3)$-approximate shortest $s$-$t$ path in $G'$ (with respect to the edge lengths $w'(e')$), we can efficiently obtain a $(1+\epsilon)$-approximate shortest $s$-$t$ path in $G$ (with respect to the edge lengths $w(e)$).

Claim C.7 At any time before Algorithm 4 terminates, given any $(1+\epsilon/3)$-approximate $w'$-shortest $s$-$t$ path $P'$ in $G'$, we can construct, in time $O(|P'|)$, a $(1+\epsilon)$-approximate $w$-shortest $s$-$t$ path $P$ in $G$.

Proof: Since we assume that Algorithm 4 did not yet terminate, $\sum_{e\in E}\ell(e)+b\varphi<1$, and so for every edge $e\in E$, $\delta(1+b(e)/b)\le\ell(e)+\varphi b(e)\le 1+b(e)/b$. From the definition of the edge deletion sequence, if $i'$ is the smallest index for which the edge $e_{i'}$ lies in $E'$, then $\ell(e)+\varphi b(e)\le w'(e_{i'})<(\ell(e)+\varphi b(e))(1+\epsilon/3)$.

Let dist denote the distance from $s$ to $t$ in $G$ with respect to the edge lengths $w(e)$, and let dist$'$ denote the distance from $s$ to $t$ in $G'$ with respect to the edge lengths $w'(e')$. Then dist $\le$ dist$'$ $<$ dist $\cdot(1+\epsilon/3)$ must hold.

Assume now that we are given a $(1+\epsilon/3)$-approximate shortest $s$-$t$ path $P'$ in $G'$ with respect to the edge lengths $w'(e')$. Then, by contracting every subpath $(v_i,v_0,v_j)\subseteq P'$ of length $0$ that corresponds to the same node $v$ of $G$, we obtain an $s$-$t$ path $P$ in $G$, whose length is at most $w'(P')\le(1+\epsilon/3)\,$dist$'<(1+\epsilon/3)^2\,$dist$\,\le(1+\epsilon)\,$dist.

Our algorithm only employs query path-query for the vertex $t$. In particular, it is easy to see that the distance from $s$ to $t$ is always non-zero. Therefore, we obtain a correct implementation of Algorithm 4. We now analyze its running time. As already observed, the total running time needed to maintain the data structure from Theorem 1.1 is $\widehat{O}((n^2\log B)/\epsilon^{O(1)})$. Observe that in every iteration of Algorithm 4, we employ a single call to path-query$(t)$ in graph $G'$. Each such query takes $\widehat{O}(|P|\log\log(Ln/\epsilon))=\widehat{O}(n\log\log(B/\epsilon))$ time to return a path $P$ and, by Lemma C.5, the number of queries is bounded by $\tilde{O}(n/\epsilon^2)$.
Therefore, the total time needed to respond to all queries is bounded by $\widehat{O}\left((n^2\log B)/\epsilon^{O(1)}\right)$. The running time of the other steps for implementing Algorithm 4, such as maintaining $\ell$ and $\varphi$, is subsumed by these bounds. Altogether, the total running time is $\widehat{O}\left(n^2\cdot\frac{\log B}{\epsilon^{O(1)}}\right)$.

C.2.2 Vertex-Capacitated Graphs

We now complete the proof of Theorem C.2. Our proof is almost identical to that of [CK19]. Let $G=(V,E)$ be the input undirected simple graph with $n$ nodes and $m$ edges. Let $b\in\mathbb{R}^V_{>0}$ be the vertex costs, with $b_{\max}=\max_v b(v)$, $b_{\min}=\min_v b(v)$, and $B=b_{\max}/b_{\min}$. Let $b$ be the cost bound. Additionally, let $c\in\mathbb{R}^V_{>0}$ be the vertex capacities, with $c_{\max}=\max_v c(v)$, $c_{\min}=\min_v c(v)$, and $C=c_{\max}/c_{\min}$.

We proceed as follows. First, we use a standard reduction from vertex-capacitated flow problems in undirected graphs to edge-capacitated flow problems in directed graphs, constructing a directed graph $G''$ with capacities on edges. We will run Algorithm 4 on $G''$. In order to compute approximate $(\ell+\varphi b)$-shortest $s$-$t$ paths in $G''$, we will employ the algorithm $\mathcal{A}$ for decremental SSSP from Theorem 1.1 in another graph $G'$ – a simple undirected edge-weighted graph that we will construct. We now describe the construction of both graphs.

Graph $G''$. We construct a directed graph $G''=(V'',E'')$ with edge capacities $c''(e)$ and edge costs $b''(e)$ for $e\in E''$, using a standard reduction from the input graph $G=(V,E)$. The set $V''$ of vertices contains, for every vertex $v\in V$ of the original graph, a pair $v_{\mathrm{in}},v_{\mathrm{out}}$ of vertices. Additionally, we add a directed edge $(v_{\mathrm{in}},v_{\mathrm{out}})$ of capacity $c''(v_{\mathrm{in}},v_{\mathrm{out}})=c(v)$ and cost $b''(v_{\mathrm{in}},v_{\mathrm{out}})=b(v)$ to $E''$. For each undirected edge $(u,v)\in E$, we add a pair of new edges $(v_{\mathrm{out}},u_{\mathrm{in}}),(u_{\mathrm{out}},v_{\mathrm{in}})$ to $E''$, both with capacity $\infty$ and cost $0$.
This completes the definition of the graph $G''$, that we view as a flow network with source $s_{\mathrm{out}}$ and destination $t_{\mathrm{in}}$. Observe that for any $s$-$t$ flow $f$ in $G$, there is a corresponding $s_{\mathrm{out}}$-$t_{\mathrm{in}}$ flow $f''$ in $G''$, of the same value and cost, such that $f$ is capacity-feasible iff $f''$ is capacity-feasible. Similarly, any $s_{\mathrm{out}}$-$t_{\mathrm{in}}$ flow $f''$ in $G''$ can be transformed, in time $O(m)$, into an $s$-$t$ flow $f$ of the same value and the same cost in $G$, such that $f$ is capacity-feasible iff $f''$ is capacity-feasible. We run Algorithm 4 on $G''$, and maintain edge lengths $\ell''\in\mathbb{R}^{E''}_{\ge 0}$ and a value $\varphi\ge 0$. It now remains to show how we compute a $(1+\epsilon)$-approximate $(\ell''+\varphi b'')$-shortest $s_{\mathrm{out}}$-$t_{\mathrm{in}}$ path in graph $G''$. In order to do so, we define a new graph $G'$, on which we will run the algorithm $\mathcal{A}$ for decremental SSSP.

As before, for every edge $e\in E''$, we maintain a weight $w''(e)=\ell''(e)+\varphi b''(e)$. Recall that, at the beginning of the algorithm, we set $\ell''(e)=\delta/c''(e)$ if $c''(e)$ is finite, and we set $\ell''(e)=0$ otherwise. We also set $\varphi=\delta/b$. Therefore, initially, $w''(e)=\delta(1/c''(e)+b''(e)/b)$ (if $c''(e)=\infty$, then $w''(e)=\delta b''(e)/b=0$, and it remains $0$ throughout the algorithm). As long as the algorithm does not terminate, $\ell''(e)<1/c''(e)$ and $\varphi<1/b$ must hold, so $w''(e)<1/c''(e)+b''(e)/b$. Therefore, over the course of the algorithm, $w''(e)$ may increase from $\delta(1/c''(e)+b''(e)/b)$ to $1/c''(e)+b''(e)/b$.

Graph $G'$. We construct an undirected simple graph $G'=(V',E')$ from the original input graph $G=(V,E)$. We first place weights on the vertices of $G'$, and later turn it into an edge-weighted graph. As before, we let $K=\left\lceil\log_{1+\epsilon/3}\frac{1+\epsilon}{\delta}\right\rceil=O\left(\frac{\log m}{\epsilon^2}\right)$. For every vertex $v\in V\setminus\{s,t\}$, we add $K+1$ new vertices $v_0,\ldots,v_K$ to $V'$, and for all $0\le i\le K$, we set the weight $w(v_i)=\delta\left(\frac{1}{c(v)}+\frac{b(v)}{b}\right)\cdot(1+\epsilon/3)^i$. For each edge $e=(u,v)\in E$ in the original graph, we add the $(K+1)^2$ new edges $e_{i,j}=(u_i,v_j)$, for all $i,j\in\{0,\ldots,K\}$, to $E'$. We also add two new vertices $s$ and $t$ to $V'$, with weight $0$. For each edge $e=(s,u)\in E$, for all $0\le i\le K$, we add an edge $e^s_i=(s,u_i)$ to $E'$. Similarly, for each edge $e=(u,t)\in E$, for all $0\le i\le K$, we add an edge $e^t_i=(u_i,t)$ to $E'$. Observe that $|V'|=O(nK)=O\left(\frac{n\log n}{\epsilon^2}\right)$.

We would like to run the algorithm $\mathcal{A}$ for decremental SSSP on the graph $G'$. However, $G'$ has weights on vertices and not edges. This can be easily fixed as follows. For each edge $e=(u,v)$ in $G'$, we let its weight be $w(e)=(w(u)+w(v))/2$. Since $w(s)=w(t)=0$, for every $s$-$t$ path $P'$ in $G'$, the total weight of all edges on $P'$ equals the total weight of all vertices on $P'$. Note that all edge weights are non-zero.

We run the algorithm $\mathcal{A}$ from Theorem 1.1 on graph $G'$, where the length of each edge $e$ is $w(e)$, and the error parameter is $\epsilon/3$. The ratio $L$ of largest to smallest edge length in $G'$ is $L=\frac{1/c_{\min}+b_{\max}/b}{\delta(1/c_{\max}+b_{\min}/b)}\le\frac{CB}{\delta}$. By Theorem 1.1, the total running time of $\mathcal{A}$ is $\widehat{O}\left(\frac{(nK)^2\log L}{\epsilon^{O(1)}}\right)=\widehat{O}\left(\frac{n^2\cdot\log(CB)}{\epsilon^{O(1)}}\right)$.

Next, we need to describe the sequence of edge deletions from the graph $G'$. The edges are deleted according to the following rule. For every vertex $v\in V$ of the original graph, for every $0\le i\le K$, whenever $w''(v_{\mathrm{in}},v_{\mathrm{out}})>w(v_i)$, we delete all edges incident to $v_i$ from $G'$. For convenience, we say that vertex $v_i$ becomes eliminated. We use the following analogue of Claim C.7.
Claim C.8 Throughout the execution of Algorithm 4, given any $(1+\epsilon/3)$-approximate shortest $s$-$t$ path $P'$ in $G'$ with respect to the edge lengths $w(e)$, we can construct, in time $O(|P'|)$, a $(1+\epsilon)$-approximate shortest $s_{\mathrm{out}}$-$t_{\mathrm{in}}$ path $P''$ in $G''$, with respect to the edge lengths $w''(e)$.

Proof: Let $P''$ be the shortest $s_{\mathrm{out}}$-$t_{\mathrm{in}}$ path in graph $G''$, with respect to the edge lengths $w''$, and assume that $P''=(s_{\mathrm{out}},v^1_{\mathrm{in}},v^1_{\mathrm{out}},\ldots,v^z_{\mathrm{in}},v^z_{\mathrm{out}},t_{\mathrm{in}})$. Let $W''$ denote the length of the path $P''$. For all $1\le j\le z$, let $e_j=(v^j_{\mathrm{in}},v^j_{\mathrm{out}})$. Since Algorithm 4 did not yet terminate, $w''(e_j)<1/c''(e_j)+b''(e_j)/b$. Therefore, if we let $i_j$ be the smallest index such that vertex $v^j_{i_j}$ of $G'$ is not yet eliminated, then $w(v^j_{i_j})\le w''(e_j)(1+\epsilon/3)$. Consider the following path in $G'$: $P'=(s,v^1_{i_1},v^2_{i_2},\ldots,v^z_{i_z},t)$. Since no vertex on this path is eliminated, the path is indeed still contained in $G'$. The total weight of the vertices on this path is bounded by $(1+\epsilon/3)W''$. From the above discussion, the total $w$-weight of the edges on this path is then also bounded by $(1+\epsilon/3)W''$.

We denote by dist$''$ the distance from $s_{\mathrm{out}}$ to $t_{\mathrm{in}}$ in $G''$, with respect to the edge lengths $w''(e)$, and we denote by dist$'$ the distance from $s$ to $t$ in graph $G'$, with respect to the edge lengths $w(e)$. From the above discussion, dist$'\le(1+\epsilon/3)\,$dist$''$.

Consider now a $(1+\epsilon/3)$-approximate shortest $s$-$t$ path $P'$ in graph $G'$, with respect to the edge lengths $w(e)$, so the total weight of all edges on $P'$ is at most $(1+\epsilon/3)\,$dist$'\le(1+\epsilon/3)^2\,$dist$''\le(1+\epsilon)\,$dist$''$. Assume that $P'=(s,v^1_{j_1},v^2_{j_2},\ldots,v^z_{j_z},t)$. Consider the following path in graph $G''$: $P''=(s_{\mathrm{out}},v^1_{\mathrm{in}},v^1_{\mathrm{out}},\ldots,v^z_{\mathrm{in}},v^z_{\mathrm{out}},t_{\mathrm{in}})$. Note that for all $1\le j\le z$, the weight $w''(v^j_{\mathrm{in}},v^j_{\mathrm{out}})\le w(v^j_{j_j})$ must hold (or vertex $v^j_{j_j}$ would have been eliminated).
Therefore, the total weight $w''(e)$ of the edges of $P''$ is bounded by the total weight of the vertices of $P'$, which in turn is equal to the total weight $w(e')$ of the edges of $P'$, which is bounded by $(1+\epsilon)\,$dist$''$.

From the above claim, in every iteration of Algorithm 4, we can use path-query$(t)$ in graph $G'$ in order to compute a $(1+\epsilon)$-approximate shortest $s_{\mathrm{out}}$-$t_{\mathrm{in}}$ path in $G''$, with respect to the edge lengths $w''=\ell''+\varphi b''$. It now remains to analyze the running time of the algorithm. Each query to the decremental SSSP data structure takes time $\widehat{O}(|P|\log\log L)=\widehat{O}(n\log\log(BC)/\epsilon^{O(1)})$ to return a path $P$ and, from Lemma C.5(2), there are at most $\tilde{O}(n/\epsilon^2)$ such queries. Therefore, the total time spent on responding to the queries is $\widehat{O}\left(\frac{n^2\log(BC)}{\epsilon^{O(1)}}\right)$. As observed above, the total running time for maintaining the decremental SSSP data structure is $\widehat{O}\left(\frac{n^2\cdot\log(CB)}{\epsilon^{O(1)}}\right)$. The running time of the other steps for implementing Algorithm 4, such as maintaining $\ell''$ and $\varphi$, is subsumed by these running times. Overall, the total running time is $\widehat{O}\left(\frac{n^2\cdot\log(CB)}{\epsilon^{O(1)}}\right)$.

D Additional Applications

In this section, we describe additional applications of decremental SSSP, and some new results that follow from our algorithm from Theorem 1.1, as well as additional results that could be obtained from the algorithm of [CK19].

D.1 Concurrent $k$-commodity Bounded-Cost Flow

In the concurrent $k$-commodity bounded-cost flow problem, we are given a graph $G$ with capacities and costs on either edges or nodes, and a cost bound $b$. We are also given $k$ demands represented by tuples $(s_1,t_1,d_1),\ldots,(s_k,t_k,d_k)$, where for all $1\le i\le k$, $s_i$ and $t_i$ are vertices of $G$, that we refer to as a demand pair, and $d_i$ is a non-negative real number. The goal is to find a largest value $\lambda>0$, together with, for each $i\in\{1,\ldots,k\}$, an $s_i$-$t_i$ flow $f_i$ of value $\lambda d_i$ (that is, flow $f_i$ routes $\lambda d_i$ units of flow from $s_i$ to $t_i$). We say that the resulting flow $f=\bigcup_i f_i$ is edge-capacity-feasible iff for all $e\in E$, $\sum_{i=1}^k f_i(e)\le c(e)$. We say that the flow $f$ is edge-cost-feasible if $\sum_{e\in E}\left(\sum_{i\le k}f_i(e)\right)b(e)\le b$. The goal is to maximize $\lambda$, while ensuring that the resulting flow $f$ is both capacity-feasible and cost-feasible. The problem with vertex capacities and costs is defined analogously.

The concurrent $k$-commodity flow problem is defined in the same way, except that we no longer have costs on edges or vertices, and we do not require that the flow is cost-feasible.

We denote by $T_{\mathrm{MBCF}}(n,m,\epsilon,B,C)$ the time needed for computing a $(1+\epsilon)$-approximate solution for an MBCF problem instance, in a graph with $n$ nodes and $m$ edges, where $B$ is the ratio of largest to smallest (edge or vertex) costs, and $C$ is the ratio of largest to smallest (edge or vertex) capacities. We use the following result.

Lemma D.1 (Concurrent $k$-commodity bounded-cost flow [GK96, Fle00]) There is an algorithm that, given a graph $G$ with $n$ nodes, $m$ edges, (edge or vertex) capacities $c$, where $C$ is the ratio of largest to smallest capacity, (edge or vertex) costs $b$, where $B$ is the ratio of largest to smallest cost, and a set of $k$ demands, computes a $(1+\epsilon)$-approximate concurrent $k$-commodity bounded-cost flow in time $\tilde{O}\left(\frac{k}{\epsilon^2}\cdot T_{\mathrm{MBCF}}(n,m,\epsilon,BC/\delta,C)\cdot\log(BC)\right)$, where $\delta=(2m)^{-1/\epsilon}$.

Proof: [Sketch] A similar lemma was shown by Garg and Könemann in Section 6 of [GK96]. However, the term $T_{\mathrm{MBCF}}(n,m,\epsilon,BC/\delta,C)$ in [GK96] was the time for computing an exact min-cost flow. We sketch here why a $(1+\epsilon)$-approximate solution for MBCF is sufficient.

For any commodity $1\le i\le k$ and edge lengths $\ell\in\mathbb{R}^E_{\ge 0}$, let $\mathrm{mincost}_i(\ell)$ be the minimum cost for sending a flow of $d_i$ units from $s_i$ to $t_i$ in $G=(V,E,c)$, where the edge costs are defined by $\ell$.
It was shown in [GK96] that, in order to solve the concurrent $k$-commodity bounded-cost flow problem, it is enough to solve the following problem $O(\epsilon^{-2}k\log k\log m)$ times: given $i$ and $\ell$, compute an $s_i$-$t_i$ flow $f_{i,\ell}$ of value $d_i$ and cost $\mathrm{mincost}_i(\ell)$ w.r.t. $\ell$.

Let $\mathcal{A}$ be the $(1+\epsilon)$-approximation algorithm for MBCF. We claim that, by calling this algorithm $O(\log(BC))$ times, we can compute an $s_i$-$t_i$ flow $f'_{i,\ell}$ such that the value of $f'_{i,\ell}$ is exactly $d_i/(1+\epsilon)$, and its cost is at most $\mathrm{mincost}_i(\ell)$. Indeed, observe that, when given $\mathrm{mincost}_i(\ell)$ as a cost bound to $\mathcal{A}$, algorithm $\mathcal{A}$ will return a flow of value at least $d_i/(1+\epsilon)$. By scaling, we obtain a flow of value exactly $d_i/(1+\epsilon)$ and cost at most $\mathrm{mincost}_i(\ell)$.

By following the analysis of [GK96], it is easy to see that we can use $f'_{i,\ell}$ instead of $f_{i,\ell}$, for any given $i$ and $\ell$. Every step in the analysis works as is, except that the value of the solution at the end is reduced by factor $(1+\epsilon)$.

By plugging Theorem C.1 and Theorem C.2 into the above lemma, we obtain the following corollary:

Corollary D.2 There is a deterministic algorithm for computing a $(1+\epsilon)$-approximate concurrent $k$-commodity bounded-cost flow in time $\widehat{O}\left(\frac{kn^2\log(BC)}{\epsilon^{O(1)}}\right)$ on either:
• undirected simple graphs with unit edge-capacities and arbitrary edge-costs; or
• undirected simple graphs with arbitrary vertex-capacities and vertex-costs.

Our algorithm for concurrent $k$-commodity flow is slower than the $\tilde{O}(mk)$ algorithm of Sherman [She17]. However, in the bounded-cost version, our algorithms are faster than the previous best algorithms whenever $m=\omega(n^{1.5+o(1)})$ and $k=O((m/n)^2)$; see Table 3. We note that the algorithm for vertex-capacitated graphs can also be obtained from the results of [CK19], except that the resulting algorithm would be randomized.
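As a toy illustration of the binary-search coupling between MBCF and cost/value targets used throughout this appendix (an exact MBCF algorithm yields min-cost flow via binary search, and the proof of Lemma D.1 calls the MBCF oracle with a cost bound to hit a flow-value target), the following sketch searches for the approximately smallest cost bound under which an MBCF oracle achieves a given flow value. The oracle `mbcf_value` is a stand-in of ours, not an API from the paper.

```python
import math

def min_budget_for_value(mbcf_value, target, b_lo, b_hi, rel_tol=0.01):
    """Geometric binary search over the cost bound.

    mbcf_value(b) is an oracle returning the maximum s-t flow value
    achievable with total cost at most b; it is nondecreasing in b.
    Returns a budget b with mbcf_value(b) >= target that is within a
    (1 + rel_tol) factor of the smallest such budget in [b_lo, b_hi].
    """
    assert mbcf_value(b_hi) >= target, "target unachievable within b_hi"
    while b_hi / b_lo > 1.0 + rel_tol:
        mid = math.sqrt(b_lo * b_hi)      # bisect in the log domain
        if mbcf_value(mid) >= target:
            b_hi = mid                    # budget mid suffices: shrink up
        else:
            b_lo = mid                    # budget mid too small: shrink down
    return b_hi
```

The number of oracle calls is logarithmic in $\log(b_{hi}/b_{lo})/\mathrm{rel\_tol}$, which is where the $O(\log(BC))$ factor of calls in the proof of Lemma D.1 comes from.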
D.2 Maximum $k$-commodity Bounded-Cost Flow

In the maximum $k$-commodity bounded-cost flow problem, we are given a graph $G$ with capacities and costs on either edges or nodes, and a cost bound $b$. We are also given $k$ demand pairs $(s_1,t_1),\ldots,(s_k,t_k)$. The goal is to compute, for all $i\in\{1,\ldots,k\}$, an $s_i$-$t_i$ flow $f_i$, with the following constraints. Let $f=\bigcup_i f_i$ be the resulting $k$-commodity flow. We say that the flow is edge-capacity-feasible if $\sum_{i=1}^k f_i(e)\le c(e)$ for all $e\in E$, and we say that it is edge-cost-feasible if $\sum_{e\in E}\left(\sum_{i=1}^k f_i(e)\right)b(e)\le b$. Let $|f_i|$ denote the value of the flow $f_i$ – the amount of flow sent from $s_i$ to $t_i$. The goal is to find flows $f_1,\ldots,f_k$ that maximize $\sum_i|f_i|$, such that the resulting flow $f=\bigcup_i f_i$ is both edge-capacity-feasible and edge-cost-feasible. The problem with vertex capacities and costs is defined similarly.

The maximum $k$-commodity flow problem is defined similarly, except that there are no costs on edges or vertices, and we do not require that the flow $f$ is cost-feasible.

We obtain the following corollary.

Corollary D.3 There is a deterministic algorithm that, given a graph $G$ with $n$ nodes, $m$ edges, capacities $c$, where $C$ is the ratio of largest to smallest capacity, costs $b$, where $B$ is the ratio of largest to smallest cost, and a set of $k$ demand pairs, computes a $(1+\epsilon)$-approximate maximum $k$-commodity bounded-cost flow in time $\widehat{O}\left(\frac{kn^2\log(BC)}{\epsilon^{O(1)}}\right)$ on either:
• undirected simple graphs with unit edge-capacities and arbitrary edge-costs; or
• undirected simple graphs with arbitrary vertex-capacities and vertex-costs.

As before, the result for vertex-capacitated graphs could also be obtained from [CK19], except that the resulting algorithm would be randomized. To the best of our knowledge, unlike for the concurrent $k$-commodity flow problem, there is no black-box reduction from maximum $k$-commodity flow or maximum $k$-commodity bounded-cost flow to MBCF.
However, Corollary D.3 can be proved by extending the MWU-based technique used in Appendix C to the maximum $k$-commodity bounded-cost flow problem, and employing the algorithm for decremental SSSP for executing each iteration efficiently; we omit the proof.

Our algorithm for maximum $k$-commodity flow is faster than the $O(k^{O(1)}m^{4/3}/\epsilon^{O(1)})$-time algorithm by [KMP12] and the $\tilde{O}(m^2/\epsilon^2)$-time algorithm by [Fle00] whenever $m=\omega(n^{1.5+o(1)})$ and $k=O((m/n)^2)$. The same bounds hold in the bounded-cost version and in the vertex-capacitated setting: our algorithms are faster than the previous best algorithms whenever $m=\omega(n^{1.5+o(1)})$ and $k=O((m/n)^2)$; see Table 3.

Lastly, we describe several additional applications of the decremental SSSP problem, that can be obtained from the algorithm of [CK19] (as well as from our algorithm from Theorem 1.1).

D.3 Most-Balanced Sparsest Vertex Cut

Given a graph $G=(V,E)$, a vertex cut is a partition $(A,B,C)$ of the vertex set $V$, such that there are no edges between $A$ and $C$, and $A,C\neq\emptyset$. The sparsity of the cut $(A,B,C)$ is $h_G(A,B,C)=\frac{|B|}{\min\{|A|,|C|\}+|B|}$. We say that a cut $(A,B,C)$ is $\varphi$-sparse if $h_G(A,B,C)<\varphi$. A most-balanced $\varphi$-sparse cut is a $\varphi$-sparse cut $(A,B,C)$ such that $\min\{|A|,|C|\}$ is maximized. The vertex expansion of a graph $G$ is $h_G=\min\{h_G(A,B,C)\mid(A,B,C)\text{ is a vertex cut of }G\}$.

In the $\alpha$-approximate most-balanced sparsest vertex cut problem, we are given a graph $G=(V,E)$ and the parameter $h_G$. Let $(A',B',C')$ be a most-balanced $h_G$-sparse cut. The goal is to find a vertex cut $(A,B,C)$ with $h_G(A,B,C)\le\alpha\cdot h_G(A',B',C')$, such that $\min\{|A|,|C|\}\ge\min\{|A'|,|C'|\}/3$. The following result follows from the algorithm of [CK19].
Lemma D.4 (Most-balanced sparsest vertex cut) There is a randomized algorithm that, given a graph $G$ with $n$ nodes, computes an $O(\log n)$-approximate most-balanced sparsest vertex cut in time $O(T_{\mathrm{mf}}(n,O(1),n)\log n)$, where $T_{\mathrm{mf}}(n,\epsilon,C)$ is the time required to compute a $(1+\epsilon)$-approximate maximum $s$-$t$ flow and a $(1+\epsilon)$-approximate minimum $s$-$t$ cut in an $n$-vertex graph with vertex capacities, where $C$ is the ratio of largest to smallest vertex capacity.

The lemma follows from the cut-matching game framework of Khandekar, Rao, and Vazirani [KRV09]. The algorithm of [KRV09] is designed to compute a sparsest cut or a minimum balanced cut in edge-capacitated graphs, but this is only because it relies on maximum flow / minimum cut computations in edge-capacitated graphs. By computing approximate maximum flow / minimum cut in vertex-capacitated graphs, one can immediately obtain Lemma D.4. By plugging Theorem C.2 into the above lemma, we obtain the following corollary:

Corollary D.5 There is a randomized algorithm for computing an $O(\log n)$-approximate most-balanced sparsest vertex cut in a given $n$-vertex graph $G$, in expected time $\widehat{O}(n^2)$.

D.4 Treewidth and Tree Decompositions

Given a graph $G=(V,E)$, a tree decomposition of $G$ consists of a tree $T$, and, for every tree-node $a\in V(T)$, a subset $X_a\subseteq V$ of vertices of $G$, that satisfy the following conditions: (i) for each edge $(u,v)\in E$ of $G$, there is a tree-node $a\in V(T)$ with $u,v\in X_a$; and (ii) for each vertex $u\in V$ of $G$, all tree-nodes $a\in V(T)$ with $u\in X_a$ induce a non-empty connected subgraph of $T$. The width of the tree decomposition is $\max_{a\in V(T)}|X_a|-1$. The treewidth of $G$ is the minimum width of a tree decomposition of $G$.
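The two conditions in this definition are straightforward to check mechanically. The following small helper (our own naming, not from the paper) validates a candidate tree decomposition and returns its width:

```python
from collections import defaultdict, deque

def decomposition_width(n, graph_edges, tree_edges, bags):
    """Checks conditions (i) and (ii) of a tree decomposition and returns
    its width max_a |X_a| - 1, or None if the decomposition is invalid.
    bags[a] is the set X_a for tree node a; tree_edges are edges of T."""
    # Condition (i): every edge of G is contained in some bag.
    for u, v in graph_edges:
        if not any(u in X and v in X for X in bags):
            return None
    adj = defaultdict(list)
    for a, b in tree_edges:
        adj[a].append(b)
        adj[b].append(a)
    # Condition (ii): for every vertex of G, the tree nodes whose bags
    # contain it must induce a non-empty connected subgraph of T.
    for v in range(n):
        nodes = {a for a, X in enumerate(bags) if v in X}
        if not nodes:
            return None
        start = next(iter(nodes))
        seen, queue = {start}, deque([start])
        while queue:                       # BFS restricted to `nodes`
            a = queue.popleft()
            for b in adj[a]:
                if b in nodes and b not in seen:
                    seen.add(b)
                    queue.append(b)
        if seen != nodes:
            return None
    return max(len(X) for X in bags) - 1
```

For example, the path graph on vertices $0,\ldots,3$ admits the width-1 decomposition with bags $\{0,1\},\{1,2\},\{2,3\}$ arranged along a path of tree nodes.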
Treewidth and tree decompositions are used extensively in algorithmic graph theory and in fixed-parameter tractable (FPT) algorithms.

The following lemma reduces the problem of approximating treewidth to the most-balanced sparsest vertex cut problem, using standard techniques. We omit the proof; see also [BGHK95].

Lemma D.6 There is an algorithm that, given an n-vertex graph G and a parameter α, computes a tree decomposition of G of width O(kα log n), where k is the treewidth of G, in time Õ(T_svc(n, α)), where T_svc(n, α) is the time needed for computing an α-approximate most-balanced sparsest vertex cut in G.

By plugging Corollary D.5 into the above lemma, we obtain the following corollary:

Corollary D.7 There is a deterministic algorithm that, given an n-vertex graph G, computes a tree decomposition of G of width O(k log^2 n), where k is the treewidth of G, in time Ô(n^2).

For comparison, given a graph with n nodes and treewidth k, previous algorithms either have running time depending exponentially on k [RS95, Ami01, Ami10, Bod96, BDD+16], or have a large polynomial running time [BGHK95, Ami01, Ami10, FHL08]. One exception is the algorithm by Fomin et al. [FLS+18], which computes an O(k)-approximation of treewidth in time O(k^{O(1)} n log n); see Table 5 for a summary. Our algorithm is faster than that of [FLS+18] when k ≥ n^{Ω(1)}, and it also gives a better approximation.

Although most fixed-parameter tractable (FPT) algorithms concern only graphs whose treewidth is constant (k = O(1)) or very small, there is a recent line of work on fully-polynomial FPT algorithms [FLS+18, IOO18] for many fundamental graph problems, including maximum matching and maximum flow, and for matrix problems, including computing determinants and solving linear systems, on instances whose treewidth may be polynomially large. In those settings, the approximation factor of O(log n) from Corollary D.7 is of interest.
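The standard reduction behind Lemma D.6 can be sketched as follows (a toy simplification, entirely our own: a brute-force balanced-separator oracle stands in for the approximate most-balanced sparsest vertex cut, and the full boundary set is passed down each branch, so the resulting width is valid but weaker than the O(kα log n) guarantee of the lemma): repeatedly find a balanced vertex separator B, make a root bag out of B together with the inherited boundary, and recurse on each remaining component.

```python
# Toy sketch (ours) of the separator-based tree-decomposition scheme.
import itertools

def components(adj, verts):
    """Connected components of the subgraph induced on verts."""
    verts, comps = set(verts), []
    while verts:
        start = verts.pop()
        comp, stack = {start}, [start]
        while stack:
            u = stack.pop()
            for v in adj[u]:
                if v in verts:
                    verts.remove(v)
                    comp.add(v)
                    stack.append(v)
        comps.append(comp)
    return comps

def brute_separator(adj, verts):
    """Smallest B whose removal leaves pieces of size <= 2|verts|/3
    (a brute-force stand-in for the sparsest-vertex-cut oracle)."""
    for r in range(len(verts) + 1):
        for B in map(set, itertools.combinations(sorted(verts), r)):
            comps = components(adj, verts - B)
            if all(len(c) <= 2 * len(verts) / 3 for c in comps):
                return B, comps

def decompose(adj, verts, boundary, bags, tree_edges):
    """Build a valid (not width-optimal) tree decomposition; return root id."""
    root = len(bags)
    if len(verts) <= 2:                       # base case: one bag
        bags.append(verts | boundary)
        return root
    B, comps = brute_separator(adj, verts)
    bags.append(B | boundary)                 # root bag: separator + boundary
    for comp in comps:                        # recurse and attach children
        child = decompose(adj, comp, B | boundary, bags, tree_edges)
        tree_edges.append((root, child))
    return root

# 6-cycle (treewidth 2): the sketch produces a valid decomposition,
# though of larger width since boundaries are never trimmed.
edges = [(i, (i + 1) % 6) for i in range(6)]
adj = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
bags, tree_edges = [], []
decompose(adj, set(range(6)), set(), bags, tree_edges)
assert all(any(u in X and v in X for X in bags) for (u, v) in edges)
print(max(len(X) for X in bags) - 1)          # width of this decomposition
```

In the real reduction the boundary passed to each child is trimmed to the separator vertices adjacent to that component, which is what keeps the width bounded by O(kα log n).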
Tables

| Reference | Year | (α, β)-approximation | Total update time | Query time for dist-query | Weighted? | Directed? | Det? |
|---|---|---|---|---|---|---|---|
| [ES81]* | 1981 | (1, 0) | O(mn) | O(1) | | Directed | Det |
| [BHS07]* | 2002 | (1, 0) | Õ(n^3) | O(1) | | Directed | |
| [BHS07] | 2002 | (1+ǫ, 0) | Õ(n^2 √m / ǫ) | O(1) | | Directed | |
| [RZ12] | 2004 | (1+ǫ, 0) | Õ(mn/ǫ) | O(1) | | | |
| [FHN16]* | 2013 | (1+ǫ, 0) | Õ(mn/ǫ) | O(log log n) | | | Det |
| [Ber16]* | 2013 | (1+ǫ, 0) | Õ(mn log L / ǫ) | O(1) | Weighted | Directed | |
| [RZ11, FHNS15] | 2004 | (α, β): 2α + β < 4 | Ω(n^{3−o(1)}) | | | | |
| [BKS12] (cf. [FG19])* | 2008 | (2k−1, 0) | Õ(m) | Õ(n^{1+1/k}) | Weighted | | |
| [BvdBG+20] | 2020 | (n^{o(1)}, 0) | Õ(m) | Õ(n) | Weighted | | Random adaptive |
| [BR11] | 2011 | (2k−1+ǫ, 0) | O(n^{2+1/k+o(1)}) | O(k) | | | |
| [FHN16]* | 2013 | (2+ǫ, 0) or (1+ǫ, 2) | Õ(n^{2.5}/ǫ) | O(1) | | | |
| [ACT14] | 2014 | (2^{O(kρ)}, 0) | Õ(m n^{1/k}) | O(kρ) | | | |
| [FHN14a] | 2014 | ((2+ǫ)k−1, 0) | O(m^{1+1/k+o(1)} log L) | O(k^k) | Weighted | | |
| [Che18]* | 2018 | ((2+ǫ)k−1, 0) | O(m n^{1/k+o(1)} log L) | O(log log(nL)) | Weighted | | |
| This paper* | | (3 · 2^k, γ^{O(k)}) | Ô(n^{2.5+2/k} γ^{O(k)}) | O(log log n) | | | Det |

Table 1: Upper and lower bounds for decremental APSP. We denote by n the number of graph vertices and by m the initial number of edges; L is the ratio of largest to smallest edge length, and k is a positive integral parameter. We also use the parameters ρ = 1 + ⌈log n^{1−1/k} / log(m/n^{1−1/k})⌉ ≤ k, 0 < ǫ < 1, and γ = exp(log^{3/4} n). In the "Year" column, the year is according to the conference version of the paper. If the algorithm only works for unweighted graphs, or only for undirected graphs, then we leave the column "Weighted?" or "Directed?", respectively, blank. If the result assumes an oblivious adversary, then we leave the column "Det?" blank; otherwise, we write "Det" or "Random adaptive", meaning that the result is deterministic, or randomized but works against an adaptive adversary, respectively. The algorithms without the "*" mark are subsumed by other algorithms with the "*" mark, to within n^{o(1)} factors.
The fact that the algorithm in [ES81] can be extended to work in directed graphs was observed in [HK95]. The algorithms of [BKS12, FG19, BvdBG+20] are actually fully dynamic algorithms for maintaining spanners, but they automatically imply decremental APSP with large query time for dist-query.

| Reference | Year | Approx. | Total update time | Handles path-query? | Query time for path-query | Weighted? | Det? | Notes |
|---|---|---|---|---|---|---|---|---|
| [ES81]* | 1981 | exact | O(mn) | Yes | O(|P|) | | Det | |
| [RZ11, FHNS15] | 2004 | exact | Ω(mn^{1−o(1)}) | | | | | |
| [BR11] | 2011 | 1+ǫ | O(n^{2+o(1)}) | Yes | O(|P|) | | | |
| [FHN14b] | 2014 | 1+ǫ | O(n^{1.8+o(1)} + m^{1+o(1)}) | Yes | O(|P|) | | | |
| [FHN14a]* | 2014 | 1+ǫ | O(m^{1+o(1)} log L) | Yes | Õ(|P|) | Weighted | | |
| [BC16] | 2016 | 1+ǫ | Õ(n^2) | | | | Det | |
| [BC17] | 2017 | 1+ǫ | Õ(n^{5/4} √m) | | | | Det | |
| [Ber17] | 2017 | 1+ǫ | Õ(n^2 log L) | | | Weighted | Det | |
| [CK19] | 2019 | 1+ǫ | Ô(n^2 log L) | Yes | Õ(n log L) | Weighted | Random adaptive | Vertex deletions only |
| [GWN20] | 2020 | 1+ǫ | Ô(m √n) | | | | Det | |
| [BvdBG+20] | 2020 | 1+ǫ | Ô(m √n) | Yes | Õ(n) | | Random adaptive | |
| This paper* | | 1+ǫ | Ô(n^2 log L) | Yes | Ô(|P|) | Weighted | Det | |

Table 2: Upper and lower bounds for decremental SSSP. We denote by n the number of graph vertices and by m the initial number of edges; L is the ratio of largest to smallest edge length, and 0 < ǫ < 1; the dependence on ǫ is omitted for simplicity. We denote by P the (approximate) shortest path returned in response to path-query, and by |P| the number of edges in P. In the "Year" column, the year is according to the conference version of the paper. If a result works only on unweighted graphs, then we leave the column "Weighted?" blank. If a result assumes an oblivious adversary, then we leave the column "Det?" blank; otherwise, we write "Det" or "Random adaptive", meaning that the result is deterministic, or randomized but works against an adaptive adversary, respectively. The algorithms without the "*" mark are subsumed by other algorithms with the "*" mark, to within poly log n factors.
| Problem in unit edge-capacity setting | Previous best | This paper | Faster when |
|---|---|---|---|
| maximum s-t flow | Õ(m/ǫ) [She17] | Ô(n^2/ǫ^{O(1)}) | – |
| k-commodity concurrent flow | Õ(km/ǫ) [She17] | Ô(kn^2/ǫ^{O(1)}) | – |
| maximum k-commodity flow | O(k^{O(1)} m^{4/3}/ǫ^{O(1)}) [KMP12]; Õ(mn/ǫ^2) [Mad10] | Ô(kn^2/ǫ^{O(1)}) | m = ω(n^{1.5+o(1)}) and k = O((m/n)^2) |
| maximum bounded-cost s-t flow | Õ(m√n) [LS14] (exact); Õ(m^{10/7}) [CMSV17] (exact) | Ô(n^2/ǫ^{O(1)}) | m = ω(n^{1.5+o(1)}) |
| k-commodity concurrent bounded-cost flow | Õ(km√n/ǫ^{O(1)}) [LS14]+Lemma D.1; Õ(m(m+k)/ǫ^2) [Fle00] | Ô(kn^2/ǫ^{O(1)}) | m = ω(n^{1.5+o(1)}) and k = O((m/n)^2) |
| maximum k-commodity bounded-cost flow | Õ(m(m+k)/ǫ^2) [Fle00] | Ô(kn^2/ǫ^{O(1)}) | k = O((m/n)^2) |

Table 3: Best currently known running times of algorithms for flow and cut problems in undirected simple graphs with unit edge capacities. We use the Õ notation to hide factors that are polylogarithmic in n and B – the ratio of maximum to minimum edge cost. All algorithms obtain a (1+ǫ)-approximation for the corresponding problem, unless explicitly stated otherwise.

| Problem in vertex-capacitated setting | Previous best | This paper / follows from [CK19] | Faster when |
|---|---|---|---|
| maximum s-t flow | Ô(n^2/ǫ^{O(1)}) [CK19]; Õ(m√n) [LS14] (exact) | Ô(n^2/ǫ^{O(1)}) | – |
| k-commodity concurrent flow | Õ(km√n/ǫ^{O(1)}) [LS14]+Lemma D.1; Õ(mn/ǫ^2) [Mad10] | Ô(kn^2/ǫ^{O(1)}) | m = ω(n^{1.5+o(1)}) and k = O((m/n)^2) |
| maximum k-commodity flow | Õ(mn/ǫ^2) [Mad10] | Ô(kn^2/ǫ^{O(1)}) | k = O((m/n)^2) |
| maximum bounded-cost s-t flow | Õ(m√n) [LS14] (exact) | Ô(n^2/ǫ^{O(1)}) | m = ω(n^{1.5+o(1)}) |
| k-commodity concurrent bounded-cost flow | Õ(km√n/ǫ^{O(1)}) [LS14]+Lemma D.1; Õ(m(m+k)/ǫ^2) [Fle00] | Ô(kn^2/ǫ^{O(1)}) | m = ω(n^{1.5+o(1)}) and k = O((m/n)^2) |
| maximum k-commodity bounded-cost flow | Õ(m(m+k)/ǫ^2) [Fle00] | Ô(kn^2/ǫ^{O(1)}) | k = O((m/n)^2) |
| O(log n)-approximate sparsest cut | Ô(n^2) [CK19]; Õ(m√n) [LS14]+Lemma D.4 | Ô(n^2) | – |

Table 4: Best currently known algorithms for flow and cut problems in undirected graphs with vertex capacities. We use the Õ notation to hide factors that are polylogarithmic in n, C and B, where C is the ratio of maximum to minimum vertex capacity, and B is the ratio of maximum to minimum vertex cost. All algorithms obtain a (1+ǫ)-approximation for the corresponding problem, unless explicitly stated otherwise.

| Reference | Approximation | Time |
|---|---|---|
| FPT time: | | |
| [RS95] | 4 | 2^{O(k)} n^2 |
| [Ami01, Ami10] | 3 | 2^{O(k)} n |
| [Bod96] | 1 | k^{O(k)} n |
| [BDD+16] | 3 | 2^{O(k)} n log n |
| [BDD+16] | 5 | 2^{O(k)} n |
| Polynomial time: | | |
| [BGHK95] | O(log n) | poly(n) |
| [Ami01, Ami10] | O(log k) | k n polylog(nk) |
| [FHL08] | O(√log k) | poly(n) |
| [FLS+18] | O(k) | k^{O(1)} n |
| This paper / follows from [CK19] | O(log n) | n^{2+o(1)} |

Table 5: Algorithms for approximating the treewidth of a graph with n nodes and treewidth k.

References

[ACT14] Ittai Abraham, Shiri Chechik, and Kunal Talwar. Fully dynamic all-pairs shortest paths: Breaking the O(n) barrier. In LIPIcs-Leibniz International Proceedings in Informatics, volume 28. Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik, 2014. 3, 67

[AHdLT05] Stephen Alstrup, Jacob Holm, Kristian de Lichtenberg, and Mikkel Thorup. Maintaining information in fully dynamic trees with top trees. ACM Trans. Algorithms, 1(2):243–264, 2005. 33

[AHK12] Sanjeev Arora, Elad Hazan, and Satyen Kale. The multiplicative weights update method: a meta-algorithm and applications. Theory of Computing, 8(1):121–164, 2012. 54

[Ami01] Eyal Amir. Efficient approximation for triangulation of minimum treewidth. In UAI '01: Proceedings of the 17th Conference in Uncertainty in Artificial Intelligence, University of Washington, Seattle, Washington, USA, August 2-5, 2001, pages 7–15, 2001. 66, 69

[Ami10] Eyal Amir.
Approximation algorithms for treewidth. Algorithmica, 56(4):448–479, 2010. 66, 69

[AMV20] Kyriakos Axiotis, Aleksander Madry, and Adrian Vladu. Circulation control for faster minimum cost flow in unit-capacity graphs. arXiv preprint arXiv:2003.04863, 2020. 3

[BC16] Aaron Bernstein and Shiri Chechik. Deterministic decremental single source shortest paths: beyond the O(mn) bound. In Proceedings of the forty-eighth annual ACM symposium on Theory of Computing, pages 389–397. ACM, 2016. 1, 2, 6, 68

[BC17] Aaron Bernstein and Shiri Chechik. Deterministic partially dynamic single source shortest paths for sparse graphs. In Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 453–469. SIAM, 2017. 1, 2, 68

[BDD+16] Hans L. Bodlaender, Pål Grønås Drange, Markus S. Dregi, Fedor V. Fomin, Daniel Lokshtanov, and Michal Pilipczuk. A c^k n 5-approximation algorithm for treewidth. SIAM J. Comput., 45(2):317–378, 2016. 66, 69

[Ber16] Aaron Bernstein. Maintaining shortest paths under deletions in weighted directed graphs. SIAM Journal on Computing, 45(2):548–574, 2016. 2, 3, 67

[Ber17] Aaron Bernstein. Deterministic partially dynamic single source shortest paths in weighted graphs. In LIPIcs-Leibniz International Proceedings in Informatics, volume 80. Schloss Dagstuhl-Leibniz-Center for Computer Science, 2017. 1, 2, 3, 36, 38, 68

[BGHK95] Hans L. Bodlaender, John R. Gilbert, Hjálmtyr Hafsteinsson, and Ton Kloks. Approximating treewidth, pathwidth, frontsize, and shortest elimination tree. J. Algorithms, 18(2):238–255, 1995. 65, 66, 69

[BGS20] Aaron Bernstein, Maximilian Probst Gutenberg, and Thatchaphol Saranurak. Deterministic decremental reachability, SCC, and shortest paths via directed expanders and congestion balancing. 2020. To appear at FOCS'20. 4, 5, 11, 14

[BHI15] Sayan Bhattacharya, Monika Henzinger, and Giuseppe F. Italiano. Deterministic fully dynamic data structures for vertex cover and matching.
In Proceedings of the Twenty-Sixth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2015, San Diego, CA, USA, January 4-6, 2015, pages 785–804, 2015. 1

[BHN16] Sayan Bhattacharya, Monika Henzinger, and Danupon Nanongkai. New deterministic approximation algorithms for fully dynamic matching. In Proceedings of the 48th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2016, Cambridge, MA, USA, June 18-21, 2016, pages 398–411, 2016. 1

[BHS07] Surender Baswana, Ramesh Hariharan, and Sandeep Sen. Improved decremental algorithms for maintaining transitive closure and all-pairs shortest paths. J. Algorithms, 62(2):74–92, 2007. 3, 67

[BK19] Sayan Bhattacharya and Janardhan Kulkarni. Deterministically maintaining a (2 + ǫ)-approximate minimum vertex cover in O(1/ǫ^2) amortized update time. In Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2019, San Diego, California, USA, January 6-9, 2019, pages 1872–1885, 2019. 1

[BKS12] Surender Baswana, Sumeet Khurana, and Soumojit Sarkar. Fully dynamic randomized algorithms for graph spanners. ACM Trans. Algorithms, 8(4):35:1–35:51, 2012. 67

[Bod96] Hans L. Bodlaender. A linear-time algorithm for finding tree-decompositions of small treewidth. SIAM J. Comput., 25(6):1305–1317, 1996. 66, 69

[BR11] Aaron Bernstein and Liam Roditty. Improved dynamic algorithms for maintaining approximate shortest paths under deletions. In Proceedings of the Twenty-Second Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2011, San Francisco, California, USA, January 23-25, 2011, pages 1355–1365, 2011. 1, 3, 4, 11, 67, 68

[BvdBG+20] Aaron Bernstein, Jan van den Brand, Maximilian Probst Gutenberg, Danupon Nanongkai, Thatchaphol Saranurak, Aaron Sidford, and He Sun. Fully-dynamic graph sparsifiers against an adaptive adversary. CoRR, abs/2004.08432, 2020. 1, 2, 3, 4, 67, 68

[CGL+19] Julia Chuzhoy, Yu Gao, Jason Li, Danupon Nanongkai, Richard Peng, and Thatchaphol Saranurak.
A deterministic algorithm for balanced cut with applications to dynamic connectivity, flows, and beyond. CoRR, abs/1910.08025, 2019. 1, 11, 13, 49, 50, 51, 52

[Che18] Shiri Chechik. Near-optimal approximate decremental all pairs shortest paths. In Proc. of the IEEE 59th Annual Symposium on Foundations of Computer Science, 2018. 3, 4, 11, 67

[CK19] Julia Chuzhoy and Sanjeev Khanna. A new algorithm for decremental single-source shortest paths with applications to vertex-capacitated flow and cut problems. In STOC 2019, to appear, 2019. 1, 2, 3, 4, 6, 7, 8, 9, 10, 11, 36, 38, 47, 52, 54, 56, 59, 61, 63, 64, 65, 68, 69

[CMSV17] Michael B. Cohen, Aleksander Madry, Piotr Sankowski, and Adrian Vladu. Negative-weight shortest paths and unit capacity minimum cost flow in Õ(m^{10/7} log W) time. In Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2017, Barcelona, Spain, Hotel Porta Fira, January 16-19, pages 752–771, 2017. 55, 68

[CQ17] Chandra Chekuri and Kent Quanrud. Approximating the Held-Karp bound for metric TSP in nearly-linear time. In , pages 789–800, 2017. 1

[DHZ00] Dorit Dor, Shay Halperin, and Uri Zwick. All-pairs almost shortest paths. SIAM J. Comput., 29(5):1740–1759, 2000. 1, 3

[Din06] Yefim Dinitz. Dinitz' algorithm: The original version and Even's version. In Theoretical Computer Science, pages 218–240. Springer, 2006. 6

[ES81] Shimon Even and Yossi Shiloach. An on-line edge-deletion problem. Journal of the ACM (JACM), 28(1):1–4, 1981. 2, 3, 6, 67, 68

[FG19] Sebastian Forster and Gramoz Goranci. Dynamic low-stretch trees via dynamic low-diameter decompositions. In Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing, STOC 2019, Phoenix, AZ, USA, June 23-26, 2019, pages 377–388, 2019. 67

[FHL08] U. Feige, M.T. Hajiaghayi, and J.R. Lee. Improved approximation algorithms for minimum weight vertex separators. SIAM Journal on Computing, 38:629–657, 2008.
66, 69

[FHN14a] Sebastian Forster, Monika Henzinger, and Danupon Nanongkai. Decremental single-source shortest paths on undirected graphs in near-linear total update time. In , pages 146–155, 2014. 1, 3, 4, 11, 67, 68

[FHN14b] Sebastian Forster, Monika Henzinger, and Danupon Nanongkai. A subquadratic-time algorithm for decremental single-source shortest paths. In Proceedings of the Twenty-Fifth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2014, Portland, Oregon, USA, January 5-7, 2014, pages 1053–1072, 2014. 1, 68

[FHN16] Sebastian Forster, Monika Henzinger, and Danupon Nanongkai. Dynamic approximate all-pairs shortest paths: Breaking the O(mn) barrier and derandomization. SIAM Journal on Computing, 45(3):947–1006, 2016. Announced at FOCS'13. 1, 3, 4, 11, 40, 67

[FHNS15] Sebastian Forster, Monika Henzinger, Danupon Nanongkai, and Thatchaphol Saranurak. Unifying and strengthening hardness for dynamic problems via the online matrix-vector multiplication conjecture. In Proceedings of the Forty-Seventh Annual ACM on Symposium on Theory of Computing, STOC 2015, Portland, OR, USA, June 14-17, 2015, pages 21–30, 2015. 1, 2, 3, 67, 68

[Fle00] Lisa Fleischer. Approximating fractional multicommodity flow independent of the number of commodities. SIAM J. Discrete Math., 13(4):505–520, 2000. 54, 56, 63, 64, 68, 69

[FLS+18] Fedor V. Fomin, Daniel Lokshtanov, Saket Saurabh, Michal Pilipczuk, and Marcin Wrochna. Fully polynomial-time parameterized computations for graphs and matrices of low treewidth. ACM Trans. Algorithms, 14(3):34:1–34:45, 2018. 66, 69

[GK96] M. Goemans and J. Kleinberg. An improved approximation ratio for the minimum latency problem. Proceedings of the ACM-SIAM Symposium on Discrete Algorithms, 1996. 63

[GK98] Naveen Garg and Jochen Könemann. Faster and simpler algorithms for multicommodity flow and other fractional packing problems. In , pages 300–309, 1998. 54, 56

[GWN20] Maximilian Probst Gutenberg and Christian Wulff-Nilsen.
Deterministic algorithms for decremental approximate shortest paths: Faster and simpler. In Proceedings of the Fourteenth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 2522–2541. SIAM, 2020. 1, 2, 3, 11, 40, 68

[HdLT01] Jacob Holm, Kristian de Lichtenberg, and Mikkel Thorup. Poly-logarithmic deterministic fully-dynamic algorithms for connectivity, minimum spanning tree, 2-edge, and biconnectivity. J. ACM, 48(4):723–760, July 2001. 3, 6, 33

[HK95] Monika Rauch Henzinger and Valerie King. Fully dynamic biconnectivity and transitive closure. In Foundations of Computer Science, 1995. Proceedings., 36th Annual Symposium on, pages 664–672. IEEE, 1995. 6, 67

[IOO18] Yoichi Iwata, Tomoaki Ogasawara, and Naoto Ohsaka. On the power of tree-depth for fully polynomial FPT algorithms. In , pages 41:1–41:14, 2018. 66

[KKOV07] Rohit Khandekar, Subhash Khot, Lorenzo Orecchia, and Nisheeth K. Vishnoi. On a cut-matching game for the sparsest cut problem. Univ. California, Berkeley, CA, USA, Tech. Rep. UCB/EECS-2007-177, 6(7):12, 2007. 50, 51

[KMP12] Jonathan A. Kelner, Gary L. Miller, and Richard Peng. Faster approximate multicommodity flow using quadratically coupled flows. In Proceedings of the 44th Symposium on Theory of Computing Conference, STOC 2012, New York, NY, USA, May 19 - 22, 2012, pages 1–18, 2012. 64, 68

[KRV09] Rohit Khandekar, Satish Rao, and Umesh Vazirani. Graph partitioning using single commodity flows. Journal of the ACM (JACM), 56(4):19, 2009. 50, 65

[KT19] Ken-ichi Kawarabayashi and Mikkel Thorup. Deterministic edge connectivity in near-linear time. J. ACM, 66(1):4:1–4:50, 2019. 19

[LS14] Yin Tat Lee and Aaron Sidford. Path finding methods for linear programming: Solving linear programs in Õ(√rank) iterations and faster algorithms for maximum flow. In , pages 424–433, 2014. 3, 56, 68, 69

[Mad10] Aleksander Madry. Faster approximation schemes for fractional multicommodity flow problems via dynamic graph algorithms.
In Proceedings of the 42nd ACM Symposium on Theory of Computing, STOC 2010, Cambridge, Massachusetts, USA, 5-8 June 2010, pages 121–130, 2010. 1, 56, 59, 68, 69

[NS17] Danupon Nanongkai and Thatchaphol Saranurak. Dynamic spanning forest with worst-case update time: adaptive, Las Vegas, and O(n^{1/2−ǫ})-time. In Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2017, Montreal, QC, Canada, June 19-23, 2017, pages 1122–1129, 2017. 1, 5

[NSW17] Danupon Nanongkai, Thatchaphol Saranurak, and Christian Wulff-Nilsen. Dynamic minimum spanning forest with subpolynomial worst-case update time. In , pages 950–961, 2017. 1, 5

[RS95] Neil Robertson and Paul D. Seymour. Graph minors. XIII. The disjoint paths problem. Journal of Combinatorial Theory, Series B, 63(1):65–110, 1995. 66, 69

[RZ11] Liam Roditty and Uri Zwick. On dynamic shortest paths problems. Algorithmica, 61(2):389–401, 2011. 1, 3, 67, 68

[RZ12] Liam Roditty and Uri Zwick. Dynamic approximate all-pairs shortest paths in undirected graphs. SIAM Journal on Computing, 41(3):670–683, 2012. 3, 67

[San05] Piotr Sankowski. Subquadratic algorithm for dynamic shortest distances. In International Computing and Combinatorics Conference, pages 461–470. Springer, 2005. 2

[She17] Jonah Sherman. Area-convexity, l∞ regularization, and undirected multicommodity flow. In Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2017, Montreal, QC, Canada, June 19-23, 2017, pages 452–460, 2017. 64, 68

[ST83] Daniel Dominic Sleator and Robert Endre Tarjan. A data structure for dynamic trees. J. Comput. Syst. Sci., 26(3):362–391, 1983. 1

[SW19] Thatchaphol Saranurak and Di Wang. Expander decomposition and pruning: Faster, stronger, and simpler. In Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2019, San Diego, California, USA, January 6-9, 2019, pages 2616–2635, 2019. 5, 10, 13, 48

[TZ01] M. Thorup and U. Zwick. Approximate distance oracles.
Annual ACM Symposium on Theory of Computing, 2001. 3

[vdBNS19] Jan van den Brand, Danupon Nanongkai, and Thatchaphol Saranurak. Dynamic matrix inverse: Improved algorithms and matching conditional lower bounds. In , pages 456–480, 2019. 2

[Waj20] David Wajc. Rounding dynamic matchings against an adaptive adversary. In Proceedings of the 52nd Annual ACM SIGACT Symposium on Theory of Computing, STOC 2020, Chicago, IL, USA, June 22-26, 2020, pages 194–207, 2020. 1

[WN17] Christian Wulff-Nilsen. Fully-dynamic minimum spanning forest with improved worst-case update time. In Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing, pages 1130–1143. ACM, 2017. Full version at arXiv:1611.02864. 1

[Zwi98] Uri Zwick. All pairs shortest paths in weighted directed graphs - exact and almost exact algorithms. In