Deterministic Decremental SSSP and Approximate Min-Cost Flow in Almost-Linear Time
Aaron Bernstein, Maximilian Probst Gutenberg, Thatchaphol Saranurak
Aaron Bernstein∗ (Rutgers University New Brunswick, [email protected]), Maximilian Probst Gutenberg† (ETH Zurich, [email protected]), Thatchaphol Saranurak‡ (University of Michigan, [email protected])
Abstract
In the decremental single-source shortest paths problem, the goal is to maintain distances from a fixed source s to every vertex v in an m-edge graph undergoing edge deletions. In this paper, we conclude a long line of research on this problem by showing a near-optimal deterministic data structure that maintains (1 + ε)-approximate distance estimates and runs in m^{1+o(1)} total update time.

Our result, in particular, removes the oblivious adversary assumption required by the previous breakthrough result by Henzinger et al. [FOCS'14], which leads to our second result: the first almost-linear time algorithm for (1 − ε)-approximate min-cost flow in undirected graphs where capacities and costs can be taken over edges and vertices. Previously, algorithms for max flow with vertex capacities, or min-cost flow with any capacities, required super-linear time. Our result essentially completes the picture for approximate flow in undirected graphs.

The key technique of the first result is a novel framework that allows us to treat low-diameter graphs like expanders. This allows us to harness expander properties while bypassing shortcomings of expander decomposition, which almost all previous expander-based algorithms needed to deal with. For the second result, we break the notorious flow-decomposition barrier from the multiplicative-weight-update framework using randomization.

∗ Supported by NSF CAREER Grant 1942010.
† The author is supported by a start-up grant of Rasmus Kyng at ETH Zurich. Work was partially done while at the University of Copenhagen where the author was supported by Basic Algorithms Research Copenhagen (BARC), supported by Thorup's Investigator Grant from the Villum Foundation under Grant No. 16582.
‡ Work was partially done while at Toyota Technological Institute at Chicago.

Contents

I Extended Abstract
I.1 Introduction
I.2 Overview for Part II: Dynamic Shortest Paths
I.3 Overview of Part IV: Static Min-Cost Flow
I.4 Overview of Part III: Threshold-Subpath Queries
II Distance-only Dynamic Shortest Paths
II.1 Preliminaries
II.2 Main Components
II.3 Implementing Robust Cores
II.4 Implementing Covering
II.5 Implementing Approximate Balls
II.6 Putting Distance-Only Components Together
III Path-reporting Dynamic Shortest Paths
III.1 Preliminaries on Path-reporting Data Structures
III.2 Main Path-Reporting Components
III.3 Implementing Path-reporting Approximate Balls
III.4 Implementing Path-reporting Robust Cores
III.5 Putting Path-reporting Components Together
IV Approximate Min-Cost Flow
IV.1 Additional Preliminaries
IV.2 A Roadmap to the Reductions
IV.3 Near-pseudo-optimal MBCF via Path-reporting Decremental SSSP
IV.4 Near-capacity-fitted instance via Near-pseudo-optimal MBCF
IV.5 Near-Optimal MBCF via Near-pseudo-optimal MBCF in a Near-capacity-fitted instance
IV.6 Putting it all Together
A Appendix
A.1 Appendix of Part I
A.2 Appendix of Part II
A.3 Appendix of Part III
A.4 Appendix of Part IV
B Bibliography

Part I: Extended Abstract
I.1 Introduction
One of the most fundamental problems in graph algorithms is the single-source shortest paths (SSSP) problem: given a source vertex s and an undirected, weighted graph G = (V, E, w) with n = |V|, m = |E|, we want to find the shortest paths from s to every vertex in the graph. This problem has been studied since the 1950s [Shi54, D+59] and can be solved in linear time [Tho99].

A natural extension of SSSP is to consider a dynamic graph G that is changing over time. The most natural model is the fully dynamic one, where edges can be inserted into and deleted from G. Unfortunately, recent progress on conditional lower bounds [AW14, HKNS15, GWW20] essentially rules out any fully dynamic algorithm with small update and query times for maintaining distances from s. For this reason, most research has focused on the decremental setting, where the graph G only undergoes edge deletions. In addition to being a natural relaxation of the fully dynamic model, the decremental setting is extremely well-motivated for the SSSP problem in particular: a fast data structure for decremental SSSP can be used as a subroutine within the multiplicative weight update (MWU) framework to speed up algorithms for various (static) flow problems. Our main contribution is an almost-optimal data structure for decremental SSSP, which we in turn use to develop the first almost-optimal algorithms for approximate vertex-capacitated max flow and min-cost flow.

I.1.1 Previous Work
For our discussion of related work, we assume for (1 + ε)-approximations that ε > 1/polylog(n). We use Õ- and Ô-notation to suppress logarithmic and subpolynomial factors in n, respectively. We include a broader discussion of related work in Appendix A.1.1.

Decremental Single-Source Shortest Paths (SSSP).
A seminal result for decremental SSSP is an algorithm by Even and Shiloach [ES81] with total update time O(mn) over the entire sequence of updates in unweighted graphs. Conditional lower bounds indicate that this is near-optimal [RZ04, AW14, HKNS15, GWW20]. But Bernstein and Roditty [BR11] showed that there exist faster algorithms if one allows for a (1 + ε)-approximation on the distances (and the corresponding shortest paths). This line of research culminated in a breakthrough result by Forster, Henzinger and Nanongkai [HKN14a] (see also [LN20]) who showed how to maintain (1 + ε)-approximate SSSP in total update time Ô(m · polylog(W)), where W is the maximum weight ratio.

Towards Efficient Adaptive Data Structures.
Although it has near-optimal update time, the Ô(m) result of [HKN14a] suffers from a crucial shortcoming: it is randomized and only works against an oblivious adversary, i.e. an adversary that fixes the entire update sequence in advance. For this reason, the result of [HKN14a] cannot be used as a black-box data structure, and in particular cannot be incorporated into the MWU framework for flow algorithms mentioned above.

Over the last years, there has been significant effort towards designing adaptive, or even better deterministic, algorithms with comparable update time guarantees [BC16, BC17, Ber17, CK19, GWN20, BBG+20, CS20]. But the best total update time remains Ô(min{m√n, n²} · polylog(W)).

Max flow and Min-cost Flow.
Max flow and min-cost flow problems have been studied extensively since the 1950s [Dan51, FJF56, Din70, GT88, GR98, DS08, Mad13, LS14, CMSV17, LS20] and can be solved exactly in time Õ((m + n^{1.5}) · polylog(UC)) [vdBLL+20] and, for unit-capacity graphs, in Ô(m^{4/3} log(C)) [AMV20], where U is the maximum capacity ratio and C is the maximum cost ratio. Although enormous effort has been directed towards these fundamental problems, in directed sparse graphs the fastest algorithms are still far from achieving almost-linear time.

Therefore, an exciting line of work [CKM+11, LRS13, She13, KLOS14, RST14, Pen16] emerged with the goal of obtaining faster approximation algorithms on undirected graphs. This culminated in Õ(m · polylog(U))-time algorithms for (1 + ε)-approximate max flow [She13, KLOS14, Pen16] and Õ(m · polylog(C))-time algorithms for min-cost flow when all capacities are infinite [She17a, Li20, ASZ20], both of which require only near-linear time.

Limitations of Existing Approaches.
Unfortunately, none of the near-linear-time algorithms above handle vertex capacities or can be generalized to min-cost flow with finite capacities. This severely limits the range of applications of these algorithms.

This limitation seems inherent to the existing algorithms. The most successful approach for approximate max flow [She13, KLOS14] is based on obtaining fast n^{o(1)}-competitive oblivious routing schemes for the ℓ∞-norm (or the ℓ1-norm in the case of [She17b]). But for both oblivious routing in vertex-capacitated graphs [HKRL07] and min-cost flow oblivious routing [ABD+06, GHZ20] (by which we mean an oblivious routing scheme that is simultaneously competitive with the best routing in terms of the ℓ1-norm and the ℓ∞-norm, respectively) there are lower bounds of Ω(√n) on the possible competitiveness. This would lead to an additional polynomial overhead for these algorithms. There are also some alternative approaches to flow problems, but currently they do not lead to almost-linear time algorithms even for regular edge-capacitated max flow (see e.g. [CKM+11, LRS13, KPSW19]).
Max Flow and Min-Cost Flow via MWU and Decremental SSSP.
In order to overcome the limitations of the previous approaches, a line of attack emerged that was originally suggested by [Mad10] and was recently reignited by Chuzhoy and Khanna [CK19]. The idea is that the MWU framework for solving min-cost flow (see e.g. [GK07, Fle00]) can be sped up with a fast adaptive decremental SSSP data structure. In [CK19], Chuzhoy and Khanna obtained promising results via this approach: an algorithm for max flow with vertex capacities only, in Ô(n² polylog(U)) time. But this approach currently has two major challenges towards an Ô(m) time algorithm:

• Obtaining a fast adaptive decremental SSSP data structure has proven to be an extremely difficult challenge that even considerable effort could not previously resolve [BC16, BC17, Ber17, CK19, GWN20, BBG+20, CS20].

• Even given such a data structure, the MWU framework is designed to successively route flows along paths from a source s to a sink t. But this implies that the flow decomposition barrier applies to the MWU framework, which might have to send flow on Ω(mn) edges over the course of the algorithm (or Ω(n²) edges when only vertex capacities are present).

In this article, we overcome both challenges and complete this line of work.

I.1.2 Our Results

Decremental SSSP.
Our main result is the first deterministic data structure for the decremental SSSP problem in undirected graphs with almost-optimal total update time.
Theorem I.1.1 (Decremental SSSP). Given an undirected, decremental graph G = (V, E, w), a fixed source vertex s ∈ V, and any ε > 1/polylog(n), we give a deterministic data structure that maintains a (1 + ε)-approximation of the distance from s to every vertex t in V explicitly in total update time m^{1+o(1)} · polylog(W). The data structure can further answer queries for a (1 + ε)-approximate shortest s-to-t path π(s, t) in time |π(s, t)| · n^{o(1)}.

This result improves upon the state-of-the-art Ô(min{m√n, n²} · polylog(W)) total update time in the deterministic (or even adaptive) setting and resolves the central open problem in this line of research.

Mixed-Capacitated Min-Cost Flow.
Given our new deterministic SSSP data structure, it is rather straightforward, using MWU-based techniques from [Fle00, GK07, CS20], to obtain unit-capacity min-cost flow in almost-linear time. We are able to generalize these techniques significantly to work for arbitrary vertex and edge capacities.
Theorem I.1.2 (Approximate Mixed-Capacitated Min-Cost Flow). For any ε > 1/polylog(n), consider an undirected graph G = (V, E, c, u), where the cost function c and the capacity function u map each edge and vertex to a non-negative real. Let s, t ∈ V be source and sink vertices. Then, there is an algorithm that in m^{1+o(1)} log log C time returns a feasible flow f that sends a (1 − ε)-fraction of the max flow value from s to t, with cost at most that of the min-cost flow. The algorithm runs correctly with high probability. (We can also route an arbitrary demand vector; see the alternative statement in Appendix A.1.2.)
Our result resolves one of the three key challenges for the max flow / min-cost flow problem according to a recent survey by Madry [Mąd18]. (We point out, however, that our dependency on ε is significantly worse than formulated in [Mąd18].) The state-of-the-art algorithm for this problem [vdBLL+20] solves the exact version of the problem in directed graphs and hence obtains a significantly slower running time of Õ((m + n^{1.5}) · polylog(UC)), which is still super-linear in sparse graphs.
I.1.3 Applications
Our two main results have implications for a large number of interesting algorithmic problems. See Appendix A.1.3 for more detailed statements and a discussion of how to obtain the results below.
Applications of Mixed-Capacitated Min-Cost Flow.

• Using a reduction of [KRV09], our result for vertex-capacitated flow yields an O(log²(n))-approximation to sparsest vertex cut in undirected graphs in Ô(m) time. This is the first almost-linear-time algorithm for the problem with polylog(n) approximation.

• Combined with another reduction in [BGHK95], our result for sparsest vertex cut yields a polylog(n)-approximate algorithm for computing tree-width (and the corresponding tree decomposition) in Ô(m) time. This is again the first almost-linear-time algorithm with polylog(n) approximation, except for the special cases where the tree-width is itself sub-polynomial [FLS+18] or the graph is extremely dense [CS20]. (See other work on computing tree-width in [RS95, Bod96, AMI01, Ami10, BDD+16, BGHK95, FHL08, CK19].)

• … in Ô(m · tw(G_A) · log(1/ε)) time, improving upon the previous dependency on tw(G_A).

• Given any graph G = (V, E) (with associated incidence matrix B), ε > 1/polylog(n), a demand vector χ ∈ ℝⁿ, and (super-)linear functions c_e, c_v : ℝ_{≥0} → ℝ_{≥0} for each e ∈ E and v ∈ V, let f* be some flow minimizing

min_{B^⊤ f = χ} c(f) = Σ_{e∈E} c_e(|f_e|) + Σ_{v∈V} c_v((B^⊤|f|)_v).

Then, we can compute a (1 + ε)-approximate flow f with c(f) ≤ (1 + ε) · c(f*) that routes demand χ in almost-linear time. In particular, this is the first almost-linear time algorithm for flow in the weighted p-norm ‖W^{−1}f‖_p (since we can minimize ‖W^{−1}f‖_p^p by setting c_e(x) = (x/w_e)^p).
Applications of Decremental SSSP. There is currently a large gap between the best-known dynamic graph algorithms against oblivious adversaries and adaptive ones. Much of this gap stems from the problem of finding a deterministic counterpart to picking a random source. Plugging in either our decremental SSSP data structure as a black-box subroutine or some of the techniques that we obtain along the way, we obtain various new adaptive algorithms:

• Decremental (1 + ε)-approximate all-pairs shortest paths (APSP) in total update time Ô(mn). (Previous adaptive results only worked in unweighted graphs [HKN16, GWN20].)

• Decremental Ô(1)-approximate APSP with total update time Ô(m). Even in unweighted graphs, all previous adaptive algorithms for decremental APSP (for any approximation) had total update time at least Ω(n²) [HKN16, GWN20, CS20, EFGW20]; for weighted graphs they were even slower. Our result is analogous to the oblivious algorithm of Chechik, though she achieves a stronger O(log(n))-approximation [Che18].

• Fully-dynamic (2 + ε)-approximate all-pairs shortest paths with Ô(m) update time, matching the oblivious result of [Ber09].

I.1.4 Technical Contributions
From a technical perspective, our dynamic SSSP result in Theorem I.1.1 is by far our more significant contribution. It requires several new ideas, but we would like to highlight one technique in particular that is of independent interest and might have applications far beyond our result:
Key Technique: Converting any Low-Diameter Graph into an Expander
Several recent papers on dynamic graph algorithms start with the observation that many problems are easy to solve if the underlying graph G is an expander, as one can then apply powerful tools such as expander pruning and flow-based expander embeddings. All of these papers then generalize their results to arbitrary graphs by using expander decomposition: they decompose G into expander subgraphs and then apply expander tools separately to each subgraph. Unfortunately, expander decomposition necessarily involves a large number of crossing edges (or separator vertices) that do not belong to any expander subgraph and need to be processed separately. This difficulty has been especially prominent for decremental shortest paths, where expander-based algorithms had previously been unable to achieve near-linear update time [CK19, BPGS20, CS20, BBG+20].

Our key technical contribution is a framework that allows us to apply expander-based tools without resorting to expander decomposition. In a nutshell, we show that given any low-diameter graph G, one can in almost-linear time compute a capacity κ(v) for each vertex such that the total vertex capacity is small and such that the graph G weighted by capacities effectively corresponds to a weighted vertex expander. We can then apply tools such as expander pruning directly to the low-diameter graph G. This allows the algorithm to avoid expander decomposition and instead focus on the much simpler task of computing low-diameter subgraphs. We believe that this technique has the potential to play a key role in designing other dynamic algorithms against an adaptive adversary.

Breaking the Flow Decomposition Barrier for MWU.
We also briefly mention our technical contribution for the min-cost flow algorithm of Theorem I.1.2. Plugging our new data structure into the MWU framework is not by itself sufficient because, as discussed above, existing implementations of MWU necessarily encounter the flow decomposition barrier (see for example [Mad10]), as they repeatedly send flow down an entire s-t path. We propose a new (randomized) scheme that maintains an estimator of the flow. While previous schemes have used estimators for the weights [CQ18, CHPQ20, CQT20], we are the first to directly maintain only an estimator of the solution, i.e. of the flow itself. This poses various new problems to be considered: a more refined analysis of MWU is needed, a new type of query operation for the decremental SSSP data structure is necessary, and the flow estimator we compute is only a pseudoflow. We succeed in tackling these issues and provide a broad approach that might inspire more fast algorithms via the MWU framework.

I.1.5 A Paper in Three Parts
The article effectively contains three separate papers. Part II contains our decremental SSSP data structure (Theorem I.1.1). We consider this part to be our main technical contribution; it is entirely self-contained and can be read as its own paper on dynamic shortest paths. Part III shows how to extend the data structure from Part II to answer threshold-subpath queries, which are required for our min-cost flow algorithm. Finally, Part IV contains our min-cost flow result (Theorem I.1.2); it is also entirely self-contained and can be read separately. In fact, Part IV has zero overlap in techniques with the previous parts. The only reason we include it in the same paper is because it uses the data structure from Parts II/III as a black box.

Before Part II, we include a detailed overview of techniques in the sections below.

Overview of Techniques
Most of the overview focuses on the dynamic SSSP algorithm itself (Part II), as we consider this to be the main technical contribution. We give a short overview of the min-cost flow algorithm (Part IV) at the end. For ease of exposition, many of the definitions and lemmas in the overview sections sweep technical details under the rug; we restate our entire result more formally in the main body of the paper.
I.2 Overview for Part II: Dynamic Shortest Paths
We now outline our framework for the dynamic algorithm of Theorem I.1.1. Our algorithm builds upon many existing techniques: the monotone ES-tree (MES-tree) from [HKN14a], dynamic graph covers from [HKN16], the deterministic hopset construction from [GWN20], congestion balancing from [BPGS20], and others. We first review some of the existing techniques we need. After that, the main goal of the overview is to highlight the crucial building block that previous approaches were not able to solve, and to introduce our new techniques for solving it.
For simplicity, we assume throughout this entire section that the graph G is unweighted, so every update to G is just an edge deletion. The extension to graphs with positive weights involves a few technical adjustments, but is conceptually the same. We also assume that all vertices in G have maximum degree 3; see Proposition II.1.2 in the main body for justification.

I.2.1 Existing Techniques
Hop emulators.
A classic algorithm from 1981 by Even and Shiloach – denoted ES-tree – shows that decremental SSSP is easy to solve if we only care about short distances. Although we assume for this overview that the input graph G is unweighted, our algorithm will create new weighted graphs; a simple scaling technique extends the ES-tree to weighted graphs in the following way. Define dist^{(h)}(s, v) to be the length of the shortest s-v path that uses at most h edges. Then, ESTree(G, s, h) maintains a (1 + ε)-approximation to dist^{(h)}(s, v), for all v ∈ V, in total time Õ(mh) [Ber09].

Note that ESTree(G, s, h) returns a (1 + ε)-approximation to dist(s, v) if dist^{(h)}(s, v) ∼ dist(s, v) – that is, if there exists a (1 + ε)-approximate shortest path from s to v with at most h edges. A common technique in dynamic shortest paths is to construct a weighted graph H that has approximately the same shortest distances as G, but for which the above property holds for all pairs of vertices.
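To make the bounded-hop ES-tree concrete, here is a minimal sketch of the classic unweighted version (our own Python illustration; the class name and interface are not from the paper, and the paper's ESTree additionally handles weights via the scaling technique of [Ber09]):

```python
from collections import deque

class ESTree:
    """Even-Shiloach tree: maintains dist(s, v) for all v with dist <= h
    under edge deletions. Each vertex's level only increases, and it can
    increase at most h+1 times, which yields the O(m*h) total update time."""

    def __init__(self, n, edges, s, h):
        self.h = h
        self.adj = [set() for _ in range(n)]
        for u, v in edges:
            self.adj[u].add(v)
            self.adj[v].add(u)
        self.level = [h + 1] * n          # h+1 stands for "distance > h"
        self.level[s] = 0
        q = deque([s])
        while q:                          # BFS to depth h
            u = q.popleft()
            if self.level[u] == h:
                continue
            for v in self.adj[u]:
                if self.level[v] > self.level[u] + 1:
                    self.level[v] = self.level[u] + 1
                    q.append(v)

    def delete_edge(self, u, v):
        self.adj[u].discard(v)
        self.adj[v].discard(u)
        q = deque([u, v])
        while q:                          # re-settle unsupported vertices
            x = q.popleft()
            if self.level[x] == 0 or self.level[x] > self.h:
                continue
            best = min((self.level[y] + 1 for y in self.adj[x]),
                       default=self.h + 1)
            best = min(best, self.h + 1)
            if best > self.level[x]:      # x lost its supporting neighbor
                self.level[x] = best
                q.extend(self.adj[x])     # neighbors may lose support too
```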
Definition I.2.1. Given a graph G = (V, E), we say that a graph H = (V_H, E_H) is an (h, ε)-emulator of G if: V_G ⊆ V_H; for every pair (u, v) of vertices of G, dist_G(u, v) ≤ dist_H(u, v); and for every pair (u, v) of vertices of G, there exists a path P_H(u, v) in H such that w(P_H(u, v)) ≤ (1 + ε) · dist_G(u, v) and the number of edges on P_H(u, v) is at most h.

Observe that if H is an (h, ε)-emulator of G then running ESTree(H, s, h) returns (1 + ε)-approximate distances in G. All efficient algorithms for sparse graphs, including ours, follow the same basic approach: maintain an (h, ε)-emulator H, and then maintain ESTree(H, s, h). Observe that this returns (1 + ε)²-approximate distances in G. The time to run the ES-tree in H is Õ(mh). The harder step is maintaining the (h, ε)-emulator H.

There is a huge amount of work on maintaining hopsets in decremental graphs. If the adversary is oblivious, Henzinger et al. [HKN14a] showed an essentially optimal algorithm: they maintain an (n^{o(1)}, ε)-emulator in total time Ô(m). But as we discuss below, there is a crucial obstacle to obtaining such guarantees against an adaptive adversary. The state-of-the-art adaptive algorithm by Probst Gutenberg and Wulff-Nilsen [GWN20] still suffers from polynomial overhead: they maintain a (√n, ε)-emulator in Ô(m√n) total update time.

Layered construction of hop emulators.
The standard way of constructing a hop emulator is to add an edge (u, v) of weight dist(u, v) for some select pairs (u, v). The difficulty is that this requires knowing dist(u, v), which is precisely the problem we are trying to solve. To overcome this, many algorithms use a layered approach. Let γ be some parameter that is n^{o(1)} but bigger than polylog(n). The idea of layering is to first use a regular ES-tree to maintain dist(u, v) for some nearby pairs in G with dist(u, v) ≤ γ. By adding the corresponding edges (u, v) to an emulator, one can then construct a (diam(G)/γ, ε)-emulator H_1 of G; intuitively, shortest paths in H_1 have fewer edges than those in G by an n^{o(1)} factor. The next step is to construct an emulator H_2 that further compresses the number of edges on shortest paths. Observe that by construction of our emulator, for any pair of vertices (x, y), dist^{(γ)}_{H_1}(x, y) ∼ dist_G(x, y) as long as dist_G(x, y) ≤ γ². Thus, running ESTree(H_1, ·, γ) actually gives us distances up to γ² in G; these distances can then be used to construct a (diam(G)/γ², ε)-emulator H_2 of G (see Figure I.1). Continuing in this way, after q = log_γ(n) = o(log(n)) iterations, the emulator H_q will be an (n^{o(1)}, ε)-emulator, as desired.

Figure I.1: The graph G (black edges) has initially large diameter. But H_1 (black and blue edges) compresses the graph and reduces the number of edges on shortest paths by factor 2. Finally, H_2 (black, blue and red edges) compresses the graph even further (also roughly by factor 2).

Our algorithm follows the same layered approach. For ease of exposition, we focus this overview on the goal below, which corresponds to constructing the first hop emulator H_1 of the layering; the crucial obstacle to adaptive algorithms is already present in this simplified problem.

Goal I.2.2 (Hop Compression). Given a decremental graph G with large diameter and a parameter γ = n^{o(1)}, maintain a (diam(G)/γ, ε)-emulator of G with Ô(m) edges in total update time Ô(m).
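The control flow of the layering is simple; the following structural sketch (ours, not the paper's; `compress_once` is an assumed subroutine standing in for the covering-based construction of Section I.2.2) shows how the q levels are stacked:

```python
import math

def layered_emulators(G, n, gamma, compress_once):
    """Builds the hierarchy H_1, ..., H_q sketched above: each level reduces
    the number of edges on (approximate) shortest paths by a ~gamma factor,
    so after q = ceil(log_gamma(n)) levels the hop diameter is n^{o(1)}."""
    q = math.ceil(math.log(n) / math.log(gamma))
    levels = [G]                          # level 0 is the input graph itself
    for _ in range(q):
        # distances up to the next power of gamma are available via a
        # gamma-hop ES-tree on the previous level; they supply the new
        # emulator's edge weights
        levels.append(compress_once(levels[-1], gamma))
    return levels
```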
I.2.2 Dynamic Hop-Emulator via Covering

We now describe the basic structure of the emulator H that we construct.

Definition I.2.3 (Covering). (highly idealized version of Definition II.2.6) Fix parameters d, D, where d, D and D/d are all n^{o(1)}. We say that an algorithm maintains a covering of a decremental graph G if it maintains cores C_1, ..., C_q, where each C_i ⊂ V, with the following properties:

1. The algorithm can create new cores, but once a core C_i is created it only shrinks over time.
2. Each core C_i has weak diameter diam_G(C_i) := max_{x,y∈C_i} dist_G(x, y) ≤ d. (Actually, different cores have slightly different diameters, but we omit this complexity in the overview.)
3. Each vertex v is near some C_i; formally, it is in ball(C_i, d).
4. Throughout the entire course of the algorithm, each vertex v belongs to only n^{o(1)} different shells shell(C_i), where shell(C_i) = ball(C_i, D).

This covering is similar to one used by the previous algorithms of [HKN14a, Che18, GWN20], with the crucial difference that those papers used single vertices c_i instead of low-diameter cores C_i. We will need our more general version for our new approach to maintaining such a covering.

Emulator via Covering.
We outline why such a covering leads to the desired (γ/n^{o(1)}, ε)-emulator H, as outlined in Goal I.2.2. The algorithm maintains SSSP from each set C_i up to diameter D using an ES-tree. This is done by simply adding a dummy source s_i with edges to every vertex in C_i; if a vertex w is removed from C_i, the edge (s_i, w) is deleted (vertices are never added to C_i, by Property 1). All these ES-trees can be maintained efficiently because, by Property 4, the sum of |shell(C_i)| = |ball(C_i, D)| is small.

The algorithm then constructs the emulator H as follows. In addition to the vertices of G, H contains a vertex v_{C_i} for each core C_i; that is, V_H = V ∪ {v_{C_1}, ..., v_{C_q}}. For every vertex v ∈ C_i we add an edge (v, v_{C_i}) to E_H of weight d. Finally, for every vertex w ∈ shell(C_i) we add an edge (w, v_{C_i}) of weight dist(w, C_i). Note that dist(w, C_i) and the corresponding edge weight in H can change as vertices in G are deleted; if w leaves shell(C_i) then the edge (w, v_{C_i}) is deleted from H.
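As a static illustration of this construction (ours; in the actual data structure these edges are maintained dynamically by the ES-trees rooted at the dummy sources), the emulator edges can be generated as follows, where `dist_to_core[i]` is assumed to map each w ∈ shell(C_i) to dist_G(w, C_i):

```python
def build_covering_emulator(cores, dist_to_core, d, D):
    """Returns the emulator H of Section I.2.2 as {edge: weight}: one new
    vertex ('core', i) per core C_i, weight-d edges to core members, and
    weight-dist(w, C_i) edges to the remaining shell vertices."""
    H = {}
    for i, C in enumerate(cores):
        vc = ('core', i)                     # the new vertex v_{C_i}
        for v in C:
            H[(v, vc)] = d                   # core edge of weight d
        for w, dist in dist_to_core[i].items():
            if w not in C and dist <= D:
                H[(w, vc)] = dist            # shell edge of weight dist(w, C_i)
    return H
```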
Analysis. We now argue that hop distances in H are compressed by a factor of about D/n^{o(1)}. We will show that for any vertices u, v ∈ V with dist_G(u, v) = D/2, there is an approximate shortest path P_H(u, v) in H with only two edges. If this property holds, then given any shortest path π_G(x, y) in G with ℓ edges, one can break π_G(x, y) into 2ℓ/D segments of length D/2, traverse each segment in H using only 2 edges, and thus obtain an x-y path in H with 4ℓ/D edges.

To prove the above property for u, v with dist_G(u, v) = D/2, observe that Property 3 guarantees the existence of some v_{C_i} such that dist(u, C_i) ≤ d ≪ D. We thus have that dist(v, C_i) ≤ D/2 + d < D, so both (u, v_{C_i}) and (v, v_{C_i}) are edges in H. Now consider the 2-hop path P_H(u, v) = u → v_{C_i} → v in H. We have that w(P_H(u, v)) ≤ dist(u, C_i) + diam_G(C_i) + dist(C_i, v) ≤ d + d + (D/2 + d) < (1 + ε) · D/2, where the before-last inequality follows from Property 2.
I.2.3 The Crucial Building Block: Maintaining Low-Diameter Sets
The difficult part of maintaining a covering is maintaining the cores C_1, ..., C_q. After the algorithm initially computes some core C_i, it might need to remove vertices from C_i as G undergoes deletions, in order to maintain the property that C_i has small diameter (Property 2). We can abstract this goal from the specifics of cores/shells and define the following crucial building block:

Definition I.2.4 (Crucial Building Block; highly simplified version of Robust Core in Definition II.2.5). Say that we are given a set K^init ⊂ V(G) with weak diameter diam_G(K^init) ≤ d = n^{o(1)} and that the graph G is subject to edge deletions. The goal is to maintain a set K ⊆ K^init with the following properties:

• Decremental Property: the set K is decremental, i.e. it only shrinks over time.
• Diameter Property: diam_G(K) ≤ d · n^{o(1)}.
• Scattering Property: for every vertex v ∈ K^init \ K, |ball_G(v, d) ∩ K^init| ≤ (1 − δ_scatter) · |K^init|, where δ_scatter = 1/n^{o(1)}.

We refer to the above building block as the Robust Core problem. An algorithm for Robust Core leads to a relatively straightforward algorithm for efficiently maintaining the covering in Definition I.2.3. Loosely speaking, when a new core C_i is initially created it corresponds to K^init, while the larger graph G in Robust Core corresponds to shell(C_i). The set K then corresponds to the core C_i that is maintained as the graph undergoes edge deletions. The decremental property of Robust Core corresponds to Property 1 of Definition I.2.3. The diameter property corresponds to Property 2. Finally, the scattering property ensures that every time a vertex leaves a core, its neighborhood shrinks by a significant fraction; intuitively, such shrinking can only occur a small number of times in total, so a vertex can only participate in a small number of cores (and hence a small number of shells), which ensures Property 4.

We now leave aside the details of cores and shells and focus on the abstraction of Robust Core.
Previous Approaches to Robust Core (and their Limitations). Although it is not typically stated as such, the Robust Core problem distills the most basic version of a building block that is solved by almost all decremental SSSP algorithms for sparse graphs. This building block has also served as the primary obstacle to progress on this problem. We briefly outline previous approaches.
• Non-Adaptive Adversaries: Random Source.
Robust Core is quite simple to solve with a randomized algorithm that assumes an oblivious adversary: pick a random source k ∈ K^init and maintain ball(k, d) ∩ K^init using an ES-tree. The algorithm keeps this ES-tree as long as |ball(k, d) ∩ K^init| ≥ |K^init|/2. Note that this property ensures that if a vertex v leaves the ES-tree, i.e. if dist(k, v) becomes larger than 7d, then ball(v, d) ∩ K^init is disjoint from ball(k, d) ∩ K^init, so v can be removed from K according to the scattering property. Whenever |ball(k, d) ∩ K^init| becomes too small, the algorithm removes k from K and picks a different random source. One can show that the algorithm only needs one single source in expectation, and O(log(n)) with high probability. Loosely speaking, the argument is that because the source k is chosen at random from K, the fact that ball(k, d) ∩ K^init has become small implies that, in expectation, ball(v, d) ∩ K^init has become small for half the vertices v ∈ K, which in turn implies that ball(v, d) ∩ K^init has become small for all vertices in K, so by the scattering property, all vertices can be removed from K. (A toy simulation of this random-source routine is sketched after this list.)

Although the idea of picking a random source is very simple, it is also extremely powerful and leads to a total update time of Ô(m) for Robust Core. Unfortunately, it has zero utility against adaptive adversaries, because the randomness of the source is no longer independent from the sequence of updates, so the adversary can easily disconnect the source while leaving the rest of the core intact. This one technique, along with a natural generalization to random hitting sets, accounts for much of the gap between adaptive and oblivious algorithms for dynamic SSSP, as well as for related problems such as dynamic strongly connected components (see e.g. [BHS07, RZ08, RZ12, HKN14a, Ber16, CHI+16, Che18, BPWN19, GW20, BGWN20]).
• Adaptive Adversaries: Many Sources.
The best-known adaptive algorithms for this building block are much slower. Since one can no longer pick a random source, two recent algorithms run an ES-tree from every vertex in K [BC17, GWN20]. A trivial implementation leads to total update time O(mn), but those papers use sophisticated density arguments to limit the size of the ES-trees. These ideas lead to total update time Ô(m√n) [GWN20], but as noted in both papers, Ô(m√n) is a hard barrier for this approach.

• Adaptive Adversaries: Rooting at an Expander.
Some very recent work on related problems [CK19, CS20, BPGS20] suggests that one can go beyond Ô(m√n) with expander tools. Say that the set K^init is a φ-(vertex)-expander for φ = 1/n^{o(1)} (see Definition I.2.5 in the subsection below). Any φ-expander has small diameter. Because expanders are highly robust to deletions, the algorithm can efficiently maintain a large expander X ⊆ K^init using standard expander pruning (see Theorem I.2.6 in the subsection below). The algorithm then maintains ball(X, d) and removes from K any vertex that is not in this ball. Intuitively, the algorithm replaces a random source with a deterministic expander, as both have the property of being robust to deletions.

The issue is that even though K^init has small diameter, it might not be an expander. The natural solution is to maintain a decomposition of the graph into expanders and handle each expander separately. Unfortunately, such a decomposition must necessarily allow for up to φn separator vertices that do not belong to any expander. If φ = 1/n^{o(1)} then the number of separator vertices is large, and it is unclear how to handle them efficiently. We suspect that, setting φ to be a small polynomial, one could combine this expander approach with the density arguments from [BC17, GWN20] mentioned above to achieve total update time O(mn^{1/2−δ}). But because φ is a polynomial, such an approach could not lead to Ô(m) total update time.
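For the first (oblivious-adversary) approach above, the following toy simulation spells out the random-source routine (our illustration: it recomputes balls by BFS after every deletion instead of maintaining an ES-tree, so it shows the logic but not the Ô(m) running time; the radius 7d follows the description in the bullet):

```python
import random
from collections import deque

def ball(adj, src, radius):
    """All vertices within BFS distance `radius` of src (unweighted)."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        if dist[u] == radius:
            continue
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return set(dist)

def robust_core_oblivious(adj, K_init, d, deletions):
    K = set(K_init)
    k = random.choice(sorted(K))                  # random source in the core
    for u, v in deletions:
        adj[u].discard(v)
        adj[v].discard(u)
        # A vertex farther than 7d from k has ball(v, d) disjoint from
        # ball(k, d), so the scattering property lets us drop it from K.
        K &= ball(adj, k, 7 * d)
        while len(ball(adj, k, d) & set(K_init)) < len(K_init) / 2:
            K.discard(k)                          # source's ball got too small:
            if not K:                             # re-pick a random source
                return K
            k = random.choice(sorted(K))
    return K
```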
I.2.4 Turning a Non-expander into an Expander

We now outline our approach to the crucial building block above. In a nutshell, we show the first dynamic algorithm that uses expander tools while bypassing expander decomposition. As above, we assume that G has constant degree.

Expander Preliminaries.
In this overview, expander always refers to a vertex expander. Our expansion factor will always be 1/n^{o(1)}.

Definition I.2.5.
Consider an unweighted, undirected graph G and a set X ⊆ V. We say that (L, S, R) is a vertex cut with respect to X if L, S, R partition V(G), |L ∩ X| ≤ |R ∩ X|, and there are no edges between L and R. We say that (L, S, R) is a sparse vertex cut with respect to X if |S| ≤ |L ∩ X|/n^{o(1)}. We say that X ⊆ V(G) forms an expander in G if there exist no sparse vertex cuts with respect to X. (Note that setting X = V(G) gives the standard definitions of sparse vertex cut and vertex expansion.)

The key feature of expanders for our purposes is that they are robust to edge deletions. In particular, if X^init initially forms an expander in G, then even after a large number of edge deletions, a large X ⊆ X^init that forms an expander in G is guaranteed to exist, and X can be maintained efficiently. This is known as expander pruning.

Theorem I.2.6 (Pruning in Vertex Expanders [SW19]). Let G be a graph subject to edge deletions, and consider a set X^init ⊆ V(G) such that X^init initially forms an expander in G. There exists an algorithm Prune(G, X^init) that can process up to |X^init|/n^{o(1)} edge deletions while maintaining a decremental set X ⊆ X^init such that |X| ≥ |X^init|/2 and X forms an expander in G. The total running time is Ô(|X^init|).

Capacitated Expanders.
We argued above that Robust Core can be solved efficiently using standard tools if K^init is initially an expander, because each edge deletion would have low impact. But the Robust Core problem only has the much weaker guarantee that K^init has small (weak) diameter. (The theorem from [SW19] is actually stated for edge expanders, and for technical reasons related to expander embedding, we only prune edge expanders in the main body of the paper as well. But we effectively use it to prune vertex expanders, so for simplicity that is how we state it for the overview.)

Consider the following example: K^init = V(G) and G consists of two expanders A, B with a single crossing edge (u, v). Note that G (and hence K^init) has small diameter but is far from being an expander. In particular, it is clear that u, v serve as bottlenecks, in that deleting the O(1) edges incident to u and v would immediately disconnect the graph and cause the scattering property to hold for all vertices. By contrast, deleting all edges incident to some random vertex z ∈ A would have low impact, because A is an expander.

We thus see that in a non-expander, some vertices are much more critical than others. Quantitatively speaking, the vertices u and v are about n times more critical than a random vertex z ∈ A, since their deletions would scatter n vertices. The O(1) neighbors of u and v are also highly critical, since deleting all of their incident edges would again scatter the graph. Criticality then drops off exponentially as we go further from u and v. See also Figure I.2.

Figure I.2: Criticality in two graph examples (where red is very critical, yellow not critical). a) two expanders A, B joined by a single edge (u, v). The vertices u, v are extremely critical but parts of the graph further away are relatively uncritical. b) an expander graph. Here, no vertex is critical.

Our key contribution is an algorithm that computes a criticality score κ(v) for each vertex such that the graph weighted by κ effectively corresponds to an expander. We now formalize this notion.

Definition I.2.7.
Let G be a graph with vertex capacities κ, where κ(v) ≥ 1. For any X ⊆ V(G), we say that (L, S, R) forms a sparse capacitated vertex cut with respect to X, κ if (L, S, R) is a vertex cut with respect to X and Σ_{v∈S} κ(v) ≤ |L ∩ X|/n^{o(1)}. We say that X, κ forms a capacitated expander in G if there are no sparse capacitated vertex cuts with respect to X, κ.

Note that any connected graph can be made into a capacitated vertex expander by setting κ(v) = n for all vertices in V. But we want to keep the total vertex capacity small, because our algorithm will decrementally maintain a capacitated expander using pruning, and pruning on capacitated expanders incurs update time proportional to the capacities. Intuitively, the reason for this is that by the definition of a capacitated expander, to disconnect β vertices from the graph the adversary has to delete edges with Σ_e κ(e) = Ω̂(β).

Lemma I.2.8 (Capacitated Expander Pruning – implied by Lemma II.3.10). Say that we are given a decremental graph G, a set X^init ⊆ V(G) and a function κ such that (X^init, κ) forms a capacitated expander in G. Then, there is an algorithm Prune(G, X^init, κ) that can process any sequence of edge deletions in G that satisfies Σ_e κ(e) = O(|X^init|/n^{o(1)}), while maintaining a decremental set X ⊂ X^init such that |X| ≥ |X^init|/2 and (X, κ) remains a capacitated expander in G. The total running time is Ô(|X^init|).

To prove Lemma I.2.8, we do not need to modify the standard pruning from Theorem I.2.6. Instead, we are able to show that one can replace the capacitated expander by a regular uncapacitated one, on which we can then run standard pruning.
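To make Definition I.2.7 concrete, here is a direct checker for a candidate cut (our illustration; the 1/n^{o(1)} sparsity threshold is abstracted into an explicit parameter):

```python
def is_sparse_capacitated_cut(L, S, R, X, kappa, edges, threshold):
    """Checks whether (L, S, R) is a sparse capacitated vertex cut with
    respect to (X, kappa), per Definition I.2.7: (L, S, R) must be a vertex
    cut w.r.t. X, and the capacity of S must be below threshold * |L ∩ X|."""
    assert not (L & S or S & R or L & R), "L, S, R must be disjoint"
    assert all(not ({u, v} <= L | R and (u in L) != (v in L))
               for u, v in edges), "no edge may run between L and R"
    if len(L & X) > len(R & X):
        L, R = R, L                    # convention: |L ∩ X| <= |R ∩ X|
    return sum(kappa[v] for v in S) <= threshold * len(L & X)
```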
Ensuring Small Total Capacity.
Note that our capacitated pruning terminates after O(|X^init|/n^{o(1)}) total edge capacity is deleted, at which point we need to reinitialize the pruning algorithm if we want to keep maintaining an expander. Thus, to avoid doing many reinitializations, we want the average edge capacity to be small. Note that because we assume the main graph G has constant degree, Σ_{e∈E(G)} κ(e) ∼ Σ_{v∈V(G)} κ(v). Our goal can thus be summarized as follows: given a graph G and some core K, find a capacity function κ that turns K into a capacitated expander while minimizing Σ_{v∈V(G)} κ(v). One of the highlights of our paper is the following structural lemma, which shows that this minimum Σ_{v∈V(G)} κ(v) is directly related to the (weak) diameter of K. This lemma is implicitly proved in Section II.3 of Part II; for an explicit proof see Appendix A.1.5.

Lemma I.2.9 (Small Capacity Sum for Small Diameter). Given a graph G and any K ⊆ V(G), there exists a capacity function κ(v) such that (K, κ) forms a capacitated vertex expander in G and Σ_{v∈V(G)} κ(v) = Õ(|K| · diam_G(K)), where diam_G(K) := max_{x,y∈K} dist_G(x, y). This bound is tight: there exist G, K such that any feasible function κ necessarily has Σ_{v∈V(G)} κ(v) = Ω̂(|K| · diam_G(K)).

Figure I.3: On a path, roughly half the vertices are very critical. This is no coincidence: the path graph has large diameter.

Unfortunately, we do not know how to compute the function κ guaranteed by Lemma I.2.9 in near-linear time. Instead, we compute a slightly relaxed version which only guarantees expansion for relatively large cuts – i.e. cuts where L ∩ K is large with respect to K. We can show that the pruning of Lemma I.2.8 also works with this relaxed notion of capacitated vertex expansion.

Lemma I.2.10 (Computing the Capacities). Given a graph G and any K ⊆ V(G), one can compute in Ô(n · diam_G(K)) time a capacity function κ(v) such that Σ_{v∈V(G)} κ(v) = Ô(|K| · diam_G(K)) and such that there are no sparse capacitated vertex cuts (L, S, R) with respect to K for which |L ∩ K| ≥ |K|/n^{o(1)}.
We later sketch a proof for Lemma I.2.10. But first let us show how capacitated expanders can beused to solve Robust Core (Definition I.2.4); see pseudocode below.
Initialization of Robust Core.
First we apply Lemma I.2.10 to compute a capacity function κ such that ( K init , κ ) forms a capacitated expander in G . Recall that G has constant degree. SinceRobust Core assumes that diam G ( K init ) = b O (1), the running time of Lemma I.2.10 is b O ( n ) andwe have P e ∈ E ( G ) κ ( e ) = Θ( P v ∈ V ( G ) κ ( v )) = b O ( n ). Using capacitated expander pruning (LemmaI.2.8), we can maintain an expander X such that ( X, κ ) forms a capacitated expander in G . Wethen define our solution K to Robust Core as follows: initially K = K init , and we remove from K any vertex that leaves to be ball G ( X, d ). K clearly satisfies the decremental property of RobustCore. We can show that K satisfies the diameter property because K ⊆ ball( X, d ) and X itself12as low diameter because (loosely speaking) X forms a capacitated expander in G . Finally, aslong as we have that | X | ≥ | K init | /n o (1) , the core K satisfies the scattering property because v leaves K only if it leaves ball( X, d ), at which point ball( v, d ) ∩ X = ∅ . We have thus shown that K continues to be a valid solution to Robust Core as long as | X | is large. By capacitated expanderpruning (Lemma I.2.8), | X | will be sufficiently large as long as the total capacity of deleted edgessatisfies P e κ ( e ) = O ( n/n o (1) ). Algorithm 1:
RobustCore ( G, K init ) K ← K init n ← | K init | while | K | ≥ n − o (1) do // Each iteration is a single phase Compute κ as in Lemma I.2.10 such that ( K, κ ) is a capacitated expander in G . // Recall that the algorithm preservers the Monotonicity Invariant. while X ⊆ K from running Prune ( G, K, κ ) has size ≥ n − o (1) do Maintain ball( X, d ) using an ES-tree and remove every vertex leaving ball( X, d )from K . // | X | can become too small after adversary deletes b Ω( n ) edgecapacity. Once this happens, algorithm restarts the outer while loopwith the current K . Remove all vertices from K and terminate. // Only executed once | K | ≤ n − o (1) , soall remaining vertices satisfy scattering property. Maintaining Robust Core.
At some point, however, the capacity of deleted edges will be toolarge, and | X | may become too small. Consider the moment right before the deletion that causes | X | to become too small. At this moment, K is still a valid core, and hence has small (weak) diameter.Moreover, we can assume that | K | = Θ( K init ), since otherwise the entire core is scattered and wecan terminate Robust Core; formally, we are able to show that by the scattering property, if | K | becomes very small compared to | K init | we can simply remove every remaining vertex in K . Thealgorithm now essentially restarts the entire process above, but with K instead of K init . Thatis, it computes a new capacity function κ such ( K, κ ) forms a capacitated expander in G . Since K has small diameter, the running time is again b O ( n ) and we again have P v ∈ V ( G ) κ ( v ) = b O ( n ).The algorithm now uses capacitated pruning to maintain a new expander X ⊆ K and againremoves from K any vertex that leaves ball( X, d ). As before, K remains a valid core as long as | X | ≥ | K | /n o (1) ; here we use the fact that | K | = Θ( | K init | ) to ensure the scattering property. Thus,by the guarantees of pruning, K remains valid until the adversary deletes at least Ω( n/n o (1) ) moreedge capacity. Endgame of Robust Core.
Once the adversary deletes enough edge capacity, the algorithmagain computes a new function κ for the current K . We refer to each such recomputation of κ as anew phase . The algorithm continues executing phases until eventually | K | becomes much smallerthan K init ; as mentioned above, the algorithm can then remove all remaining vertices from | K | andterminate. Technically speaking, we show that not only does (
X, κ ) form an expander, but also that κ actually yields ashort-path embedding of an expander into X . nalysis of Algorithm 1. We argued above that each phase requires b O ( n ) time. The only stepleft is thus to show that the total number of phases is b O (1). To see this, assume for the moment thatalthough κ is recomputed between phases, every κ ( v ) is monotonically increasing. The argumentis now that since we always maintain a core K with small diameter, Lemma I.2.10 guarantees thatthe function κ we compute to make ( K, κ ) a capacitated expander in G always has P v ∈ K init κ ( v ) = b O ( n ). Since κ is monotonically increasing, this implies that the total vertex capacity over all phasesis b O ( n ), so the total edge capacity is also b O ( n ). But a phase can only terminate after at least n/n o (1) edge-capacity has been deleted, leading to at most b O ( n ) / ( n/n o (1) ) = b O (1) phases.To facilitate the above analysis, our algorithm will ensure that κ ( v ) is indeed monotonic. Notethat the algorithm only ever changes κ at the beginning of a new phase. Invariant I.2.11 (Monotonicity Invariant) . Let κ new be the new capacity function computed at thebeginning of some phase of Algorithm RobustCore and let κ old be the capacity function computedin the previous phase. Then, we always have κ old ( v ) ≤ κ new ( v ) ∀ v ∈ V . We now briefly outline our algorithm for computing κ in Lemma I.2.10. As we will see, themonotonicity invariant naturally follows from our approach and requires no extra work to ensure.Intuitively, since G is decremental, it only becomes further from a vertex expander over time, so itis not surprising that the vertices of G only become more critical. Lemma I.2.10 (Computing the Capacities) . Given graph G and any K ⊆ V ( G ) , one can computein b O ( n diam G ( K )) time a capacity function κ ( v ) such that P v ∈ V ( G ) κ ( v ) = b O ( | K | diam G ( K )) andsuch that there are no sparse capacitated vertex cuts ( L, S, R ) with respect to K for which | L ∩ K | ≥ K/n o (1) . Proof sketch of Lemma I.2.10
We follow the basic framework of congestion balancing intro-duced by the authors in [BPGS20]. But unlike in [BPGS20], we do not need to assume the initialgraph is an expander. Our overall framework thus ends up being significantly more powerful, bothconceptually and technically. See Section II.3.2 in the main body for details.Recall that we only maintain a relaxed expansion that applies to balanced cuts (
L, S, R ) forwhich | L ∩ K | ≥ | K | /n o (1) . The high-level idea of congestion balancing is quite intuitive. Weinitially set the the κ ( v ) to be small enough that P v ∈ V κ ( v ) = O ( | K | ). Then, we repeatedly findan arbitrary balanced cut ( L, S, R ) such that P v ∈ S κ ( v ) < | L ∩ K | /n o (1) . If no such cut exists, then( K, κ ) forms a capacitated expander in G , as desired. Else, increase the expansion of this cut bydoubling all vertex capacities in S .Note that this approach naturally satisfies the Monotonicity Invariant. When the algorithmneeds to compute a new κ new for a new set K , it starts the process with the old capacities κ old ( v ).If there are no sparse balanced cuts using κ old ( v ), the algorithm can just set κ new = κ old . Otherwisethe algorithm only changes capacities by doubling them, so κ new ≥ κ old .The crux of the analysis is showing that such a doubling step can only occur b O (diam G ( K ))times. This allows us to bound the total running time; it also guarantees that we always have P v ∈ V ( G ) κ ( v ) = b O ( | K | diam G ( K )), because it is easy to check that each doubling step increases thetotal capacity of S by at most | L ∩ K | < | K | . To bound the number of doubling steps, we use apotential function Π( G ) that corresponds to (loosely speaking) the min-cost embedding in G , wherethe cost of a vertex v is log( κ ( v )). We are able to show that each doubling step increases Π( G ) by b Ω( | K | ), and that Π( G ) is always b O ( | K | diam G ( K )), thus giving the desired bound of b O (diam G ( K ))doubling steps. 14 .2.6 A Hierarchy of Emulators We have outlined above how to solve the crucial building block Robust Core, which can in turn beused to maintain a covering of G (Definition I.2.3), which allows us to achieve Goal I.2.2 – that is, tocompress hop distances by a n o (1) factor. But decremental SSSP can only be solved efficiently whenall hop distances are small, so we need to apply this compression multiple times. In particular,we have a hierarchy of emulators, where H compresses hop distances in G , H compresses hopdistances in H , and so on.This layering introduces several new challenges. The biggest one is that all the tools aboveassume a decremental graph, and even though G is indeed decremental, the graphs H i may haveboth edge and vertex insertions. For example, the vertex set of H also includes core vertices foreach core in the covering of G , and when some core C in G becomes scattered, new cores are addedto cover the vertices previously in C , so new core vertices and edges are added to H . Fortunately,these insertions have low impact on distances in H because H is emulating a decremental graph G . We thus refer to H as being decremental with low-impact insertions . Since the algorithm formaintaining H sees H as its underlying graph, all of our tools must be extended to work in thissetting.The fact of emulators having low-impact insertions is a common problem in previous dynamicalgorithms as well. While there exist algorithms that are able to extend the ES tree to workin such a setting (see especially [HKN14a]), extending Robust Core and congestion balancing issignificantly more challenging. 
Conceptually speaking, the main challenge lies with the scatteringproperty: if G has insertions, then ball( v, d ) can both shrink and grow, so a vertex can alternatebetween being scattered and unscattered.One of our key technical contributions is a more general framework for analyzing congestionbalancing that naturally extends to graphs with low-impact insertions. At a high-level, congestionbalancing from [BPGS20] defined a potential function Π( G ) on the input graph G (see LemmaI.2.10). The issue is that if G has insertions, then Π( G ) can actually decrease, which invalidatesthe analysis. To resolve this, we show that there exists a graph b G which is entirely decremental andyet has exactly the same vertex-cuts as G . We then show that the analysis of congestion balancinggoes through if we instead look at Π( b G ). We note that the algorithm never has to construct b G ;it is used purely for analysis. The formal analysis is highly non-trivial and we refer the reader toSection II.3.2 for more details. Returning the Path.
The hierarchy of emulators also creates unique difficulties in path-reporting (Part III). We discuss this more at the end of the overview section, after we introduce the threshold-subpath queries that we need in our min-cost flow algorithm.
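Before moving on to the flow part, we summarize the congestion-balancing loop from the proof sketch of Lemma I.2.10 above in code form (our illustration; `find_sparse_balanced_cut` is an assumed oracle that the paper implements with flow machinery, while the initial capacities and the doubling rule are exactly the ones described above):

```python
def congestion_balancing(V, K, find_sparse_balanced_cut, kappa_old=None):
    """Computes capacities making (K, kappa) a (relaxed) capacitated expander.
    Starting from the previous phase's capacities preserves the Monotonicity
    Invariant I.2.11; each doubling step happens only O~(diam_G(K)) times."""
    if kappa_old is None:
        kappa = {v: len(K) / len(V) for v in V}   # total capacity O(|K|)
    else:
        kappa = dict(kappa_old)                   # monotone: never decreased
    while True:
        cut = find_sparse_balanced_cut(K, kappa)  # balanced: |L ∩ K| large
        if cut is None:                           # no sparse balanced cut left
            return kappa
        L, S, R = cut
        for v in S:
            kappa[v] *= 2                         # double capacities across S
```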
I.3 Overview of Part IV: Static Min-Cost Flow
We now outline our flow algorithm for Theorem I.1.2. The techniques in Part IV have zero overlap with those from Parts II and III: the only relation is that Part IV uses the dynamic SSSP data structure from Parts II and III as a black box.
Simplifying Assumptions.
For ease of exposition, this overview section focuses on the problem of vertex-capacitated max flow, and ignores costs entirely. We note that no almost-linear time algorithm is known even for this simpler problem. The extension to costs follows quite easily.
Let G = ( V, E, u ) be the input graph, where u ( x ) is the capacity of vertex x . Let s be a fixed source and t be a fixed sink. For any path P , define λ ( P ) to be the minimum vertexcapacity on P . The goal is to compute a flow vector f ∈ R E + that satisfies standard flow constraints: ∀ x / ∈ { s, t } , in f ( x ) = out f ( x ) (flow conservation) and in f ( x ) ≤ u ( x ) (feasibility). We define thevalue of f to the total flow leaving s . Our goal is compute a (1 − (cid:15) )-optimal flow. I.3.1 Existing Technique: Multiplicative Weight Updates
We follow the framework of Garg and Könemann for applying MWU to maximum flow [GK07]. We assume for simplicity that the approximation parameter ε is a constant. Loosely speaking, the framework is as follows:
Algorithm 2:
MWU(G, s, t):
 1  Initialize the flow: f ← 0.
 2  Create a weight function w : V → ℝ₊ with initial vertex weights around 1/poly(m).
 3  while true do
 4      Compute a (1 + ε)-approximate shortest path π(s, t) with respect to the weights w.
 5      if w(π(s, t)) > 1 then exit the while-loop.
 6      λ ← λ(π(s, t)).
 7      foreach edge e ∈ π(s, t) do f(e) ← f(e) + λ.
 8      /* For every u(v) units of flow entering v, the weight w(v) is increased by an e^ε ≈ (1 + ε) factor: */
 9      foreach vertex v ∈ π(s, t) do w(v) ← w(v) · exp(ελ/u(v)).
10  return f scaled down by a factor Θ(log(n)).

At a very high level, the algorithm increases the weights of vertices that receive a lot of flow relative to their capacity, so that the next shortest path is less likely to use that vertex. Using a primal-dual analysis (see e.g. [GK07]), one can show that the returned flow is feasible and (1 − ε)-approximate.

Following the framework by Madry [Mad10], Chuzhoy and Khanna [CK19] used a dynamic SSSP data structure to avoid recomputing a new shortest s-t path π(s, t) from scratch with each iteration of the while-loop. (A dynamic SSSP structure for edge-weighted graphs can easily be converted into one for vertex-weighted ones.) Because vertex weights only increase, a decremental SSSP data structure suffices. Note also that the MWU framework requires the data structure to work against an adaptive adversary, because the updates to the data structure (the weight increases) depend on the (1 + ε)-shortest path returned by the data structure.
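For concreteness, the following toy implementation (ours, not the paper's) instantiates Algorithm 2 with vertex weights and plain Dijkstra; the initial weights and the final scaling constant are only indicative, and [GK07] calibrates them exactly:

```python
import heapq, math

def mwu_max_flow(n, edges, cap, s, t, eps=0.1):
    """Toy Garg-Könemann-style MWU for approximate max flow with VERTEX
    capacities (illustration only: it recomputes shortest paths from scratch
    instead of using a decremental SSSP data structure, so it is slow but
    shows the weight-update logic of Algorithm 2)."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    m = max(len(edges), 2)
    w = [m ** (-1.0 / eps)] * n            # initial weights ~ 1/poly(m)
    f = {}                                  # flow per (directed) edge
    while True:
        dist = [math.inf] * n               # Dijkstra with vertex weights:
        par = [-1] * n                      # path cost = sum of w over vertices
        dist[s] = w[s]
        pq = [(w[s], s)]
        while pq:
            d, x = heapq.heappop(pq)
            if d > dist[x]:
                continue
            for y in adj[x]:
                if d + w[y] < dist[y]:
                    dist[y] = d + w[y]
                    par[y] = x
                    heapq.heappush(pq, (dist[y], y))
        if dist[t] > 1:                     # termination test: w(path) > 1
            break
        path, x = [], t                     # recover the s-t path
        while x != -1:
            path.append(x)
            x = par[x]
        path.reverse()
        lam = min(cap[v] for v in path)     # bottleneck vertex capacity
        for a, b in zip(path, path[1:]):
            f[(a, b)] = f.get((a, b), 0.0) + lam
        for v in path:                      # multiplicative weight update
            w[v] *= math.exp(eps * lam / cap[v])
    scale = math.log(m) / (eps * eps)       # indicative Theta(log n) scaling
    return {e: val / scale for e, val in f.items()}
```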
The Flow Decomposition Barrier. In addition to computing the paths π(s, t), the MWU framework also adjusts every vertex/edge on the path. Thus, if 𝒫 is the set of all s-t paths returned by the algorithm, then the total running time of MWU is: [total update time of decremental SSSP] + [∑_{P∈𝒫} |P|]. Previous work bounds the second quantity in the following way. Say that we have weighted vertex capacities. On the one hand, each vertex v receives at most O(u(v) log n) flow in total, since the flow f/Θ(log(n)) returned in step 10 is guaranteed to be feasible. On the other hand, each path π(s, t) sends at most λ(π(s, t)) = min_{v∈π(s,t)} u(v) flow, which might only "fill up" the minimizer vertex. There might thus be O(n log(n)) paths in total, each of length at most n, so ∑_{P∈𝒫} |P| = O(n² log(n)). An example where this behavior is apparent is given in Figure I.4 below.
Figure I.4: Example of a graph with vertex capacities where running the standard MWU algorithm results in ∑_{P∈𝒫} |P| = Θ(n² log(n)).
In the above figure, each path π(s, t) has λ(π(s, t)) = 1, so the algorithm only sends one unit of flow at a time. It is not hard to check that each of the red v_i will be used O(log n) times, for a total of n log(n) paths; each path has length n, so ∑_{P∈𝒫} |P| = Θ(n² log(n)). One can similarly show that in edge-capacitated graphs, there are examples with ∑_{P∈𝒫} |P| = Ω(mn log(n)). For unit edge capacities, ∑_{P∈𝒫} |P| is at most O(m log(n)). Up to the extra log(n) factor, these bounds precisely correspond to what is known as the flow-decomposition barrier for maximum flow [GR98]. The previous state-of-the-art for adaptive decremental SSSP has total update time Õ(n²) [CK19, CS20, BBG+], which yields an Õ(n²)-time algorithm for approximate min-cost flow in graphs with unit edge capacities or in vertex-capacitated graphs. But these results did not lead to any improvement for edge-capacitated graphs, precisely because of the flow-decomposition barrier. Similarly, our new data structure immediately yields an Ô(m)-time min-cost flow algorithm for unit-capacity graphs (itself a new result), but on its own cannot make progress in graphs with general vertex or edge capacities. To get Ô(m) time for general capacities, we need to modify the MWU framework. Ours is the first MWU-based algorithm for max flow to go beyond the flow-decomposition barrier. I.3.2 Our New Approach: Beyond the Flow-Decomposition Barrier
The basic idea of our approach is to design a new MWU framework with the following property:
Invariant I.3.1.
In our MWU framework, whenever the algorithm sends flow from x to y on edge (x, y), it sends at least Ω̂(u(y)) flow. Combined with the fact that the final flow through any vertex v is at most O(u(v) log(n)), and the fact that MWU never cancels flow (because it does not deal with a residual graph), it is easy to see that Invariant I.3.1 guarantees that the total number of times the algorithm sends flow into any particular vertex y is Ô(1), so ∑_{P∈𝒫} |P| = Ô(n). Achieving this invariant requires making changes to the MWU framework.
Pseudoflow. Consider Figure I.4 again. Consider some path π(s, t) chosen by the MWU algorithm. This path has λ(π(s, t)) = 1. The algorithm can send one unit of flow into some red v_i, but in order to preserve the invariant above, it cannot send 1 unit of flow down (v_i, y). As a result, the flow we maintain is only a pseudoflow: it is capacity-feasible, but does not obey the flow conservation constraints. We will show, however, that we can couple the computed pseudoflow to a near-optimal flow.
Definition I.3.2 (pseudo-optimal flow: simplified version of Definition IV.2.1). We say that a pseudoflow f̂ is (1 − ε)-pseudo-optimal if there exists a valid flow f such that
• f is a (1 − ε)-optimal flow.
• for every v ∈ V, |in_f(v) − in_f̂(v)| ≤ ε·u(v).
We later show that there exists a black-box reduction from computing a (1 − ε)-optimal flow to computing a (1 − ε)-pseudo-optimal flow. But first, we focus this overview on computing a (1 − ε)-pseudo-optimal flow.
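As a direct transcription of the coupling condition in Definition I.3.2, the following small check (ours, using the same dictionary-of-inflows representation as the earlier sketches) verifies the second bullet; whether f itself is (1 − ε)-optimal must of course be certified separately.

# Sketch: check |in_f(v) - in_f_hat(v)| <= eps * u(v) for every capacitated vertex v.
def is_pseudo_optimal_coupling(in_f, in_f_hat, cap, eps):
    return all(abs(in_f.get(v, 0.0) - in_f_hat.get(v, 0.0)) <= eps * cap[v]
               for v in cap)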
The Ideal Flow and the Estimated Flow. At each step, the algorithm will implicitly compute a (1 + ε)-approximate shortest path π(s, t), but to preserve Invariant I.3.1, it will only add flow on some edges of π(s, t). We denote the resulting pseudoflow f̂. To show that f̂ is (1 − ε)-pseudo-optimal, we will compare it to the ideal flow f, which sends λ(π(s, t)) flow on every edge in π(s, t), as in the standard MWU framework. Our approach thus needs to ensure that f̂ is always similar to f. Randomized Flow.
Consider Figure I.4 again. Say that MWU computes a long path sequence P_1, P_2, .... For the first path P_1, the algorithm can simply increase f̂(s, v_i) and not send any flow on the other edges; we will still have |in_f(y) − in_f̂(y)| = 1 ≪ ε·u(y), and the same will hold for the vertices after y. But as more and more paths are processed, in_f(y) will increase, so the algorithm must eventually send flow on f̂ through y. The natural solution is to send u(y) = n flow on one of the edges (v_j, y) after u(y) paths P_i go through y, so that in_f(y) = in_f̂(y) = u(y). (Vertex v_j will then have much more than u(v_j) = 1 flow leaving it, but this is allowed by Definition I.3.2, which only constrains inflow.) The problem is that in a more general graph there is no way to tell which paths π(s, t) go through y, since the algorithm avoids looking at the paths explicitly. To resolve this issue, we introduce randomization. For every implicit flow path π(s, t), f̂ always sends flow u(x) into every vertex x on π(s, t) with capacity u(x) ≤ λ(π(s, t)); with probability 1/2 it additionally sends u(x) flow into every x with u(x) ≤ 2λ(π(s, t)), with probability 1/4 into every x with u(x) ≤ 4λ(π(s, t)), and so on. (In reality, we use an exponential distribution rather than a geometric one, and we scale all flow down by n^{o(1)} to ensure concentration bounds.) It is not hard to see that the expected flow f̂(v, x) into x is λ(π(s, t)) = f(v, x).
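The following toy sketch (ours) illustrates the randomized-threshold idea in its simplified geometric form: draw one multiplier γ per implicit path and push u(x) units into every path vertex whose capacity is at most γ·λ. With the distribution below, the expected inflow added to a vertex is within a constant factor of λ; the paper's actual algorithm uses an exponential distribution and an n^{o(1)} down-scaling instead.

import random

def sample_gamma():
    """Return 2^k with probability 2^(-k-1) for k = 0, 1, 2, ..."""
    gamma = 1
    while random.random() < 0.5:
        gamma *= 2
    return gamma

def push_randomized_flow(path_vertices, cap, lam, inflow):
    """Account the pseudoflow pushed into vertices of one implicit path
    with bottleneck capacity lam (inflow: {vertex: accumulated inflow})."""
    gamma = sample_gamma()
    for x in path_vertices:
        if cap[x] <= gamma * lam:
            inflow[x] = inflow.get(x, 0.0) + cap[x]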
Changes to the MWU framework. Our algorithm thus makes the following changes to the MWU framework above. Each iteration (implicitly) computes a (1 + ε)-approximate shortest path as before, but instead of sending flow on every edge, the algorithm first picks a parameter γ from the exponential distribution, and then in f̂ it sends u(y) flow through every edge (x, y) for which u(y) ≤ γλ(π(s, t)). The algorithm uses a weight function ŵ, which follows the same multiplicative update procedure as before, except that it depends on f̂ rather than f. (The shortest path π(s, t) in each iteration is computed with respect to ŵ.) The main difficulty in the analysis is that even though f̂ tracks f in expectation, f actually depends on earlier random choices in f̂, because f̂ determines the vertex weights ŵ, which in turn affect the next (1 + ε)-approximate path π(s, t) used in f. We are able to use concentration bounds for martingales to show that f̂ ∼ f with high probability. We are also able to show that even though the flow f is no longer in perfect sync with the weight function ŵ, the chosen paths π(s, t) are still good enough, and the final flow f is (1 − ε)-optimal, so f̂ is (1 − ε)-pseudo-optimal. Finally, as mentioned above, we show a black-box conversion from a (1 − ε)-pseudo-optimal flow to a regular (1 − ε)-optimal flow. For our modified algorithm to run efficiently, we need to be able to return all edges (x, y) on π(s, t) for which u(y) ≤ γ·λ(π(s, t)), in time proportional to the number of such edges. We are able to extend our data structure from Part II to answer such queries (see below); the MWU algorithm then uses this data structure as a black box.
(1 − ε)-Optimal Flow from (1 − ε)-Pseudo-Optimal Flow. Re-inspecting Definition I.3.2, we observe that for vertices where in_f(v) ∼ u(v), the second property |in_f(v) − in_f̂(v)| ≤ ε·u(v) implies that we have a (1 + ε)-multiplicative approximation of the amount of in-flow for v. Unfortunately, the in-flow of v might be significantly lower than u(v). But if in_f̂(v) ≪ u(v), the same property implies that in_f(v) ≪ u(v), so most of the capacity of v is not required for producing a (1 − ε)-optimal flow. We therefore suggest a technique that we call capacity-fitting, where we repeatedly use our algorithm for pseudo-optimal flow to reduce the total vertex capacities by a factor of roughly 2. We terminate with a pseudoflow that has (loosely speaking) the following property: for each vertex v, either in_f̂(v) ∼ u(v) or the capacity of v is negligible. Once this property is achieved, we can route the surplus flow in the pseudoflow by scaling the graph appropriately and then computing a single instance of regular maximum flow (only edge capacities, no costs) using the algorithm of [She17a]. Comparison to Previous Work.
There have been several recent papers that avoid updatingevery weight within the MWU framework by using a randomized threshold to maintain an estimatorinstead [CQ18, CHPQ20, CQT20]. The main difference of our algorithm is that to overcome theflow-decomposition barrier, we need to maintain an estimator not just of the weights but of thesolution (i.e. the flow) itself. This introduces several new challenges: we need a modified analysisof the MWU framework that allows us to compare the estimated flow ˆ f with the ideal flow f ; ourMWU algorithm only computes a pseudoflow ˆ f , which then needs to be converted into a real flow;and in order to update ˆ f efficiently, we need to introduce the notion of threshold-subpath queriesand show that our new decremental SSSP data structure can answer them efficiently. I.4 Overview of Part III: Threshold-Subpath Queries
In order to use it in the min-cost flow algorithm of Part IV, we need our SSSP data structure tohandle the following augmented path queries.
Definition I.4.1 (Informal Version of Definition III.0.1). Consider a decremental weighted graph G where each edge (u, v) has a fixed steadiness σ(u, v) ∈ {1, 2, ..., τ}, with τ = o(log(n)). Note that while weights in G can increase over time, the σ(u, v) never change. For any path π, let σ_{≤j}(π) = {(u, v) ∈ π | σ(u, v) ≤ j}. We say that a decremental SSSP data structure can answer threshold-subpath queries if the following holds:
• At all times, every vertex v corresponds to some (1 + ε)-approximate s-v path π(s, v); we say that the data structure implicitly maintains π(s, v).
• Given any query(v, j), the data structure can return σ_{≤j}(π(s, v)) in time |σ_{≤j}(π(s, v))|; crucially, the path π(s, v) must be the same regardless of which j is queried. (Note that query(v, τ) corresponds to a standard path query.)
We briefly outline how threshold-subpath queries are used by our min-cost flow algorithm. Recall that in our modified framework, each iteration of MWU implicitly computes a (1 + ε)-approximate shortest path π(s, t), but instead of modifying all the edges on π(s, t), it picks a random threshold γ and only looks at edges (x, v) on π(s, t) for which u(v) ≤ γλ(π(s, t)). We thus want a data structure that returns all such low-capacity edges in time proportional to their number. This is exactly what a threshold-subpath query achieves. Here, π(s, t) corresponds to the path implicitly maintained by the data structure. Every edge steadiness σ(x, v) is a function of u(v), and thus remains fixed throughout the MWU algorithm. Loosely speaking, for some η = n^{o(1)}, if u(v) ∈ [1, η) then σ(x, v) = 1, if u(v) ∈ [η, η²) then σ(x, v) = 2, and so on. (The actual function is a bit more complicated, and σ(x, v) can also depend on the cost of vertex v, not just the capacity.) Since the buckets increase geometrically, the number of possible steadiness levels τ will be small. Note that because each steadiness captures a range of capacities, when we use the data structure in our MWU algorithm, we only achieve the slightly weaker guarantee that we return edges (x, v) on π(s, t) for which u(v) ≲ γλ(π(s, t)); this weaker guarantee works essentially as well for our analysis. We show in Part III that our SSSP data structure from Part II can be extended to handle threshold-subpath queries, while still having Ô(m) total update time. We briefly outline our techniques below.
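The geometric capacity buckets can be made concrete with the following small sketch (ours; the capacity-only simplification described above, not the paper's exact steadiness function, and the function names are illustrative):

import math

def steadiness(capacity, eta):
    """sigma = 1 for u(v) in [1, eta), 2 for [eta, eta^2), and so on (eta > 1)."""
    return 1 + max(0, int(math.floor(math.log(capacity, eta))))

def threshold_level(gamma, lam, eta):
    """Smallest j such that every capacity u(v) <= gamma*lam has steadiness <= j;
    a query(v, j) on the implicit path then returns (up to one bucket of slack)
    exactly the edges with u(v) <~ gamma*lam."""
    return steadiness(max(1.0, gamma * lam), eta)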
Techniques. Threshold-subpath queries introduce several significant challenges. Recall that the algorithm iteratively computes emulators G = H_0, H_1, ..., H_q, where each edge of H_i corresponds to a short path in H_{i−1}, and the final emulator H_q is guaranteed to have small hop distances. The algorithm can then estimate the s-v distance by computing the shortest path in H_q. It is not too hard to "unfold" the path in H_q into a path π(s, v) in the graph G by successively moving down the emulators. But to answer augmented path queries efficiently, we need to avoid unfolding emulator edges for which the corresponding path in G does not contain any low-steadiness edges. We thus need a way of determining, for every emulator edge, the minimum steadiness in its unfolded path in G; we refer to this as the steadiness of the emulator edge. The issue is that if each edge (x, y) in H_i corresponds to an arbitrary (1 + ε)-approximate path in H_{i−1}, then the steadiness of emulator edges will be extremely unstable, and impossible to maintain efficiently. We overcome this problem by carefully defining, for each emulator edge (x, y) in H_i, a specific critical path in H_{i−1} corresponding to (x, y), which ensures that the steadiness of (x, y) is robust, and allows us to maintain the entire hierarchy efficiently. A second challenge is that any edge (u, v) ∈ E may participate in many emulator edges, with the result that when we unfold the emulator edges, the resulting path in G might not be simple – i.e. it might contain many copies of an edge (u, v). Through a careful analysis of our emulator hierarchy, we are able to show that any path achieved via unfolding is close-to-simple, in that every (u, v) ∈ E appears at most n^{o(1)} times. We then show that MWU can be extended to handle such close-to-simple paths. See Part III for details.
Part II: Distance-only Dynamic Shortest Paths
In this part, we give the proof for our main result: a deterministic decremental SSSP data structure in almost-linear time.
Theorem I.1.1 (Decremental SSSP). Given an undirected, decremental graph G = (V, E, w), a fixed source vertex s ∈ V, and any ε > 1/polylog(n), we give a deterministic data structure that maintains a (1 + ε)-approximation of the distance from s to every vertex t in V explicitly in total update time m^{1+o(1)} polylog W. The data structure can further answer queries for a (1 + ε)-approximate shortest s-to-t path π(s, t) in time |π(s, t)| · n^{o(1)}. Remark:
In Part II, we focus exclusively on answering approximate distance queries. Extending the data structure to return an approximate shortest path in time |π(s, t)| · n^{o(1)} is not too difficult but requires some additional work. We do not spell out the details because these path queries are a special case of the more powerful (and much more involved) augmented path queries detailed in Part III. We start by providing the necessary preliminaries for the part, then give a brief overview introducing the main components used in our proof, and finally give a road map for the rest of the part. II.1 Preliminaries
Graphs.
We let a graph H refer to a weighted, undirected graph with vertex set denoted by V(H) of size n_H, edge set E(H) of size m_H, and weight function w_H : E(H) → ℝ_{>0}. We define the aspect ratio W of a graph to be the ratio of the largest to the smallest edge-weight in the graph. We say that H is a dynamic graph if it is undergoing a sequence of edge deletions and insertions and edge weight changes (also referred to as updates), and refer to version t of H, or H at stage t, as the graph H obtained after the first t updates have been applied. We say that a dynamic graph H is decremental if the update sequence consists only of edge deletions and edge weight increases. For a dynamic graph H, we let m_H refer to the total number of edges in H over all updates (we assume that the update sequence is finite). In this article, we denote the (decremental) input graph by G = (V, E, w) with n = |V| and m = |E|. In all subsequent definitions, we often use a subscript to indicate which graph we refer to; however, when we refer to G, we often omit the subscript. Basic Graph Properties.
For any graph H, and any vertex v ∈ V(H), we let E(v) denote the set of edges incident to v. For any set S ⊆ V(H), we let E(S) = ⋃_{v∈S} E(v). Finally, for any two disjoint sets A, B we let E(A, B) denote all edges with one endpoint in A, the other in B. We let deg_H(v) denote the degree of v, i.e. the number of edges incident to v. If the graph is weighted, we let vol_H(v) denote the weighted degree or volume of vertex v, i.e. vol_H(v) ≜ ∑_{e∈E(v)} w_H(e). For S ⊆ V(H), we also use deg_H(S) (vol_H(S)) to denote the sum over the degrees (volumes) of all vertices in S. If H is dynamic, we define the all-time degree of v to be the total number of edges that are ever incident to v over the entire update sequence of H. (An edge (u, v) that is inserted, deleted and inserted again contributes twice to the all-time degree of v.) Functions.
Say that we have a function f : D → ℝ for some domain D. Given any S ⊆ D we often use the following shorthand: f(S) ≜ ∑_{x∈S} f(x). For example, the definitions of vol_H(S) and deg_H(S) above follow this shorthand, and w_H(E(A, B)) denotes the sum of edge-weights in E(A, B). Expanders.
Let H be a graph with positive real weights w_H. Let 0 < φ < 1. We say that H is a φ-expander if for every S ⊂ V(H) we have that w_H(E(S, V(H) \ S)) ≥ φ · min{vol_H(S), vol_H(V(H) \ S)}.
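As a small illustration of the definition (our own sketch, not from the paper), the following checks the φ-expansion inequality for a single candidate cut S of a weighted graph; verifying expansion of the whole graph would of course require checking all cuts.

def cut_satisfies_expansion(edges, vertices, S, phi):
    """edges: {(u, v): weight}; returns True iff the cut weight of S is
    at least phi * min(vol(S), vol(V \\ S))."""
    S = set(S)
    vol = {v: 0.0 for v in vertices}
    for (u, v), w in edges.items():
        vol[u] += w
        vol[v] += w
    cut_weight = sum(w for (u, v), w in edges.items() if (u in S) != (v in S))
    vol_S = sum(vol[v] for v in S)
    vol_rest = sum(vol[v] for v in vertices if v not in S)
    return cut_weight >= phi * min(vol_S, vol_rest)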
Distances and Balls. We let dist_H(u, v) denote the distance from vertex u to vertex v in a graph H, and denote by π_{u,v,H} the corresponding shortest path (we assume uniqueness by implicitly referring to the lexicographically smallest shortest path). We also define distances more generally for sets of vertices, where for any sets X, Y ⊆ V(H) we denote by dist_H(X, Y) = min_{u∈X, v∈Y} dist_H(u, v) (whenever X or Y are singleton sets, we sometimes abuse notation and simply input the element of X or Y instead of using set notation). We define the ball of radius d around a vertex v as ball_H(v, d) = {w | dist_H(v, w) ≤ d}, and the ball of radius d around a set X ⊂ V as ball_H(X, d) = {w | dist_H(X, w) ≤ d}. We say that a set X w.r.t. a decremental graph H is a decremental set if at each stage of H, X forms a subset of its previous versions. If H is decremental, then for any X ⊆ V, we have that ball_H(X, d) is a decremental set, since distances can only increase over time in a decremental graph. Finally, given any graph H and a set X ⊂ V(H), we define the weak diameter diam_H(X) ≜ max_{u,v∈X} dist_H(u, v). Hypergraphs.
In this part, we also use the generalization of graphs to hypergraphs (but we will point out explicitly whenever we use a hypergraph). Let H = (V, E) be a hypergraph, i.e. elements e in E, called hyperedges, are now sets of vertices, i.e. e ⊆ V (possibly of size larger than two). We say that two vertices u, v ∈ V are adjacent if there is a hyperedge e ∈ E containing both u and v. If v ∈ e, then v is incident to e. For any vertex set S ⊆ V, the subhypergraph H[S] induced by S (or the restriction of H to S) is such that V(H[S]) = S and E(H[S]) = {e ∩ S | e ∈ E}. That is, each edge of H[S] is an edge from H restricted to S. The total edge size of H is denoted by |H| = ∑_{e∈E} |e|. Let (L, S, R) be a partition of V where L, R ≠ ∅. We say that (L, S, R) is a vertex cut of H if, for every u ∈ L and v ∈ R, u and v are not adjacent in H. Let κ : V → ℝ_{≥0} be vertex capacities of vertices in H. The size of the cut (L, S, R) is κ(S) = ∑_{u∈S} κ(u). The incidence graph of H, denoted by H_bip = (V ∪ E, E_bip), is a bipartite graph where E_bip = {(v, e) ∈ V × E | v ∈ e}. This bipartite view will be especially useful for implementing flow algorithms on hypergraphs. Note that |E_bip| = |H|. We say that a sequence of vertices v_1, ..., v_k forms a path in H if each pair of consecutive vertices v_i, v_{i+1} is adjacent in H. We define the length of the path v_1, ..., v_k to be k − 1. For any two vertices u, v in H we define dist(u, v) to be the length of the shortest u-v path in H, with dist(u, v) = ∞ if there is no u-v path in H. Given any vertex set K ⊆ V(H), we say that diam_H(K) ≤ d if for every pair of vertices u, v ∈ K we have that dist(u, v) ≤ d. Dynamic Hypergraphs.
We subsequently deal with a dynamic hypergraph H . We modelupdates by edge deletions/insertions to the incidence graph H bip . This corresponds to increas-ing/decreasing the size of some hyperedge e in H , or adding/removing a hyperedge in H entirely.One subtle detail that we use implicitly henceforth is that when we shrink or increase a hyperedge e then this does not result in a new version e but rather refers to the same edge at a differenttime step. This is important when we consider the all-time degree which is the total number ofhyperedges that a vertex v is ever contained in. Embedding.
In this article, we view an embedding P in a hypergraph H as a collection of paths in its corresponding bipartite graph representation H_bip. For any v ∈ V, we let P_v be the set of paths in P that contain the vertex v. With each path P ∈ P, we associate a value val(P) > 0.
We then say that the embedding P has vertex congestion with respect to vertex capacities κ at most c if for every vertex v ∈ V(H), ∑_{P∈P_v} val(P) ≤ c · κ(v). We say that the embedding P has length len if every path P ∈ P consists of at most len edges. Further, we associate with each embedding P into H a weighted (multi-)graph W taken over the same vertex set V(H), with an edge (u, v) of weight w(u, v) = val(P) for each u-v path P in P. We say that P embeds W into H and say that W is the embedded graph or the witness corresponding to P.
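The congestion and length conditions above translate directly into the following small check (our own sketch; the list-of-(path, value) representation over the vertices of H_bip is an assumption of the sketch):

def congestion_ok(embedding, kappa, c):
    """embedding: list of (path, value) pairs; check sum of values over paths
    through each vertex against c * kappa(v)."""
    load = {}
    for path, val in embedding:
        for v in set(path):
            load[v] = load.get(v, 0.0) + val
    return all(load[v] <= c * kappa[v] for v in load)

def length_ok(embedding, length_bound):
    # a path on k+1 vertices consists of k edges
    return all(len(path) - 1 <= length_bound for path, _ in embedding)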
Rounding Shorthand. For any numbers n and k, let ⌈n⌉_k = ⌈n/k⌉ · k denote the number obtained by rounding n up to the nearest multiple of k. Parameters.
Throughout the part we refer to three global parameters: φ cmg = 1 / Θ(log / n ) = n o (1) , (cid:15) wit = φ cmg / log ( n ) and δ scatter = C(cid:15) wit for a large enough constant C . φ cmg is first used inTheorem A.2.5, (cid:15) wit in Lemma II.3.5 and δ scatter in Definition II.2.5. A Formal Definition of a Decremental SSSP Data Structure.
In order to avoid restatingthe guarantees of a Decremental SSSP data structure throughout the part multiple times, we givethe following formal definition.
Definition II.1.1 ( SSSP ) . A decremental
SSSP data structure
SSSP ( G, s, (cid:15) ) is given a decre-mental graph G = ( V, E ) , a fixed source vertex s ∈ V , and an accuracy parameter (cid:15) ≥ . Then, itexplicitly maintains distance estimates e d ( v ) for all vertices v ∈ V such that dist G ( s, v ) ≤ e d ( v ) ≤ (1 + (cid:15) )dist G ( s, v ) . Simplifying reduction.
We will use the following simplifying reduction, which allows us to assume that our input graph G throughout this part has bounded degree and satisfies other convenient properties. We give a proof of the proposition below in Appendix A.3.1. Proposition II.1.2.
Suppose that there is a data structure
SSSP(H, s, ε) that only works if H satisfies the following properties:
• H always stays connected.
• Each update to H is an edge deletion (not an increase in edge weight).
• H has constant maximum degree.
• H has edge weights in [1, n_H].
Suppose SSSP(H, s, ε) has T_SSSP(m_H, n_H, ε) total update time, where m_H and n_H are the numbers of initial edges and vertices of H. Then, we can implement SSSP(G, s, O(ε)), where G is an arbitrary decremental graph with m initial edges that have weights in [1, W], using total update time Õ(m/ε + T_SSSP(O(m log W), m, ε)) · log(W). II.2 Main Components
In this section, we introduce the main components of our data structure. Although the part isself-contained, this section will be considerably more intuitive if the reader is familiar with theoverview section I.2.As pointed out in Section I.2.1, our data structure constructs a layering where each layer aimsat compressing the graph G further in order to compute the approximate SSSP distances up to acertain distance threshold. To make this notion of approximate SSSP up to a threshold precise,we introduce Approximate Balls in Section II.2.1. Next, we define the main building block: aRobust Core data structure that maintains a low-diameter vertex set with large approximation inSection II.2.2.With these two ingredients in place, we can introduce our most involved concept, a decrementalgraph Covering, formally in Section II.2.3. This concept forms the core of our data structure.Figure II.1: Relations between main components.Finally, we show how to use the Cover-ing as described in Section I.2.1 to compressthe graph G . We make the act of compres-sion formal by introducing the concepts ofCovering-Compressed Graphs and CompressedGraphs in Section II.2.4. The reason we re-quire these notions is to give a formal interfacefor the next higher level where Robust Coresand Approximate Balls are maintained on theCovering-Compressed/ Compressed Graph tofurther compress G . Still, we associate each(Covering-)Compressed Graph with the levelwhere its underlying Covering is maintained.We summarize the relations between thesecomponents in Figure II.1. II.2.1 Approximate Ball
Definition II.2.1. An approximate ball datastructure ApxBall ( G, S, d, (cid:15) ) is given a decre-mental graph G = ( V, E ) , a decremental sourceset S ⊆ V , a distance bound d > , and an ac-curacy parameter (cid:15) ≥ . Then, it explicitly maintains distance estimates e d ( v ) for all vertices v ∈ V such that1. e d ( v ) ≥ dist G ( S, v ) ,2. if v ∈ ball G ( S, d ) , then e d ( v ) ≤ (1 + (cid:15) )dist G ( S, v ) ,3. Each e d ( v ) may only increase through time. For convenience, we slightly abuse the notation and denote
ApxBall ( G, S, d, (cid:15) ) = { v | e d ( v ) ≤ (1 + (cid:15) ) d } as the set of all vertices v whose distance estimate e d ( v ) is at most (1 + (cid:15) ) d . We think of24his set as the set that the data structure maintains. The next proposition relates the approximateball to the exact ball. Proposition II.2.2.
We have ball G ( S, d ) ⊆ ApxBall ( G, S, d, (cid:15) ) ⊆ ball G ( S, (1 + (cid:15) ) d ) . Moreover, ApxBall ( G, S, d, (cid:15) ) is a decremental set.Proof. If v ∈ ball G ( S, d ), then by Item 2 of Definition II.2.1 we have e d ( v ) ≤ (1 + (cid:15) )dist G ( S, v ) ≤ (1 + (cid:15) ) d . So v ∈ ApxBall ( G, S, d, (cid:15) ). For the other direction, if v ∈ ApxBall ( G, S, d, (cid:15) ), we have e d ( v ) ≤ (1+ (cid:15) ) d . Since e d ( v ) ≥ dist G ( S, v ) by Item 1 of Definition II.2.1, we have dist G ( S, v ) ≤ (1+ (cid:15) ) d and so v ∈ ball G ( S, (1 + (cid:15) ) d ). Finally, ApxBall ( G, S, d, (cid:15) ) is a decremental set because e d ( v ) mayonly increase through time.A classic ES-tree data structure [ES81] immediately gives a fast implementation for ApxBall for the small distance regime.
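To make the small-distance regime concrete, the following sketch (ours, not the paper's code) shows the static computation that such a data structure starts from: a multi-source Dijkstra truncated at distance d. The ES-tree of [ES81] additionally maintains exactly these labels under edge deletions, which is where the O(|ball_G(S, d)| · d) total update time comes from.

import heapq

def approx_ball_init(adj, S, d):
    """adj: {u: [(v, w), ...]}; returns {v: dist_G(S, v)} for v in ball_G(S, d)."""
    dist = {s: 0 for s in S}
    pq = [(0, s) for s in S]
    heapq.heapify(pq)
    while pq:
        du, u = heapq.heappop(pq)
        if du > dist.get(u, float("inf")):
            continue
        for v, w in adj[u]:
            nd = du + w
            if nd <= d and nd < dist.get(v, float("inf")):   # truncate at radius d
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist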
Proposition II.2.3 ([ES81]) . We can implement
ApxBall ( G, S, d, in O ( | ball G ( S, d ) | · d ) time.Remark II.2.4 . Given any static input graph G , and static set S , we define T ApxBall ( G, S, d, (cid:15) ) torefer to the worst-case total update time required by our data structure
ApxBall ( G , S , d, (cid:15) ) forany decremental graph G initially equal to G , and decremental set S initially equal to S . We alsosometimes abuse notation and let G, S be a decremental graph and set respectively, in which casewe only refer to their initial versions in T ApxBall ( G, S, d, (cid:15) ).Note that this definition of update time, allows us to immediately conclude that for any graphs G and G , and sets S and S where G ⊆ G and S ⊆ S , we have T ApxBall ( G, S, d, (cid:15) ) ≤ T ApxBall ( G , S , d, (cid:15) )since any worst-case instance incurring T ApxBall ( G, S, d, (cid:15) ) can be emulated by deleting G \ G and S \ S from G and S in the first stage respectively. This allows us to state times more compactly andcombine bounds. Note that the above in fact also implies T ApxBall ( G, S, d, (cid:15) ) ≤ T ApxBall ( G, S, d , (cid:15) )for any d ≤ d .We also assume that T ApxBall ( G, S, d, (cid:15) ) = Ω( | ball G ( S, d ) | ) which is true throughout the part. II.2.2 Robust Core
Given a set K ⊆ V of vertices of a graph G = ( V, E ), we informally call K a core set if its weakdiameter diam G ( K ) is small. That is, every pair of vertices in K are close to each other. In thedefinition below, recall that δ scatter = n o (1) is a global variable set in Section II.1. For intuition,think of str also as n o (1). Definition II.2.5. A robust core data structure RobustCore ( G, K init , D ) with a scattering pa-rameter δ scatter ∈ (0 , and a stretch str ≥ is given • a decremental graph G = ( V, E ) , and • an initial core set K init ⊂ V ( G ) where diam G ( K init ) ≤ D when the data structure is calledinitiallyand maintains a decremental set K ⊆ K init called core set until K = ∅ such that1. (Scattered): for each vertex v ∈ K init \ K , we have | ball G ( v, D ) ∩ K init | ≤ (1 − δ scatter ) ·| K init | , and2. (Low stretch): diam G ( K ) ≤ str · D . For convenience, we sometimes slightly abuse the notation and denote the maintained core set K = RobustCore ( G, K init , D ). Also, we introduce T RobustCore ( G, K init , D ) to refer to the totalupdate time required by our data structure implementing
RobustCore ( G, K init , D ).
II.2.3 Covering
As mentioned before, the key ingredients of Approximate Ball and Robust Core can now be used to define a Covering that we can implement efficiently. This is the key building block of our interface.
Definition II.2.6.
Let G = ( V, E ) be a decremental graph and (cid:15) ≤ / . A ( d, k, (cid:15), str , ∆) -covering C of G is a collection of vertex sets called cores where each core C ∈ C is associated with othersets called the cover, shell, and outer-shell of C denoted by cover ( C ) , shell ( C ) , shell ( C ) ,respectively. We have the following1. Each core C is assigned a level ‘ core ( C ) = ‘ ∈ [0 , k − . All cores C from the same level arevertex disjoint.2. For each level ‘ we define d ‘ (cid:44) d · ( str (cid:15) ) ‘ and have(a) C = RobustCore ( G, C init , d ‘ ) with stretch at most str , and C init denotes C wheninitialized in C .(b) cover ( C ) = ApxBall ( G, C, d ‘ , (cid:15) ) and shell ( C ) = ApxBall ( G, C, str4 (cid:15) d ‘ , (cid:15) ) .(c) shell ( C ) = ball G ( C, str3 (cid:15) d ‘ ) .3. For every vertex v ∈ V , at all times there is a core C where v ∈ cover ( C ) . We say v is covered by C .4. At all times, each vertex v ∈ V can ever be in at most ∆ many outer-shells. That is, the totalnumber of cores C that v ∈ shell ( C ) over the whole update sequence is at most ∆ .We call d the distance scale, str the stretch parameter, k the level parameter, (cid:15) the accuracy param-eter, and ∆ the outer-shell participation bound. We note that the notion of outer-shells will be important later for path-reporting data structures,more specifically, in Lemma III.3.5 and Lemma III.4.8. The following observation reveals basicstructures of cores in the covering.
Proposition II.2.7.
For each core C ∈ C , the sets C , cover ( C ) , shell ( C ) and shell ( C ) aredecremental. Moreover, C ⊆ cover ( C ) ⊆ shell ( C ) ⊆ shell ( C ) .Proof. C is decremental by the guarantee of RobustCore . As C is decremental, cover ( C ) and shell ( C ) are decremental by Proposition II.2.2. Also, shell ( C ) must be decremental as both G and C are decremental. The moreover part follows by Proposition II.2.2 and the fact that(1 + (cid:15) ) str4 (cid:15) ≤ str3 (cid:15) as (cid:15) ≤ / II.2.4 (Covering-)Compressed Graphs
Given a covering C of G, we can define a natural bipartite graph H_C associated with the covering C. We call this graph a Covering-Compressed Graph. Definition II.2.8 (Covering-Compressed Graph). Let C be a (d, k, ε, str, ∆)-covering of a graph G = (V, E) at any point of time. A weighted covering-compressed graph of C, denoted by H_C = (V ∪ C, E), is a bipartite graph where E = {(v, C) ∈ V × C | v ∈ shell(C)}. For each edge e = (v, C) ∈ E, the weight is w_C(e) = ⌈str · d_{ℓ_core(C)} + d̃_C(v)⌉_{εd}, where d̃_C(v) is the distance estimate of dist_G(C, v) from the instance of ApxBall that maintains shell(C). An (unweighted) covering-compressed graph H_C of C is defined exactly the same, but each edge in H_C is unweighted.
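The edge weight can be spelled out with the following small sketch (ours), assuming the level-ℓ distance scale d_ℓ = d · (str/ε)^ℓ as we read it from Definition II.2.6, and using the rounding shorthand from the preliminaries; the function names and the role of dist_estimate (the ApxBall estimate of dist_G(C, v)) are our own labels.

import math

def round_up_to_multiple(x, k):
    """The shorthand from the preliminaries: x rounded up to a multiple of k."""
    return math.ceil(x / k) * k

def level_distance_scale(d, str_, eps, level):
    return d * (str_ / eps) ** level

def covering_compressed_edge_weight(d, str_, eps, level, dist_estimate):
    d_level = level_distance_scale(d, str_, eps, level)
    return round_up_to_multiple(str_ * d_level + dist_estimate, eps * d)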
In other words, the unweighted covering-compressed graph H_C is an incidence graph of the hypergraph on vertex set V where, for each core C, there is a hyperedge e containing all vertices in shell(C). Intuitively, if v ∈ shell(C), then w_C(e) corresponds to the distance from v to a vertex inside C: d̃_C(v) corresponds to the distance from v to the core C, while by the guarantees of RobustCore (Definition II.2.5), str · d_{ℓ_core(C)} is an upper bound on the diameter of C. Remark
II.2.9 . The correspondence between the covering C and the (weighted and unweighted)covering-compressed graph H C of C is straightforward. Given an algorithm that maintains C , wecan assume that it also maintains H C for us as well.When we implement RobustCore data structure, we will exploit the covering-compressedgraph via a simple combinatorial property. Hence, we abstract this property out via a conceptcalled a compressed graph . Definition II.2.10 (Compressed Graph) . Let G = ( V, E ) be a decremental graph. We say that anunweighted hypergraph H is a ( d, γ, ∆)-compressed graph of G with distance scale d , gap parameter γ , and maximum all-time degree ∆ if the following hold: • if dist G ( u, v ) ≤ d , then u and v are adjacent in H . • if dist G ( u, v ) > d · γ , then u and v are not adjacent in H . • Throughout the update sequence on G , for each v ∈ V , the total number of edges in H everincident to v is at most ∆ . Recall that every unweighted bipartite graph represents some unweighted hypergraph. Thefollowing shows that the hypergraph view of any covering-compressed graph is indeed a compressedgraph.
Proposition II.2.11 (A Covering-Compressed Graph is a Compressed Graph) . Let C be an ( d, k, (cid:15), str , ∆) -covering of a graph G where ≤ str4 (cid:15) . Let H C be a covering-compressed graph of C . Then, the hypergraph view of H C is a ( d, γ, ∆) -compressed graph of G where γ = (str /(cid:15) ) k .Proof. Consider any u, v ∈ V ( G ) with dist G ( u, v ) ≤ d . Let C ∈ C be a core that covers u , i.e., u ∈ cover ( C ) = ApxBall ( G, C, d ‘ , . v ∈ shell ( C ). Let ‘ = ‘ core ( C ). We havedist G ( C, v ) ≤ dist G ( C, u ) + dist G ( u, v ) ≤ d ‘ · (1 .
1) + d ‘ ≤ d ‘ ≤ d ‘ ( str4 (cid:15) ). As cover ( C ) ⊆ shell ( C ),both u, v ∈ shell ( C ) and thus u and v are adjacent in H . Next, suppose that u and v are adjacent in H . Then, for some ‘ , there is a level- ‘ core C where u, v ∈ shell ( C ) = ApxBall ( G, C, str4 (cid:15) d ‘ , . G ( u, v ) ≤ · str4 (cid:15) d ‘ · . ≤ str (cid:15) d k − = d k = dγ . Lastly, as every vertex v ∈ V can ever be in atmost ∆ shells, the maximum all-time degree of v ∈ V is at most ∆.There is a trivial way to construct a (1 , , O (1))-compressed graph of a bounded-degree graphwith integer edge weights (recall that G is such a graph by the simplifying assumption in Proposi-tion II.1.2): Proposition II.2.12 (A Trivial Compressed Graph) . Let G be a bounded-degree graph G withinteger edge weights. Let G unit be obtained from G by removing all edges with weight greater thanone. Then, G unit is a (1 , , O (1)) -compressed graph of G . We will use the above trivial compressed graph in the base case of our data structure for verysmall distance scale. 27
II.2.5 Organization of the Part
In the remaining sections, we first present in Section II.3 an algorithm to maintain a Robust Coresince it is conceptually the most interesting component. We then show how to implement theCovering in Section II.4, which is the key building block of our interface and also requires severalnew ideas. In Section II.5, we show how to implement Approximate Balls. This section is rathertechnical and follows well-known techniques.Finally, we combine the components and set up the layering of our data structure in Section II.6.
II.3 Implementing Robust Cores
In this section, we show how to implement a robust core data structure
RobustCore for distancescale D , given a compressed graph for distance scale d (cid:28) D . We introduced Robust Cores alreadyin Definition I.2.4 in the overview for the special case of the theorem below when D = n o (1) . Theorem II.3.1 (Robust Core) . Let G be an n -vertex bounded-degree decremental graph. Supposethat a ( d, γ, ∆) -compressed graph H of G is explicitly maintained for us. We can implement arobust core data structure RobustCore ( G, K init , D ) with scattering parameter δ scatter = ˜Ω( φ cmg ) and stretch str core = ˜ O ( γ/φ cmg ) and total update time of ˜ O (cid:16) T ApxBall ( G, K init , D log n, . ( D/d ) /φ cmg (cid:17) . Remark
II.3.2 . We assume here that only edge deletions incident to ball G ( K init , D log n ) in theinitial graph are forwarded to the Robust Core data structure. When we use multiple Robust Coredata structures later on the same graph G , we assume that updates are scheduled effectively tothe relevant Robust Core data structure. We point out that such scheduling is extremely straight-forward to implement and therefore henceforth implicitly assumed. II.3.1 Algorithm
For this section, we remind the reader of the intuition provided for Robust Core provided in theoverview Section II.2.2 which provided the simplified Pseudo-Code 1. We present the full Pseudo-Code for Robust Core in Algorithm 3. We now discuss the algorithm in detail and state the formalguarantees that the various subprocedures achieve.
Constructing b H (Line 1). The algorithm starts by constructing a special graph b H that can bethought of as being the ( d, γ, ∆)-compressed graph H that is maintained for us, restricted to theset B init with the addition of some missing edges from G , where B init = ball G ( K init , D log n )is the static set of vertices that are in the ball around K init in the initial graph G . We define b H formally below. Definition II.3.3 (Heavy-Path Augmented Hypergraph) . Given a ( d, γ, ∆) -compressed graph H of G that is explicitly maintained for us, a set K init ⊆ V ( H ) , and a parameter D ≥ d .Then, let ˆ E ← { e ∈ E ( G [ B init ]) | d < w ( e ) ≤ D · log n } . Let b P be a collection of heavy paths where each edge e = ( u, v ) ∈ ˆ E corresponds to a u - v path b P e ∈ b P consisting of d w ( e ) /d e edges. Define b H be the union of H [ B init ] and all heavy paths b P (where internal vertices to each b P e are added as new vertices). We then say that a graph b H is the ( H, G, K init , d, D, γ, ∆) -heavy-path-augmented graph. Note that b H is an unweighted graph. lgorithm 3: RobustCore ( G, K init , D ) Input:
A ( d, γ, ∆)-compressed graph H of G that is explicitly maintained for us, a set K init ⊆ V ( H ), and a parameter D ≥ d . Construct (
H, G, K init , d, D, γ, ∆)-heavy-path-augmented graph b H . // see Def II.3.3 b V ← V ( b H ); K ← K init ; γ size ← · | b V | / | K init | . ∀ v ∈ K init , κ ( v ) ← ∀ w ∈ b V \ K init , κ ( w ) ← /γ size . // P v ∈ b V κ ( v ) = O ( | K init | ) .// As long as there exists a large core in K init . while CertifyCore ( G, K init , D, (cid:15) wit / returns Core K do // While low-diameter graph has some sparse cut, double the cut weight. while EmbedWitness ( b H, K , κ ) returns a vertex cut of ( L, S, R ) in b H do foreach cut-vertex v ∈ S do κ ( v ) ← κ ( v ). /* To ensure the technical side condition in Claim II.3.9. */ Let w be an arbitrary vertex from S maximizing κ ( w ); pick an arbitrary w = w from b V and do κ ( w ) ← max { κ ( w ) , κ ( w ) } . // Let P be the embedding that EmbedWitness ( · ) returned; and W thecorresponding witness. Let W multi be the multi-graph version of W . ( P , W ) ← EmbedWitness ( b H, K , κ ). Let (unweighted) multi-graph W multi be derived from W by adding w ( e ) /γ size copiesfor each e ∈ E ( W ) (Recall w ( e ) /γ size is an integer by guarantees of Lemma II.3.5) . // Maintain the witness W multi of b H until a lot of capacity is deleted. while X ⊆ K from running Prune ( W multi , φ cmg ) has size ≥ | K init | / do Maintain
ApxBall ( G, X, D, .
1) and remove every leaving vertex from K . K ← ∅ ; return The intuition for the heavy-path-augmented graph is quite simple. We would like to ensure thatfor any edge ( u, v ) ∈ G with w ( u, v ) ≤ D log( n ), u and v are also nearby in b H . If w ( u, v ) ≤ d then u and v are adjacent in H ⊆ b H by definition of H being a ( d, γ, ∆)-compressed graph. If d < w ( u, v ) ≤ D log( n ) then there exists a heavy path ˆ P from u to v with at most O ( D/d ) edges.Since we only deal with a single (
H, G, K init , d, D, γ, ∆)-heavy-path-augmented graph in the restof this part, we use b H to refer to this instance throughout. (We note that we assume throughoutthat b V = V ( b H ) is of size at least 2 since otherwise Robust Core is trivially implemented). Parameters:
In the description below, recall the three global variables we set in Section II.1.
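Before turning to the individual steps, the heavy-path augmentation from Definition II.3.3 (used in Line 1 of Algorithm 3) can be illustrated with the following small sketch (ours): every graph edge whose weight lies strictly above d and at most the relevant threshold is replaced by an unweighted path on ⌈w(e)/d⌉ edges with fresh internal vertices. The representation and names are assumptions of the sketch; in the algorithm these paths are added on top of the compressed graph restricted to B^init.

import math

def heavy_path_edges(heavy_edges, d, w_max):
    """heavy_edges: {(u, v): w}.  Returns the unweighted edges of the paths
    subdividing every edge with d < w <= w_max into ceil(w/d) unit edges."""
    new_edges = []
    for (u, v), w in heavy_edges.items():
        if not (d < w <= w_max):
            continue
        k = math.ceil(w / d)                                   # edges on the path
        chain = [u] + [("aux", u, v, i) for i in range(k - 1)] + [v]
        new_edges.extend(zip(chain, chain[1:]))
    return new_edges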
Certifying a Large Core (Line 4).
After some further initialization takes place where inparticular we set K to be equal to K init , the main while-loop starting in Line 4 starts by checkingits condition. This task is delegated to a procedure CertifyCore ( · ) which either returns a largeset K ⊆ K init of small diameter (in G ) which is called the core K , or announces that all verticessatisfy the scattered property which allows us to set K to be the empty set and terminate. Theproof is deferred to Appendix A.2.1. Lemma II.3.4.
There is an algorithm
CertifyCore ( G, K, d, (cid:15) ) with the following input: an n -vertex graph G = ( V, E, w ) , a set K ⊆ V , an integer d > , and a parameter (cid:15) > . In time O (deg G (ball G ( K, d lg n )) log n ) , the algorithm either • (Scattered): certifies that for each v ∈ K , we have | ball G ( v, d ) ∩ K | ≤ (1 − (cid:15)/ | K | , or (Core): returns a subset K ⊆ K , with | K | ≥ (1 − (cid:15) ) | K | and diam G ( K ) ≤ d lg n . Embedding the Low-Diameter Graph (Line 5-8).
If a core K is returned by CertifyCore ( · ),then we use the procedure EmbedWitness ( · ) which either returns a large sparse vertex cut( L, S, R ) (with respect to κ and K ) or an embedding P that embeds a witness graph W in b H .Note that the entire reason of having the capacity function κ in the algorithm is to repeatedly findan embedding according to κ and to then argue about progress between two such embedding steps. Lemma II.3.5.
There is an algorithm
EmbedWitness ( H, K, κ ) that is given a hypergraph graph H = ( V, E ) , a terminal set K ⊆ V , and /z -integral vertex capacities κ : V → z Z ≥ such that κ ( v ) ≥ for all terminals v ∈ K and κ ( v ) ≤ κ ( V ) / for all vertices v ∈ V . (The integralityparameter z will appear in the guarantees of the algorithm.) The algorithm returns either • (Cut): a vertex cut ( L, S, R ) in H such that (cid:15) wit | K | ≤ | L ∩ K | ≤ | R ∩ K | and κ ( S ) ≤ | L ∩ K | ,where (cid:15) wit = φ cmg / log ( n ) is a parameter we will refer to in other parts of the paper; OR • (Witness): an embedding P that embeds a weighted multi-graph W into H with the followingguarantees: – W is a weighted Ω( φ cmg ) -expander. The vertex set V ( W ) is such that V ( W ) ⊆ K and | V ( W ) | ≥ | K | − o ( | K | ) . Each edge weight is a multiple of /z , where recall that z isthe smallest positive integer such that κ : V → z Z ≥ . The total edge weight in W is O ( | K | log | K | ) . Also, there are only o ( | K | ) vertices in W with weighted degree ≤ / . – The length of P and vertex congestion of P w.r.t. κ are at most O ( κ ( V ) log( κ ( V )) / ( | K | (cid:15) wit )) and O (log | K | ) , respectively. More precisely, each path in P has length at most O ( κ ( V ) log( κ ( V )) / ( | K | (cid:15) wit )) . For each vertex v ∈ V , P P ∈P v val( P ) = O ( κ ( v ) log | K | ) where P v is the set of paths in P containing v . Moreover, each path in P is a simplepath.The running time of the algorithm is ˜ O ( | H | κ ( V ) | K | φ cmg + zκ ( V ) /φ cmg ) , where | H | = P e ∈ E | e | and z isthe smallest positive integer such that κ : V → z Z ≥ . Recall here that there is an edge ( u, v ) of weight w ( u, v ) = val( P ) in W for every u - v path P in P . Intuitively, the lemma above guarantees that diam H ( V ( W )) is small because the length ofevery path in the embedding is small, and diam( W ) is small because W is an expander.In the algorithm, we invoke EmbedWitness ( · ) and if it returns a vertex cut, we double thecapacity function κ ( v ) for all vertices v in the cut set S . We also update some additional vertex inLine 7: this is just a blunt and simple way to enforce the technical side conditions of Claim II.3.9.Eventually, the doubling steps increase the potential enough to ensure that the witness graph W can be embedded into H . Maintaining the Witness and its Approximate Ball (Line 9-11).
We start in Line 9 byobtaining an unweighted version of W which we call W multi . This version is derived by scaling upedge weights in W so that each weight becomes an integer. Then, we replace edges with weightsby multi-edges each of unit weight.The above transformation from W to W multi is simply so that we can run the pruning subroutinebelow, which is restricted to unweighted graphs. Pruning allows us to maintain a large set X suchthat W multi [ X ] (and therefore also W [ X ]) remains an expander. Lemma II.3.6 ([SW19]) . There is an algorithm
Prune ( W, φ ) that, given an unweighted decremen-tal multi-graph W = ( V, E ) that is initially a φ -expander with m edges, maintains a decremental et X ⊆ V using ˜ O ( m/φ ) total update time such that W [ X ] is a φ/ -expander at any point of time,and vol W ( V \ X ) ≤ i/φ after i updates. As mentioned, we denote the maintained set after removing the pruned part by X = Prune ( W multi , φ ). Since W multi is only used to turn W into an unweighted graph whilepreserving all its properties (except number of edges), we refer in all proofs straight-forwardly to W and say that W is pruned, even when we really mean that W multi is pruned.Now, as long as a large set X exists, even as b H and therefore W undergoes edge updates, weroot an approximate ball ApxBall ( G, X, D, .
1) at the decremental set X . For every vertex thatleaves this approximate ball, we check whether it is in K still, and if so we remove it from K . II.3.2 Analysis
Throughout the analysis section, we let κ final denote the vertex capacity function κ taken when thealgorithm terminates. The following is the key lemma in our analysis. Lemma II.3.7.
At any point of time, the total vertex capacity in b H is κ ( b V ) ≤ κ final ( b V ) ≤ O (cid:18) | K init | Dd log ( n ) (cid:19) . The first inequality holds because κ ( b V ) can only increase through time by Line 6 of Algorithm 3.We defer the proof of the second inequality to the end of this section. However, we use this lemmabefore to establish correctness and update time. We also use throughout that κ is a monotonicallyincreasing function over time, which can be seen easily from the algorithm. Correctness.
We now establish the correctness, i.e. that K indeed forms a Robust Core asdefined in Definition II.2.5 and parameterized in Theorem II.3.1. Lemma II.3.8 (Correctness) . At any stage of Algorithm 3, the set K init and K satisfy1. (Scattered): for each vertex v ∈ K init \ K , we have | ball G ( v, D ) ∩ K init | ≤ (1 − δ scatter ) ·| K init | where recall that δ scatter = Θ( (cid:15) wit ) = ˜Ω( φ cmg ) .2. (Low stretch): diam G ( K ) ≤ str core · D where str core = ˜ O ( γ/φ cmg ) . (Scattered): Observe that every vertex v in K init is originally in K . Further, a vertex v canonly be removed from K in Line 11. But this in turn only occurs if v has its distance estimate from X larger than 4 D . Thus, dist G ( v, X ) > D (by the approximation guarantee of Definition II.2.1).It remains to observe that by Line 10 X ⊆ K ⊆ K init contains at least half the vertices in K init . This implies | ball G ( v, D ) ∩ K init | ≤ | K init | / < (1 − δ scatter ) · | K init | . Finally, observe thatprior to termination of the algorithm, we have that the while-condition in Line 4 was false, andtherefore CertifyCore ( G, K init , D, (cid:15) wit /
2) announced that the entire set K init is scattered (seeLemma II.3.4) by the choice of δ scatter = Ω( (cid:15) wit ) = ˜Ω( φ cmg ). This allows us to subsequently set K = ∅ and return. (Low stretch): We bound the diameter of K in two steps: first we bound diam b H ( X ) =˜ O ( Dd /φ cmg ), then we show that diam G ( X ) = O ( γd · diam b H ( X )). Combined, this establishes theLow Stretch Property since we enforce that vertices that leave ApxBall ( G, X, D, .
1) are removedfrom K , so diam G ( K ) = O (diam G ( X )).diam b H ( X ) = ˜ O ( Dd /φ cmg ): We have that by Lemma II.3.5 for EmbedWitness ( · ) that the lengthlen( P W ) of the embedding P W of W is at most O ( κ ( b V ) log( κ ( b V )) / ( | K | (cid:15) wit )). It is not hard to checkthat log( κ ( b V )) = O (log( n )) because we know by Lemma II.3.7 that κ ( b V ) ≤ κ final ( b V ), and it is easy31o see that κ final ( b V ) is polynomial in n because both | K init | and D are polynomial in n . We havethat len( P W ) = ˜ O ( κ ( b V ) / ( | K | (cid:15) wit )) . Thus, any u - v P path in W can be mapped to a corresponding u - v path in P W of length O ( | P | · len( P W )). This implies that diam b H ( X ) ≤ diam( W ) · len( P W ). We further have that W multi [ X ] forms an expander, and it is further well known that the diameter of an expander isupper bounded by O (log n ) over its expansion, and we therefore have diam( W multi ) = ˜ O (1 /φ cmg ).Also note that since diam( W multi ) is derived from W by copying edges, we have that the samestatement is true for W . Combining these insights, we obtaindiam b H ( X ) = ˜ O (1 /φ cmg ) · O (len( P W )) = ˜ O (cid:18) Dd / ( φ cmg (cid:15) wit ) (cid:19) where the last equality is by Lemma II.3.7 (recall κ ( b V ) ≤ κ final ( b V )). As (cid:15) wit = b Ω( φ cmg ) byLemma II.3.5, we have diam b H ( X ) = ˜ O ( Dd /φ cmg ).diam G ( X ) = O ( γd · diam b H ( X )): For any u, v ∈ X ⊆ K init ⊆ V , consider a u - v shortest path P in b H . Observe that since u, v are vertices in G , we have that P is formed from (entire) heavy paths(corresponding to edges of weight ≥ d in G ) and edges in H .For each heavy path P on P , we have that it is of length at most O ( d ) times the original path(recall, we round the weight of the edge in G by d and insert a path of the corresponding length).On the other hand, any edge ( u , v ) in H has dist G ( u , v ) ≤ γd by definition. The latter factorsubsumes the former and establishes our claim.We also need to prove that the side conditions of EmbedWitness ( · ) hold throughout theexecution of the algorithm. The proof is deferred to Appendix A.2.3. Claim II.3.9 (Side-Conditions) . Whenever the algorithm invokes
EmbedWitness ( · ) , we have1. κ ( v ) ≥ for all terminals v ∈ K ,2. κ ( v ) ≤ κ ( V ) / . Total Update Time.
As we have proven the correctness of the algorithm, it remains to analyzethe total update time.
Lemma II.3.10.
The total number of while-loop iterations starting in Line 4 is at most ˜ O ∆ κ final ( b V ) | K init | /φ cmg ! . Proof.
First, we observe that the total weight of edges that are ever deleted from any of the witnessgraphs W is at most O (∆ κ final ( b V ) log( n )). To see this, recall first that the weight of an edge( u, v ) in a graph W (associated with embedding P ) is equal to P P ∈ P uv val( P ) where P uv is theset of u - v paths in P . Now observe that whenever an edge ( v, e ) ∈ E ( b H bip ) of the incidence graph b H bip of b H is deleted where v ∈ b V and e ∈ E ( H ), the total value of the paths P ve containing theedge ( v, e ) is at most O ( κ ( v ) log( n )) = O ( κ final ( v ) log( n )) by the guarantee on vertex congestion of EmbedWitness ( · ) from Lemma II.3.5. Further such an edge ( v, e ) once deleted does not occur inany future witness graph W . But there are at most ∆ + 3 edges incident to v in all versions of b H (∆ from H , 3 from G ). But this bounds the total weight ever deleted from all graphs W by O (∆ κ final ( b V ) log( n )). 32n the other hand, we claim that during a while-loop iteration, at least κ del = φ cmg | K init | / W . Assume for the sake of contradiction that this is not true. Observe firstthat we build W to initially have | K | − o ( | K | ) ≥ | K init | − o ( | K init | ) vertices with weighted degreeat least 9 /
10 (see the while-loop condition in Line 4 and the guarantees on
EmbedWitness ( · )from Lemma II.3.5). But deleting κ del from W causes Prune ( W multi , φ cmg ) to ensure that set X is such that vol W ( V ( W ) \ X ) ≤ κ del /φ cmg = | K init | /
4. This in turn implies that at most · | K init | / ≤ | K init | / /
10 are in V ( W ) \ X . Therefore, | X | ≥| K init | − o ( K init ) − | K init | / ≥ | K init | /
2. But this contradicts that the while-loop iteration is oversince the condition of the while-loop in Line 10 is still satisfied.By using the second claim to charge the sum from the first claim, we establish the lemma.
Lemma II.3.11.
The total number of times
EmbedWitness is called is at most ˜ O (∆ κ final ( b V ) | K init | /φ cmg ) .Proof. Every time
EmbedWitness returns a vertex cut (
L, S, R ), we double the capacity κ ( v )of every vertex v ∈ S . So the total capacity is increased by κ ( S ) ≥ | L ∩ K init | ≥ (cid:15) wit | K init | by Lemma II.3.5. Further, in Line 7, we only further increase κ . But since κ final ( b V ) is the to-tal final capacity, we have that there can be at most O ( κ final ( b V ) (cid:15) wit | K init | ) = ˜ O ( κ final ( b V ) φ cmg | K init | ) times that EmbedWitness ( · ) returns a vertex cut.The number of times that EmbedWitness returns an embedding is at most the number ofwhile-loop iterations which is ˜ O (∆ κ final ( b V ) | K init | /φ cmg ) by Lemma II.3.10. By summing the number oftimes from the two cases, the lemma holds. Lemma II.3.12.
The total running time of Algorithm 3 is ˜ O T ApxBall ( G, K init , D log n, . (cid:18) Dd (cid:19) /φ cmg ! . Initialization: It is straight-forward to see that the initialization (i.e. the first two lines in Algo-rithm 3) can be executed in O ( | b H | ) by using an invocation of Dijkstra and some basic operations.A Single Iteration of the While-Loop starting in Line 4 (Excluding EmbedWitness ( · )): Thewhile-loop condition (and computing K ) in Line 4, is checked using CertifyCore ( · ) which takes˜ O (cid:0)(cid:12)(cid:12) ball G ( K init , D log n ) (cid:12)(cid:12)(cid:1) time by Lemma II.3.4.The time spent on Prune ( W multi , φ cmg ) is bound by Lemma II.3.6 to be O ( | E ( W multi ) | /φ cmg ).We then use the fact that W multi has at most O ( γ size | K init | log | K init | ) edges because it is derivedfrom W by making w ( e ) /γ size copies of each edge e in W where we established that the total weightof all edges in W is O ( | K init | log | K init | ) by Lemma II.3.5. As γ size = | b V | / | K init | , we thus have that | E ( W multi ) | = ˜ O ( | b V | ) and therefore the time spent during a while-loop iteration on pruning is atmost O ( | E ( W multi ) | /φ cmg ) = ˜ O ( | b V | /φ cmg ).Finally, we have to account for the time required to maintain ApxBall ( G, X, D, .
1) which is T ApxBall ( G, X, D, . ≤ T ApxBall ( G, K init , D log n, . ApxBall in Remark II.2.4.All other operations during the while-loop have time subsumed by the former procedures (orthe invocations of
EmbedWitness ( · )) giving total time˜ O (cid:16)(cid:12)(cid:12)(cid:12) ball G ( K init , D log n ) (cid:12)(cid:12)(cid:12) + ˜ O ( | b V | /φ cmg ) + T ApxBall ( G, X, D, . (cid:17) = ˜ O ( T ApxBall ( G, K init , D log n, .
1) + | b V | /φ cmg ) , (II.1)where we used that T ApxBall ( G, K init , D log n, .
1) = Ω( | ball G ( K init , D log n ) | ), as discussedin Remark II.2.4. 33ll Iterations of the While-Loop starting in Line 4 (Excluding EmbedWitness ( · )): As there areat most ˜ O (∆ κ final ( b V ) | K init | /φ cmg ) while-loop iterations by Lemma II.3.10, the total time spent (excludingtime spent on EmbedWitness ) is˜ O ( T ApxBall ( G, K init , D log n, . κ final ( b V ) | K init | /φ cmg + | b H | ∆ κ final ( b V ) | K init | /φ cmg )where we used | b V | ≤ | b H | in the last term.Time spent on EmbedWitness ( · ): It is not hard to see that κ is a 1 /γ size -integral function,with γ size = | b V | / | K init | . Therefore, each call to EmbedWitness ( · ) in Line 8 takes time˜ O ( | b H | κ ( b V ) | K init | φ cmg + γ size · κ ( b V ) /φ cmg ) = ˜ O ( | b H | κ final ( b V ) | K init | φ cmg + | b V | κ final ( b V ) | K init | φ cmg )because κ ( b V ) ≤ κ final ( b V ). We can assume w.l.o.g. that | b V | = O ( | b H | ) since the only way thiscould be false is if half the vertices of b V were isolated (i.e. had no incident edges), in which casea sparse cut in b H could trivially be found by computing connected components in | b H | time Wecan thus simplify the above bound to ˜ O ( | b H | κ final ( b V ) | K init | φ cmg ). Finally, we note that by Lemma II.3.11,there are at most ˜ O (∆ κ final ( b V ) | K init | /φ cmg ) calls to EmbedWitness ( · ). Therefore, the total time spenton EmbedWitness ( · ) is at most ˜ O | b H | ∆ κ final ( b V ) | K init | ! /φ cmg . Combining Calculations: By combining the two bounds above, the total time including the timespent on
EmbedWitness is at most˜ O ( T ApxBall ( G, K init , D log n, . κ final ( b V ) | K init | /φ cmg + | b H | ∆( κ final ( b V ) | K init | ) /φ cmg ) . To simplify this expression, we have κ final ( b V ) | K init | = O ( Dd log ( n )) by Lemma II.3.7 and also | b H | =˜ O (∆ Dd (cid:12)(cid:12) ball G ( K init , D log n ) (cid:12)(cid:12) ) which can be verified by checking Definition II.3.3 of b H from H and G (where each edge in G might result in ˜ O ( D/d ) new heavy-path edges in b H and where wehave constant degree by assumption). Therefore, the expression can be bounded by˜ O T ApxBall ( G, K init , D log n, . (cid:18) Dd (cid:19) /φ cmg ! . as claimed. (Here we used that T ApxBall ( G, K init , D log n, .
1) = Ω( | ball G ( K init , D log n ) | ), asdiscussed in Remark II.2.4.) Final Total Capacity.
Finally, we bound the final total vertex capacity κ final ( b V ) of b H as claimedin Lemma II.3.7. Unfortunately, it is rather difficult to argue directly about b H since it is fully-dynamic. To establish our proof, we therefore rely on analyzing another graph b G which is usedpurely for analysis.We define b G to be a dynamic unweighted graph with vertex set V ( b G ) = b V and the edgeset E ( b G ) taken to be the union of the edges { ( u, v ) ∈ B init × B init | dist G ( u, v ) ≤ d } and alledges on heavy paths b P that were also added to b H (recall Definition II.3.3 and the definition B init = ball G ( K init , D log( n ))).We first list structural properties of b G below:34 roposition II.3.13. We have the following:1. b G is a decremental graph.2. For any u, v ∈ K init , if dist G ( u, v ) ≤ D log n , then dist b G ( u, v ) ≤ · d dist G ( u, v ) /d e .3. If ( L, S, R ) is a vertex cut in b H , then ( L, S, R ) is also a vertex cut in b G . Property 1: Observe that since G is a decremental graph, distances in G are monotonicallyincreasing. Thus, the set { ( u, v ) ∈ B init × B init | dist G ( u, v ) ≤ d } is decremental. Further, recallthat we assume that G is undergoing edge deletions (no weight updates) and once an edge e isdeleted from G its corresponding heavy path P e ∈ b P (if one is associated with e ) is simply deletedfrom b G . Thus, b G is a decremental graph.Property 2: Let P be a shortest u - v path in G . Let E heavy = { e ∈ P | w ( e ) > d } . We canpartition the path P into P = P ◦ e ◦ P ◦ · · · ◦ e | E heavy | ◦ P | E heavy | +1 where each e i ∈ E heavy andeach path P i contains only edges in G with weight at most d . It remains to observe that we canreplace• each u i - v i path P i in G by finding a minimal set S i of vertices on P i with u i , v i ∈ S i suchthat each vertex x in S i \ { v i } is at most at distance d to some vertex that occurs later on P i than x . Then, we can replace the path between each such two consecutive vertices by anedge in { ( u, v ) ∈ B init × B init | dist G ( u, v ) ≤ d } and it is not hard to see that we use at most2 · d dist G ( u i , v i ) /d e such edges in b G , and• each edge e i by a heavy path in b G consisting of d w ( e i ) /d e edges (recall heavy paths fromDefinition II.3.3).It is not hard to combine the above two insights to derive the Property. We point out that abovewe implicitly use that all vertices on P are in B init . But this is clearly given since we assume u, v ∈ K init and dist G ( u, v ) ≤ D log n while B init includes all vertices in G that are ever atdistance at most 32 D log n to any vertex in K init .Property 3: We prove the contra-positive. Suppose that ( L, S, R ) is not a vertex cut in b G . Thatis, there is an edge ( u, v ) in b G where u ∈ L and v ∈ R . There are two cases. First, if ( u, v ) isin a heavy path b P in b G , then b P must appear in b H as well. Second, if dist G ( u, v ) ≤ d , then, byDefinition II.2.10, there is a hyperedge of a ( d, γ, ∆)-compressed graph H that contains both u and v . Therefore, ( L, S, R ) is not a vertex cut in b H .We now define a powerful potential function to complete our proof. The key notion for ourpotential function is that of a cost of an embedding. In the definition below, it is important toobserve that while we have κ and γ size defined by Algorithm 3, the embedding P can be chosenarbitrary (and in particular does not have to be P from the algorithm). Given this definition it isstraight-forward to set-up our potential function. 
Definition II.3.14 (Cost of an Embedding) . At any point during the execution of Algorithm 3,consider κ and γ size = | b V | / | K init | , and consider any embedding P that embeds some W into b G .Then, we define the cost of the embedding P by c ( P ) = P v ∈ P,P ∈P log( γ size κ ( v )) · val( P ) . Definition II.3.15 (Potential Function) . At any point during the execution of Algorithm 3, let P be a collection of all embeddings P that embed a graph W into b G that satisfies that1. W is an unweighted star where V ( W ) ⊆ K init and | V ( W ) | ≥ (1 − (cid:15) wit / | K init | , and2. diam b G ( V ( W )) ≤ · Dd · log n .We then define the potential function Π( b G, K init , κ ) = min P ∈ P c ( P ) that is equal to the minimalcost achieved by any embedding in P . Here, if P = ∅ , then we let Π( b G, K init , κ ) = ∞ . P ∈ P and P ∈ P above, we have val( P ) = 1 (since W is unweighted).Also note that we do not have any guarantees on vertex congestion or length of the embeddingsfor any P .Let us now analyze the potential function Π( b G, K init , κ ) over the course of the algorithm.
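To make Definition II.3.14 concrete, the following minimal Python sketch computes the cost of an embedding (illustrative only; `embedding`, `kappa` and `gamma_size` are hypothetical containers for 𝒫, κ and γ_size):

```python
import math

def embedding_cost(embedding, kappa, gamma_size):
    # embedding: iterable of (path, value) pairs, where `path` is a list of
    # vertices of the analysis graph G-hat and `value` is val(P).
    # Returns c(P) = sum over paths P and vertices v on P of
    #                log(gamma_size * kappa[v]) * val(P).
    return sum(math.log(gamma_size * kappa[v]) * value
               for path, value in embedding
               for v in path)
```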
Proposition II.3.16. Π(Ĝ, K^init, κ) ≥ 0 and Π(Ĝ, K^init, κ) can only increase through time.

Proof. Π(Ĝ, K^init, κ) ≥ 0 since, for every v ∈ V̂, we have κ(v) ≥ 1/γ_size and so log(γ_size κ(v)) ≥ 0. Since Ĝ is a decremental graph by Property 1 in Proposition II.3.13, we have that Π(Ĝ, K^init, κ) may only increase.
Proposition II.3.17.
For all v ∈ V̂, we have κ(v) ≤ 2|K^init| at any point of time.

Proof. The capacity κ(v) of v can be increased only if a cut (L, S, R) is returned by EmbedWitness(·) with v ∈ S in Line 6. But EmbedWitness(·) guarantees that κ(S) ≤ |K^init| (see the Cut Property in Lemma II.3.5). So once κ(v) > |K^init|, v cannot be in any future S. Since we double κ(v) every time that v appears in S, we can therefore ensure that κ(v) ≤ 2|K^init|.

Lemma II.3.18.
When the invocation of
CertifyCore ( G, K init , D, (cid:15) wit / in Line 4 returns aCore K ⊆ K init , then Π( b G, K init , κ ) = O ( | K init | Dd log n ) .Proof. By Lemma II.3.4, we have that such K satisfies that | K | ≥ (1 − (cid:15) wit / | K init | anddiam G ( K ) ≤ D log n . Using the latter fact, combined with Property 2 from Proposition II.3.13,we have diam b G ( K ) ≤ d diam G ( K ) /d e ≤ · Dd · log n .Using the last fact, with the guarantee on the size of K , we note that picking an arbitraryvertex u ∈ K , and letting P be an embedding containing for each v ∈ K \ { u } , a shortest u - v path P in b G with value val( P ) = 1, we get that P must be in P as defined in Definition II.3.15. Itis further straight-forward to see that c ( P ) = X x ∈ P,P ∈P log( γ size κ ( x )) < | K init | · diam b G ( K ) · log( γ size · | K init | ) = O (cid:18) | K init | Dd log n (cid:19) because there are | K init | − P , each path is of length at most diam b G ( K ), and each vertex x has κ ( x ) bound by Proposition II.3.17. This completes the proof as Π( b G, K init , κ ) ≤ c ( P ). Lemma II.3.19.
Consider when EmbedWitness(Ĥ, K, κ) returns a vertex cut (L, S, R) in Ĥ. Let κ^OLD and κ^NEW be the vertex capacities of V̂ before and after the doubling step in Line 6 and the potential increase of κ(w) in Line 7. Then,
1. κ^NEW(V̂) ≤ κ^OLD(V̂) + 6|L ∩ K|.
2. Π(Ĝ, K^init, κ^NEW) ≥ Π(Ĝ, K^init, κ^OLD) + |L ∩ K|/3.

Proof. Property 1: We have that Line 6 leads to an increase in capacity from κ^OLD(S) to 2κ^OLD(S) at the vertices in S, while the capacity at V̂ \ S remains unchanged. In Line 7, we set the capacity of w to at most the current capacity of S, i.e. at most 2κ^OLD(S). Thus, we have κ^NEW(V̂) ≤ κ^OLD(V̂) + 3κ^OLD(S), where κ^OLD(S) ≤ 2|L ∩ K| by Lemma II.3.5.

Property 2: First, recall that by Property 3 in Proposition II.3.13, the vertex cut (L, S, R) in Ĥ is also a vertex cut in Ĝ. Now, given any embedding 𝒫 from ℙ (as defined in Definition II.3.15) that embeds W into Ĝ, we define L' = L ∩ K ∩ V(W) and analogously R' = R ∩ K ∩ V(W). Further, let v be the center of the star W; then
• if v ∈ L: there are at least |R'| paths in 𝒫 from v to R' (in Ĝ). But by definition of the vertex cut (L, S, R) in Ĝ, each of these paths must contain at least one vertex in S.
• if v ∉ L: then v ∈ S ∪ R, but this implies that there are at least |L'| paths in 𝒫 from v to L', each thus containing at least one vertex in S.
As we double the capacity of every vertex in S, and 𝒫 ∈ ℙ is chosen arbitrarily, we have thus proven that Π(Ĝ, K^init, κ) is increased by at least min{|R'|, |L'|}. Thus, if we can lower bound |R'| and |L'| by |L ∩ K|/3, then the property is established.

Therefore, we note that by the guarantee of EmbedWitness(Ĥ, K, κ) from Lemma II.3.5, we have |R ∩ K| ≥ |L ∩ K| ≥ ε_wit|K| = ε_wit|K^init| − o(ε_wit|K^init|), where the latter equality is by the while-loop condition in Line 4. Then, since K ⊆ K^init and at most (ε_wit/2)|K^init| vertices in K^init are not in V(W) (by Definition II.3.15), we further obtain that |L'| = |L ∩ K ∩ V(W)| ≥ |L ∩ K| − (ε_wit/2)|K^init| ≥ |L ∩ K|/3, and analogously |R'| = |R ∩ K ∩ V(W)| ≥ |L ∩ K|/3. The property is thus established.

Now, we are ready to give the upper bound on the final total vertex capacity κ^final(V̂) of Ĥ as claimed in Lemma II.3.7.

Lemma II.3.20.
At any point of time, we have κ^final(V̂) ≤ O(|K^init| · (D/d) · log²(n)).

Proof. Throughout the algorithm, κ(V̂) is only changed in Line 6 of the algorithm after an invocation of EmbedWitness(·) returns a vertex cut (L, S, R) in Ĥ. But by Lemma II.3.19, every time κ(V̂) is increased by an amount x, the potential Π(Ĝ, K^init, κ) is increased by at least x/18. But Π(Ĝ, K^init, κ) is initially non-negative (see Proposition II.3.16) and never exceeds O(|K^init| · (D/d) · log²(n)) (by Lemma II.3.18). Hence the total increase of κ(V̂) is also bounded by O(|K^init| · (D/d) · log²(n)); combined with the initial capacity κ(V̂) ≤ |K^init| + |V̂| · 1/γ_size = O(|K^init|) (see Line 3), this establishes the Lemma.

II.4 Implementing Covering
Building on the previous two data structures (for Approximate Balls and Robust Cores), we are nowready to give our implementation of a Covering data structure. We recall from Definition II.2.6that a ( d, k, (cid:15), str , ∆)-covering C is a dynamic collection of cores C ∈ C where each core C is aRobust Core such that C = RobustCore ( G, C init , d ‘ ) where ‘ ∈ [0 , k −
1] is the level assigned to C (we also write ‘ core ( C ) = ‘ ) and where d ‘ = d · ( str (cid:15) ) ‘ . Observe that this implies that we alwayshave d ≤ d ‘ core ( C ) ≤ d (str /(cid:15) ) k − for any C ∈ C .For intuition, the reader should keep in mind that we intend to use the Theorem below for k ∼ log log( n ) and str such that ( str (cid:15) ) k = n o (1) . Theorem II.4.1 (Covering) . Let G be an n -vertex bounded-degree decremental graph. Given pa-rameters d, k, (cid:15), str , δ scatter where (cid:15) ≤ . , and • for all d ≤ d ≤ d ( str (cid:15) ) k , there is a approximate ball data structure ApxBall ( G, S, d , (cid:15) ) withtotal update time T ApxBall ( G, S, d , (cid:15) ) , and • for all d ≤ d ≤ d ( str (cid:15) ) k − , there is a robust core data structure RobustCore ( G, K init , d ) with scattering parameter at least δ scatter and stretch at most str that has total update time T RobustCore ( G, K init , d ) .We can maintain ( d, k, (cid:15), str , ∆) -covering of G with ∆ = O ( kn /k /δ scatter ) in total update time O ( kn /k log( n ) /δ scatter + X C ∈C ALL T RobustCore ( G ( t C ) , C ( t C ) , d ‘ core ( C ) )+ T ApxBall ( G ( t C ) , C ( t C ) , str4 (cid:15) d ‘ core ( C ) , (cid:15) ))37 here C ALL contains all cores that have ever been initialized and, for each C ∈ C ALL , t C is thetime C is initialized and added to C . We guarantee that P C ∈C ALL | ball G ( tC ) ( C ( t C ) , str4 (cid:15) d ‘ core ( C ) ) | ≤ O ( kn /k /δ scatter ) . Algorithm 4:
Covering(G, d, k, ε, str)
/* While there exists a vertex v ∈ V not covered by any core in C. */
1  while ∃v, ∀C ∈ C: v ∉ cover(C) do
2      Let ℓ be the smallest integer with |ball_G(v, d_{ℓ+1})| ≤ n^{(ℓ+1)/k}.
3      C^init ← ball_G(v, d_ℓ).
4      Maintain core set C = RobustCore(G, C^init, d_ℓ), cover(C) = ApxBall(G, C, d_ℓ, ε), and shell(C) = ApxBall(G, C, (str/(4ε)) d_ℓ, ε).
5      ℓ_core(C) = ℓ. Add core C to covering C.
6      When C becomes equal to ∅ (because RobustCore(·) terminates), remove C from C and stop maintaining cover(C) and shell(C).

The algorithm for maintaining the covering is described in Algorithm 4. It is rather straightforward: whenever there is a vertex v that is not covered by any core in C, then we make v (together with some vertices in the ball centered around v, up to some carefully chosen radius) a core C^init itself.
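The following Python sketch illustrates how a new core is initialized in Lines 2–4 (a sketch under assumptions, not the paper's implementation; `ball`, `ball_size`, `make_robust_core` and `make_apx_ball` are hypothetical stand-ins for ball_G(·,·), |ball_G(·,·)|, RobustCore and ApxBall):

```python
def init_core_for(v, G, d, k, n, eps, s):
    # Illustrative sketch of Lines 2-4 of Algorithm 4. `s` plays the role of
    # "str", and d_l = d * (s/eps)**l as in Definition II.2.6.
    def radius(level):
        return d * (s / eps) ** level

    level = 0                                        # Line 2: smallest feasible level
    while ball_size(G, v, radius(level + 1)) > n ** ((level + 1) / k):
        level += 1                                   # ends with level <= k-1 (Prop. II.4.2)
    core_init = ball(G, v, radius(level))            # Line 3
    core = make_robust_core(G, core_init, radius(level))              # Line 4
    cover = make_apx_ball(G, core, radius(level), eps)
    shell = make_apx_ball(G, core, (s / (4 * eps)) * radius(level), eps)
    return level, core, cover, shell
```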
We first describe the basic guarantee of Algorithm 4.

Proposition II.4.2. We have the following:
1. The level ℓ assigned to each core C is between 0 and k − 1.
2. Every vertex is covered by some core.
3. At any stage, all cores C from the same level are vertex disjoint.

Proof. (1): Otherwise, there is a vertex v such that |ball_G(v, d_k)| > n, which is impossible. (2): This follows directly from Algorithm 4. (3): Since every core C is a decremental set by the guarantee of RobustCore, it is enough to show that whenever a core C = ball_G(v, d_ℓ) is initialized with level ℓ, C is disjoint from every other core C' with level ℓ. This holds because v is not covered by any level-ℓ core C' and so dist_G(C', v) > d_ℓ. So ball_G(v, d_ℓ) is disjoint from all level-ℓ cores C'.

Therefore, to show that a (d, k, ε, str, ∆)-covering C of G is maintained, it remains to bound ∆, i.e., the number of outer-shells each vertex v can ever participate in. To do this, we first prove an intermediate step that bounds the number of cores a vertex can participate in.

Lemma II.4.3.
For each level ℓ, each vertex v can ever participate in at most O(n^{1/k}/δ_scatter) many level-ℓ cores.

Proof. We prove the lemma by charging the number of vertices in B_v = ball_G(v, 2d_ℓ).

We first observe that initially, i.e. at the first time that v is added to a level-ℓ core, we have that |B_v| ≤ n^{(ℓ+1)/k}. This follows since, when v is first added to some C^init = ball_G(u, d_ℓ) in Line 3 of Algorithm 4, the ball of the core is centered at some vertex u. But we clearly have ball_G(u, d_ℓ) ⊆ B_v. On the other hand, the algorithm ensures by the choice of ℓ in Line 2 that |ball_G(u, d_{ℓ+1})| ≤ n^{(ℓ+1)/k}, but we also have that B_v ⊆ ball_G(u, d_{ℓ+1}), which establishes the claim.

Next, recall from Proposition II.4.2 that all level-ℓ cores are vertex disjoint. Thus, the cores C_1, C_2, …, C_τ that v participates in over time have the property that each core C_{i+1} is initialized only after v has left core C_i. Now consider some core C_i that was initialized to C_i^init = ball_G(u, d_ℓ), i.e. the ball centered at some u (as discussed above). Observe that |ball_G(u, d_ℓ)| > n^{ℓ/k} by the minimality of ℓ (see again Line 2).

But we have that C_i^init was contained in B_v when v was added to C_i. Further, when v leaves C_i, we have by the definition of RobustCore (see in particular the Scattered Property in Definition II.2.5, and the parameters used in Line 4) that only |B_v ∩ C_i^init| ≤ (1 − δ_scatter)|C_i^init| of the vertices in C_i^init are still in B_v. Combined, this implies that Ω(n^{ℓ/k} δ_scatter) vertices leave B_v between v joining C_i and C_{i+1}, for every i.

Using that initially |B_v| ≤ n^{(ℓ+1)/k}, we derive that the number of level-ℓ cores that v can participate in is
τ ≤ n^{(ℓ+1)/k} / Ω(n^{ℓ/k} δ_scatter) = O(n^{1/k}/δ_scatter).
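For concreteness, instantiating the displayed bound with, say, k = 4 and ℓ = 1 (purely illustrative numbers):
\[
\tau \;\le\; \frac{n^{(\ell+1)/k}}{\Omega\!\bigl(n^{\ell/k}\,\delta_{\mathrm{scatter}}\bigr)}
\;=\; \frac{n^{1/2}}{\Omega\!\bigl(n^{1/4}\,\delta_{\mathrm{scatter}}\bigr)}
\;=\; O\!\left(\frac{n^{1/4}}{\delta_{\mathrm{scatter}}}\right)
\;=\; O\!\left(\frac{n^{1/k}}{\delta_{\mathrm{scatter}}}\right).
\]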
Now, we are ready to prove that ∆ = O(kn^{2/k}/δ_scatter).

Lemma II.4.4. For each level ℓ, each vertex v can ever participate in at most O(n^{2/k}/δ_scatter) many outer-shells of level-ℓ cores. Thus, over all levels, v can participate in at most O(kn^{2/k}/δ_scatter) many outer-shells.

Proof. We again use an argument where we charge B_v for a specific level ℓ. However, this time we let the radius of B_v be twice the radius of a shell at level ℓ (and also larger by a small fraction than that of an outer-shell), i.e. we define B_v = ball_G(v, (str/(2ε)) d_ℓ).

Let C_1, C_2, …, C_τ be the cores that have v in their outer-shell (let them be ordered increasingly by their initialization time). Since each core C_i is decremental, if v is ever in the outer-shell shell(C_i), then it is also in the outer-shell of C_i upon C_i's initialization, i.e. then v ∈ shell(C_i^init).

Note that when v is added to the outer-shell shell(C^init) of C^init = ball_G(u, d_ℓ), then at that stage we also have that |ball_G(u, d_{ℓ+1})| ≤ n^{(ℓ+1)/k} (by Line 2). But this implies that |B_v| ≤ n^{(ℓ+1)/k}, since B_v can only include vertices at distance at most dist_G(v, u) + (str/(2ε)) d_ℓ ≤ d_{ℓ+1} from u.

We now use a slightly more advanced charging scheme than in Lemma II.4.3. To this end, consider the process where we, for every C_i, charge every vertex w ∈ C_i^init a single credit. We note first that, by our analysis above, there are at most n^{(ℓ+1)/k} vertices that can ever pay a credit, since cores that are not fully contained in B_v when C_i is initialized cannot have v in their outer-shell (this follows by a straightforward application of the triangle inequality). Further, each vertex w is in at most O(n^{1/k}/δ_scatter) level-ℓ cores by Lemma II.4.3. This bounds the total number of available credits by O(n^{(ℓ+2)/k}/δ_scatter).

But on the other hand, each core C_i at level ℓ has an initial set C_i^init of size at least n^{ℓ/k} by minimality of ℓ in Line 2 when C_i is initialized. But then each such core charges at least n^{ℓ/k} credits in the above scheme. The bound follows.

Finally, we finish with the running time analysis.

Lemma II.4.5. The total update time of Algorithm 4 is at most
O(kn^{1+2/k} log n/δ_scatter + Σ_{C ∈ C^ALL} [T_RobustCore(G^{(t_C)}, C^{(t_C)}, d_{ℓ_core(C)}) + T_ApxBall(G^{(t_C)}, C^{(t_C)}, (str/(4ε)) d_{ℓ_core(C)}, ε)])
where C^ALL contains all cores that have ever been initialized and, for each core C ∈ C^ALL, t_C is the time C is initialized. We guarantee that Σ_{C ∈ C^ALL} |ball_{G^{(t_C)}}(C^{(t_C)}, (str/(4ε)) d_{ℓ_core(C)})| ≤ O(kn^{1+2/k}/δ_scatter).

Proof.
To implement Algorithm 4, for each vertex v , we will maintain the lists core v = { C | v ∈ C } , cover v = { C | v ∈ cover ( C ) } , and shell v = { C | v ∈ shell ( C ) } . As all cores C and their covers and shells are maintained explicitly by RobustCore and
ApxBall , the time formaintaining these lists are subsumed by the total update time of
RobustCore and
ApxBall .Given an edge update ( u, v ), we only need to generate the update ( u, v ) to all data structures
RobustCore and
ApxBall on the cores C where C ∈ shell u ∪ shell v . By Lemma II.4.4, thetotal number of generated updates is at most O ( kn /k /δ scatter ).From the collection of lists { cover v } v ∈ V , we can report whenever there is a vertex v which isnot covered by any core.Suppose that at time t there is such a vertex v and we initialize a core C with level ‘ . InLine 3, starting from ‘ = 0, we compute ball G ( t ) ( v C , d ‘ +1 ) by running Dijkstra, and as long as | ball G ( t ) ( v C , d ‘ +1 ) | > n ‘/k , we set ‘ ← ‘ +1 and continue the Dijkstra’s algorithm. The total runningtime is O ( | ball G ( t ) ( v C , d ‘ +1 ) | log n ) = O ( | C init | n /k log n ). In Line 4, RobustCore is initializedfor maintaining C using T RobustCore ( G ( t ) , C init , d ‘ ) total update time. In Line 4, ApxBall isinitialized for maintaining cover ( C ) and shell ( C ) using at most 2 · T ApxBall ( G ( t ) , C init , str4 (cid:15) d ‘ , (cid:15) )total update time. We assign t C ← t for this core C . Note that C init = C ( t C ) . Therefore, the totalupdate time is can be written as O ( X C ∈C ALL [ | C ( t C ) | n /k log( n ) /δ scatter + T RobustCore ( G ( t C ) , C ( t C ) , d ‘ core ( C ) )+ T ApxBall ( G ( t C ) , C ( t C ) , str4 (cid:15) d ‘ core ( C ) , (cid:15) )]) . By Lemma II.4.3, we have that P C ∈C ALL | C ( t C ) | = O ( kn /k /δ scatter ). Also, by Lemma II.4.4, wehave X C ∈C ALL | ball G ( tC ) ( C ( t C ) , str4 (cid:15) d ‘ core ( C ) ) | ≤ O ( kn /k /δ scatter )because ball G ( tC ) ( C ( t C ) , str4 (cid:15) d ‘ core ( C ) ) ⊆ shell ( C ( t C ) ) and shell ( C ) is decremental. II.5 Implementing Approximate Balls
In this section, we derive the
ApxBall data structure. Here, we use standard techniques from theliterature with small adaptions to deal with our compressed graphs.
Theorem II.5.1 (Approximate Ball) . Let G be an n -vertex bounded-degree decremental graph.Let (cid:15) ≤ . . Suppose that a ( d, k, (cid:15), str , ∆) -covering C of G is explicitly maintained for us. We canimplement an approximate ball data structure ApxBall ( G, S, D, (cid:15) ) using ˜ O ( | ball G ( S, D ) | ∆ D(cid:15)d ) + T ApxBall ( G, S, str (cid:15) ) k d, (cid:15) ) total update time. ntuition for Theorem II.5.1. Let us offer some intuition on the theorem above and thebounds derived. Consider the statement of the classic ES-trees (with weight rounding): Givena decremental graph G with minimum weight λ , decremental set S and depth Λ, we can maintain ApxBall ( G , S, Λ , (cid:15) ) in time ˜ O ( | E (ball G ( s, Λ)) | · Λ (cid:15)λ ).Now, assume for the sake of simplicity that G is unweighted and that the covering-compressedgraph H C = ( V ∪ C , E ) is a decremental graph (i.e. that no new core needs to be added to thecovering throughout the entire update sequence). Then, consider running the ES-tree from S onthe graph H C and run it to depth D . It is not hard to see that this ES-tree runs on the edge set E (ball H C ( S, D )) which is of size at most | ball H C ( S, D ) | ∆ since each vertex in H C is incident to atmost ∆ edges in the entire update sequence.To reduce the run-time by a factor of d , we increase all edge weights to be at least d . To boundthe total additive error introduced by this rounding, we observe that given any vertex t we cantake the following S − t path in H C : π ( S, t ) = h v = S, v , . . . , v k , v k +1 = t i in H C where v i is the( i · d ) th vertex on π G ( S, t ) (except for v k ) – here, π G ( S, t ) is the shortest S − t path in G . That is,every v i and v i +1 are at distance exactly d (except for i = k where the distance is smaller). All butthe last edge on this path already has weight d , so increasing edge weights to d has no effect. Thelast edge might incur an additive error of d , but as long as the distance from S to t is at least d/(cid:15) ,this additive can be subsumed in a multiplicative (1 + O ( (cid:15) )) error.We conclude that we can run the ES-tree above in running time O ( | ball G ( S, D ) | ∆ D(cid:15)d ). Thisapproach would in fact also work if G was weighted, if we additionally add the edges from G ofweight ≥ d to H C . The reason we need these heavy edges is that a path π ( S, t ) in G might havea large weight edge ( u, v ) on the path (with edge weight (cid:29) d ) and H C would not guarantee thatthere is even a path in H C from u to v . But instead the ES-tree could directly pick such a largeedge from G and include it on its path.There are two main obstacles to the above approach. The primary obstacle is that H C is fully-dynamic and not decremental because new cores can be inserted. Intuitively, however, the insertionsin H C have low impact because H C models the decremental graph G . In an earlier paper, Forster,Henzinger, and Nanongkai [HKN14a] showed how to extend an ES tree to work in graphs withlow-impact insertions; their technique is called a monotone ES-tree (MES). We note that the MEStree is not a black-box technique: it is a general framework which has to be individually adaptedto every particular graph. Most of this section is thus dedicated to proving that the MES treeworks on our emulator with low-impact insertions; while this proof is quite technical, conceptuallyit follows the same framework as other MES proofs (see e.g. 
[HKN14a, BC16, GWN20]).The second obstacle is that the argument above incurs an additive error of d , so it only guar-antees a good approximation when dist( S, t ) > d/(cid:15) . For smaller distances, we run ApxBall on asmaller distance scale, which is the source of the additional T ApxBall ( G, S, str (cid:15) ) k d, (cid:15) ) term in thetheorem statement. In the final section of this part (Section II.6), we use an inductive argument toargue that T ApxBall ( G, S, str (cid:15) ) k d, (cid:15) ) is small, and so the running time of ApxBall ( G, S, D, (cid:15) )is in fact dominated by the first term ˜ O ( | ball G ( S, D ) | ∆ D(cid:15)d ). II.5.1 Emulator
Recall the covering-compressed graph H C = ( V ∪C , E ) of the covering C defined in Definition II.2.8. As C is explicitly maintained for us, we will assume that H C is explicitly maintained for us as wellby Remark II.2.9. Definition II.5.2 (Emulator e H ) . Given a decremental graph G = ( V, E ) , a decremental set ofvertices S ⊆ V , depth parameters d ≤ D and approximation parameter / polylog( n ) ≤ (cid:15) < , anda covering-compressed graph H C = ( V ∪ C , E ) of G of the covering C . e define the (static) vertex set V init = ball G (0) ( S (0) , D ) . We can define the emulator e H withweight function e w where its edge set ˜ E = E ( e H ) consists of the following1. the edges e that are incident to V init in the graph H C .2. the edges e ∈ E ( G [ V init ]) where d < w G ( e ) ≤ D , and we set e w ( e ) = d w G ( e ) e (cid:15)d , and3. we maintain B S = ApxBall ( G, S, str (cid:15) ) k d, (cid:15) ) and for each vertex v ∈ ( B S ∩ V init ) , we havean edge ( s, v ) between a universal dummy vertex s and v of weight e w ( s, v ) = l e d near ( v ) m (cid:15)d where e d near ( v ) denotes the distance estimate maintained by ApxBall ( G, S, str (cid:15) ) k d, (cid:15) ) .The vertex set of e H , denoted e V = V ( e H ) , is the union of V init and the set of all endpoints of ˜ E . Here, a more explicit way of defining the vertex set of e H is to consider the cores in C thatsome vertex of V init in their shell (at any point), formally the collection C refined = { C ∈ C | shell ( C init ) ∩ V init = ∅} . Then, e V can be defined as the union of V init ∪ { s } ∪ C refined . Notethat as C is a fully-dynamic set, so is C refined and therefore e V . However, since we are inducing overedges, we only add or remove vertices of degree zero.We henceforth call the vertices in V init , the regular vertices. We call the vertices in C refined ,the core vertices. Proposition II.5.3.
We have the following:1. Regular vertices in e H have all-time degree at most ∆ + O (1) .2. Core vertices in e H form an independent set.Proof. (1): Each regular vertex u is ever incident to at most ∆ core vertices by Definition II.2.6. As G has bounded degree and is decremental, u is ever incident to at most O (1) other regular vertices.Also, S is decremental and u might be incident to s only once. In total, the all-time degree of u is∆ + O (1).(2): As the covering-compressed graph H C is bipartite, core vertices are independent in H C . Aswe never add edges between core vertices in e H , they are independent in e H as well.For each edge e ∈ E ( e H ), we let e w ( e ) denote the weight of e in e H . If ( u, v ) / ∈ E ( e H ), we let e w ( u, v ) ← ∞ . In particular, deleting an edge e in e H is to increase the weight e w ( e ) to infinity. Proposition II.5.4.
For every edge e ∈ E ( e H ) , we have the following:1. e w ( e ) is a non-negative multiple of d (cid:15)d e .2. e w ( e ) = 0 if and only if e = ( s, v ) where v ∈ S .3. e w ( e ) can only increase after e is inserted into e H .Proof. (1,2): This follows directly from the construction of e H .(3): We insert edges into e H only when there is a new core C added into the covering C (recallthat edges in G do not undergo edge weight changes by the Proposition II.1.2). For each edge e =( v, C ) ∈ E ( H C ) where v ∈ shell ( C ), we have that w ( e ) = l str · d ‘ core ( C ) + e d C ( v ) m (cid:15)d where e d C ( v )is the distance estimate of dist G ( C, v ) from the instance of
ApxBall that maintains shell ( C ).By the guarantee of ApxBall , e d C ( v ) never decreases and hence w ( e ) never decreases.Let E ALL ( e H ) be the set of all edges ever appear in e H . Lemma II.5.5. | E ALL ( e H ) | ≤ O ( | ball G ( S, D ) | ∆) . Moreover, the total number of edge updates(including insertions, deletions, and weight increase) in e H is at most O ( | ball G ( S, D ) | ∆ D(cid:15)d ) . roof. The bound on | E ALL ( e H ) | follows directly from Proposition II.5.3. For each edge, its weightcan be updated at most d D/(cid:15)d e times because (1): every edge weight e is a multiple of (cid:15)d byProposition II.5.4(1), (2): e w ( e ) may only increase after e was inserted by Proposition II.5.4(3), and(3): any edge with weight more than D is removed from e H . Therefore, the total number of edgeupdates is | E ALL ( e H ) | · d D/(cid:15)d e = O ( | ball G ( S, D ) | ∆ D(cid:15)d ). II.5.2 The Algorithm: MES on the Emulator
Our
ApxBall algorithm for Theorem II.5.1 works as follows.
1. Maintain the emulator H̃ with a dummy source s. Let d̃^near(u) be the distance estimate of u maintained by ApxBall(G, S, 2(str/ε)^k d, ε) as described in Definition II.5.2(3).
2. Maintain the Monotone Even-Shiloach (MES) data structure MES(H̃, s, D) (see Algorithm 5), which maintains the distance estimates {d̃(u)}_{u ∈ Ṽ}. After each edge deletion to G, there can be several edge updates to H̃. We feed all edge insertions to the MES data structure before any other update generated at this time.
3. For each regular vertex u ∈ V ∩ Ṽ, we maintain {min{d̃(u), d̃^near(u)}}_{u ∈ V ∩ Ṽ} as the distance estimates for our ApxBall data structure.
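The following Python sketch shows how these three steps fit together (illustrative only; `emulator`, `near_ball` and `mes` are hypothetical objects standing in for the emulator H̃ of Definition II.5.2, ApxBall(G, S, 2(str/ε)^k d, ε), and MES(H̃, s, D), respectively):

```python
class ApxBallViaMES:
    # Illustrative sketch of the three steps above, not the paper's code.
    def __init__(self, emulator, near_ball, mes):
        self.emulator = emulator
        self.near_ball = near_ball
        self.mes = mes

    def delete_edge(self, u, v):
        self.near_ball.delete_edge(u, v)
        # One deletion in G may trigger several updates to H~ (Lemma II.5.5);
        # Step 2: feed all insertions before any deletion / weight increase.
        updates = self.emulator.delete_edge(u, v)
        updates.sort(key=lambda upd: 0 if upd.kind == "insert" else 1)
        for upd in updates:
            self.mes.apply(upd)

    def estimate(self, u):
        # Step 3: report the minimum of the two maintained estimates.
        return min(self.mes.estimate(u), self.near_ball.estimate(u))
```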
Algorithm 5:
MES ( e H, s, D ) Procedure
Init ( e H ) foreach u ∈ e V do e d ( u ) ← dist e H ( s, u ). Procedure
WeightIncrease ( e H, ( u, v )) UpdateLevel ( u ) and UpdateLevel ( v ). Procedure
UpdateLevel ( u ) if min v { e d ( v ) + e w ( v, u ) } > e d ( u ) then e d ( u ) ← min v { e d ( v ) + e w ( v, u ) } . if e d ( u ) > D then e d ( u ) ← ∞ . UpdateLevel ( v ) for all neighbors v of u .For every vertex u ∈ e V \ { s } , we let arg min v { e d ( v ) + e w ( v, u ) } be u ’s parent . The set of edgesbetween parents and children form a tree e T rooted at s is called the MES tree . In the analysisbelow, we do not need not the tree e T itself. However, the tree e T will be used later for our datastructure that can report a path in Section III.3. II.5.3 Analysis of MES
In this section, we analyze the running time of Algorithm 5 and the accuracy of the estimates {d̃(u)}_u maintained by the MES data structure. Although the analysis is quite technical, it follows the same template as previous works that employ the MES data structure (e.g. [HKN14a, HKNS15, BC16, BC17, Ber17, GWN20]). We note that, if there is no insertion, the described algorithm is equivalent to the classic ES-tree algorithm [ES81].

Total Update Time. Using the standard analysis of the classic ES tree, we can bound the total update time.
Lemma II.5.6.
The total update time of
MES(H̃, s, D) is Õ(|ball_G(S, D)| · ∆ · D/(εd)).

Proof. The initialization takes Õ(|E(H̃)|) = Õ(|ball_G(S, D)| · ∆) time by running Dijkstra's algorithm. Each vertex u ∈ Ṽ maintains min_v{d̃(v) + w̃(v, u)} using heaps.

The algorithm calls UpdateLevel because of the direct edge updates to H̃ at most U = O(|ball_G(S, D)| · ∆ · D/(εd)) times by Lemma II.5.5. Each call to UpdateLevel takes only O(1) time for checking the condition. Otherwise, if the algorithm spends more time, then an estimate d̃(u) must increase. Once d̃(u) is increased, we spend additional O(deg_H̃(u) log n) time to update the heaps, and invoke UpdateLevel deg_H̃(u) more times. We charge the cost for updating these heaps and the cost for checking the condition in each call to UpdateLevel to the increase of d̃(u). This charging scheme works because d̃(u) can be increased at most O(D/(εd)) times. Indeed, d̃(u) is a multiple of εd by Proposition II.5.4(1) and we set d̃(u) ← ∞ whenever d̃(u) > D.

Therefore, the algorithm calls UpdateLevel at most U + O(|E(H̃)| log n · D/(εd)) = Õ(|ball_G(S, D)| · ∆ · D/(εd)) times, and the additional time spent when estimates are increased is at most O(|E(H̃)| log n · D/(εd)) = Õ(|ball_G(S, D)| · ∆ · D/(εd)) time. This concludes the claim.
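For intuition, here is a compact Python rendering of the update rule of Algorithm 5 that the charging argument above analyzes (a sketch assuming plain dictionaries for the adjacency lists and weights of H̃, with both orientations of every edge stored; the actual data structure additionally maintains heaps, and an iterative work list should replace the recursion):

```python
import math

INF = math.inf

def update_level(u, d, w, neighbors, D):
    # d: current estimates; w: emulator edge weights (w[(v, u)] stored for
    # both orientations); neighbors: adjacency lists of H~; D: depth bound.
    best = min((d[v] + w[(v, u)] for v in neighbors[u]), default=INF)
    if best > d[u]:                    # monotone: an estimate never decreases
        d[u] = INF if best > D else best
        for v in neighbors[u]:         # a raised estimate may affect neighbors
            update_level(v, d, w, neighbors, D)
```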
Dynamics of Distance Estimates
In this section, we show basic properties of the distance estimates {d̃(u)}_{u ∈ Ṽ} maintained by the MES data structure. The analysis is generic and so we hope that it might be useful for future uses of the MES data structure. We only need that, at each time, all insertions to H̃ are handled before other updates. The notion of a stretched vertex will be useful here and for proving the accuracy of the estimates later.

Definition II.5.7 (Stretched Vertices). For any u ∈ Ṽ \ {s}, we say that u is stretched if d̃(u) > min_v{d̃(v) + w̃(v, u)}. If u is stretched, every edge (v, u) where d̃(u) > d̃(v) + w̃(v, u) is stretched.

Each edge deletion in G generates several updates to H̃. We use the phrase "after time t" to refer to the time when the algorithm finishes processing the t-th edge deletion to G and all other updates to H̃ generated by that deletion. Let d̃_t(u) denote the distance estimate d̃(u) after time t. Similarly, let w̃_t(e) denote the weight w̃(e) after time t.

The intuition of Lemma II.5.8 below is that the estimates of non-stretched vertices "behave" like distances, i.e. d̃_t(u) = min_v{d̃_t(v) + w̃_t(v, u)}. For stretched vertices, although this is not true, their estimates do not increase, which will be helpful for proving that we never overestimate the distances.

Lemma II.5.8.
For each vertex u ∈ e V \ { s } , we have the following:1. e d ( u ) = min v { e d ( v ) + e w ( v, u ) } .2. e d ( u ) only increases through time.3. e d t ( u ) ≥ min v { e d t ( v ) + e w t ( v, u ) } .4. If u is not stretched after time t , then e d t ( u ) = min v { e d t ( v ) + e w t ( v, u ) } .5. If u is stretched after time t and min v { e d t ( v ) + e w t ( v, u ) } ≤ D , then e d t ( u ) = e d t − ( u ) . roof. (1): At the initialization, we set e d ( u ) = dist e H ( s, u ) for all u ∈ e V . As dist e H ( s, u ) =min v { dist e H ( s, v ) + e w ( v, u ) } , so e d ( u ) = min v { e d ( v ) + e w ( v, u ) } after time 0.(2): e d ( u ) is updated only through UpdateLevel , which only increases e d ( u ).(3): We say that u is loose if e d ( u ) < min v { e d ( v ) + e w ( v, u ) } . Initially, no vertex is loose by (1).At any moment, u has a chance of being loose only if, for some neighbor v of u , e d ( v ) or e w ( v, u ) isincreased. If this event happens, then UpdateLevel ( u ) is called by Lines 4 and 9 of Algorithm 5.If u is indeed loose, then we set e d ( u ) ← min v { e d ( v ) + e w ( v, u ) } which makes u not loose. Therefore,no vertex is loose after time t , which implies the claim.(4): We have e d t ( u ) ≤ min v { e d t ( v ) + e w t ( v, u ) } as u is not stretched after time t . By combiningwith (3), we are done.(5): Let (¯ v, u ) be the stretched edge after time t , i.e. e d t ( u ) > e d t (¯ v ) + e w t ( v , u ). Suppose forcontradiction e d ( u ) increases when the t -th edge deletion is processed. Consider the last call to UpdateLevel ( u ) that e d ( u ) is increased. Let e d ( · ) and e w ( · ) denote e d ( · ) and e w ( · ) at the momentwhen the algorithm increases e d ( u ), i.e. when we set e d ( u ) = min v { e d ( v ) + e w ( v, u ) } .Note that e d (¯ v ) ≤ e d t (¯ v ) by (2). Also, e w (¯ v, u ) ≤ e w t (¯ v, u ) because, for each time t , the al-gorithm processes all insertions to e H before any other updates to e H and hence before any callto UpdateLevel . The remaining updates to e H may only increase the weight e w ( e ) by Proposi-tion II.5.4(3). So e d (¯ v ) + e w (¯ v, u ) ≤ e d t ( v ) + w t ( v, u ) ≤ D . Hence, e d ( u ) ≤ D and e d ( u ) not setto ∞ . So we have e d ( u ) ≤ e d t ( v ) + e w t ( v, u ). As this last moment e d ( u ) is increased when the t -thupdate is processed, we have e d t ( u ) = e d ( u ) ≤ e d t ( v ) + e w t ( v, u ), which contradicts the fact that ( v , u )is stretched. Lower Bounds of Estimates
In this section, we show that the estimates { e d ( u ) } u ∈ e V are lower bounded by distances in G . Wewill prove by induction. The proposition below handles the base case. Proposition II.5.9.
For any t , e d t ( u ) = 0 if and only if u ∈ S ( t ) .Proof. By Proposition II.5.4(2), we have u ∈ S (0) iff e d ( u ) = dist e H ( s, u ) = 0. Note that S is adecremental set. As long as u ∈ S , e d ( u ) never increases otherwise 0 = e d ( s ) + e w ( s, u ) > e d ( u ) at somepoint of time, which is impossible as e d ( u ) never decreases by Lemma II.5.8(2). Whenever u leaves S (i.e. ( s, u ) is deleted from e H ), then UpdateLevel ( u ) is called. As all edges incident of e H to u have positive weight, e d ( u ) will be increased and e d ( u ) > u simply by applying induction hypoth-esis on the parent of u in the MES tree. We need to lower bound the estimate of core vertices aswell (although we do not need them at the end) so that the induction hypothesis is strong enough. Lemma II.5.10.
For each vertex u ∈ e V \ { s } , after time t , we have the following:1. If u is a core vertex corresponding to a core C , then e d t ( u ) ≥ dist G ( t ) ( S ( t ) , C ( t ) ) .2. If u is a regular vertex, then e d t ( u ) ≥ dist G ( t ) ( S ( t ) , u ) .Proof. We prove by induction on e d t ( u ). The base case where e d t ( u ) = 0 is done by Proposition II.5.9.It remains to consider u ∈ e V \ ( S ( t ) ∪ { s } ) where e d t ( u ) < ∞ . Let v p = arg min v { e d ( v ) + e w ( v, u ) } be the parent of u . We have e d t ( u ) ≥ e d t ( v p ) + e w t ( v p , u ) by Lemma II.5.8(3). As e w t ( v p , u ) > e d t ( v p ) by induction hypothesis.45here are two main cases. If u is a core vertex, then v p is a regular vertex by Proposition II.5.3(2)and since the dummy source s is not incident to core vertices. So we have e d t ( u ) ≥ e d t ( v p ) + e w t ( v p , u ) ≥ dist G ( t ) ( S ( t ) , v p ) + l str · d ‘ core ( C ) + dist G ( C ( t ) , v p ) m (cid:15)d ≥ dist G ( t ) ( S ( t ) , C ( t ) )where the second inequality is by induction hypothesis and by the edge weight of the covering-compressed graph assigned in Definition II.2.8.Now, suppose that u is a regular vertex. We have three more sub-cases because v p can eitherbe a core vertex, a regular vertex, or a dummy source vertex s . If v p is a core vertex correspondingto a core C p , then e d t ( u ) ≥ e d t ( v p ) + e w t ( v p , u ) ≥ dist G ( t ) ( S ( t ) , C ( t ) p ) + l str · d ‘ core ( C p ) + dist G ( C ( t ) p , u ) m (cid:15)d ≥ dist G ( t ) ( S ( t ) , C ( t ) p ) + diam G ( t ) ( C ( t ) p ) + dist G ( C ( t ) p , u ) ≥ dist G ( t ) ( S ( t ) , u ) . where the second inequality follows by the same reason as in the previous case, and diam G ( t ) ( C ( t ) p ) ≤ str · d ‘ core ( C p ) is guaranteed by Definition II.2.6. If v p is a regular vertex, then we have e d t ( u ) ≥ e d t ( v p ) + e w t ( v p , u ) ≥ dist G ( t ) ( S ( t ) , v p ) + d w ( v p , u ) e (cid:15)d ≥ dist G ( S ( t ) , u )where second inequality is by induction hypothesis and e w t ( v p , u ) = d w ( v p , u ) e (cid:15)d by constructionof e H . Lastly, if v p = s , then e d t ( u ) ≥ e d t ( s ) + e w t ( s, u ) = l e d near t ( u ) m (cid:15)d ≥ dist G ( S ( t ) , u ) because e d near t ( u ) ≥ dist G ( S ( t ) , u ) by the guarantee of ApxBall ( G, S, str (cid:15) ) k d, (cid:15) ). Upper Bounds of Estimates
In this section, we show that the estimates { e d ( u ) } u ∈ e V are upper bounded by distances in G withinsmall approximation factor. This section highly exploits the structure of e H described in Defini-tion II.5.2. Lemma II.5.11.
For each vertex u ∈ e V \ { s } , after time t , we have the following:1. If u is a regular vertex where dist G ( t ) ( S ( t ) , u ) ≤ D , then e d t ( u ) ≤ min ( g dist G ( t ) ( S ( t ) , u ) , min ( v,u ) ∈ E ( G ( t ) ) ∩ E ( e H ( t ) ) n g dist G ( t ) ( S ( t ) , v ) + e w t ( v, u ) o) (II.2) where we define g dist G ( t ) ( S ( t ) , v ) = max { l (1 + (cid:15) )dist G ( t ) ( S ( t ) , v ) m (cid:15)d , (1 + 50 (cid:15) )dist G ( t ) ( S ( t ) , v ) } .2. If u is a core vertex corresponding to a core C where ( str (cid:15) ) k d < dist G ( t ) ( S ( t ) , C ( t ) ) ≤ D , then e d t ( u ) ≤ (1 + 50 (cid:15) )dist G ( t ) ( S ( t ) , C ( t ) ) − · d ‘ core ( C ) . (II.3) Proof.
For any time t and any u ∈ e V , we define d t ( u ) = ( dist G ( t ) ( S ( t ) , u ) if u is a regular vertexdist G ( t ) ( S ( t ) , shell ( C ( t ) )) if u is a core vertex corresponds to a core C d t -order refer to an increasing order of vertices in e V according to d t ( u ). If d t ( u ) = d t ( v ) forsome regular vertex u and some core vertex v , we let u precede v in this order. We will prove theclaim by induction on t and then on the d t -order of vertices in e V . Our strategy is to first bound min v { e d t ( v ) + e w t ( v, u ) } instead of e d t ( u ). More formally, we willshow that for regular vertices u where dist G ( t ) ( S ( t ) , u ) ≤ D ,min v { e d t ( v ) + e w t ( v, u ) } ≤ min ( g dist G ( t ) ( S ( t ) , u ) , min ( v,u ) ∈ E ( G ( t ) ) ∩ E ( e H ( t ) ) n g dist G ( t ) ( S ( t ) , v ) + e w t ( v, u ) o) (II.4)and for core vertices u corresponding to a core C where ( str (cid:15) ) k d < dist G ( t ) ( S ( t ) , C ( t ) ) ≤ D ,min v { e d t ( v ) + e w t ( v, u ) } ≤ (1 + 50 (cid:15) )dist G ( t ) ( S ( t ) , C ( t ) ) − · d ‘ core ( C ) (II.5)Note that, to prove Equation (II.4) and Equation (II.5), we still assume that induction hypothesisholds for e d t ( u ). Then, we will use Equation (II.4) and Equation (II.5) to prove Equation (II.2) andEquation (II.3), respectively. Proving Equation (II.4) for Regular Vertices.
For any t ≥
0, we first show that min v { e d t ( v )+ e w t ( v, u ) } ≤ g dist G ( t ) ( S ( t ) , u ). If dist G ( t ) ( S ( t ) , u ) ≤ str (cid:15) ) k d , then ( s, u ) ∈ E ( e H ) and somin v { e d t ( v ) + e w t ( v, u ) } ≤ e d t ( s ) + e w t ( s, u )= 0 + l e d near t ( u ) m (cid:15)d by construction of e H ≤ l (1 + (cid:15) )dist G ( t ) ( S ( t ) , u ) m (cid:15)d by ApxBall ( G, S,
2( str (cid:15) ) k d, (cid:15) ) ≤ g dist G ( t ) ( S ( t ) , u ) . So from now, we assume that dist G ( t ) ( S ( t ) , u ) > str (cid:15) ) k d . The covering guarantees that there existsa level- ‘ core C where u ∈ cover ( C ) for some ‘ ∈ [0 , k ). Let v C ∈ e V denote the core vertexcorresponding to C . Consider an S ( t ) - u shortest path P = ( v , . . . , v z ) in G where v ∈ S ( t ) and u = v z . There are two sub-cases whether v z − ∈ shell ( C ) or not.1. Suppose that v z − ∈ shell ( C ). Then, we can apply induction hypothesis on v C because d t ( v C ) = dist G ( t ) ( S ( t ) , shell ( C ( t ) )) < dist G ( t ) ( S ( t ) , v z ) = d t ( u ) . and dist G ( t ) ( S ( t ) , C ( t ) ) ≥ dist G ( t ) ( S ( t ) , u ) − dist G ( t ) ( C ( t ) , u ) >
2( str (cid:15) ) k d − d ‘ (1 + (cid:15) ) > ( str (cid:15) ) k d. where dist G ( t ) ( C ( t ) , u ) ≤ d ‘ (1 + (cid:15) ) because u ∈ cover ( C ) and d ‘ ≤ ( str (cid:15) ) k − d . So, af-ter applying induction hypothesis, we have e d t ( v C ) ≤ (1 + 50 (cid:15) )dist G ( t ) ( S ( t ) , C ( t ) ) − · d ‘ . By the definition of the covering-compressed graph from Definition II.2.8, we have e w t ( v C , u ) = l str · d ‘ + e d Ct ( u ) m (cid:15)d where e d Ct ( u ) ≤ (1 + (cid:15) )dist G ( t ) ( C ( t ) , u ) is maintained by47 pxBall ( G, C, str4 (cid:15) d ‘ , (cid:15) ) that maintains shell ( C ). We conclude by thatmin v { e d t ( v ) + e w t ( v, u ) }≤ e d t ( v C ) + e w t ( v C , u ) ≤ (1 + 50 (cid:15) )dist G ( t ) ( S ( t ) , C ( t ) ) − · d ‘ + l str · d ‘ + (1 + (cid:15) )dist G ( t ) ( C ( t ) , u ) m (cid:15)d ≤ (1 + 50 (cid:15) )dist G ( t ) ( S ( t ) , C ( t ) ) + (1 + (cid:15) )dist G ( t ) ( C ( t ) , u ) + (str · d ‘ + (cid:15)d − · d ‘ ) ≤ (1 + 50 (cid:15) )dist G ( t ) ( S ( t ) , u ) ≤ g dist G ( t ) ( S ( t ) , u ) .
2. Suppose that v z − / ∈ shell ( C ). We have e d t ( v z − ) ≤ g dist G ( t ) ( S ( t ) , v z − ) by induction hypothe-sis. Also, note that w t ( v z − , v z ) ≥ dist G ( t ) ( S ( t ) , v z − ) − dist G ( t ) ( S ( t ) , v z ) > str4 (cid:15) d ‘ − d ‘ · (1+ (cid:15) ) >d because v z − / ∈ shell ( C ) but v z ∈ cover ( C ). So, by construction of e H , we have( v z − , v z ) ∈ E ( G ) ∩ E ( e H ) with weight e w t ( v z − , v z ) = d w t ( v z − , v z ) e (cid:15)d ≤ w t ( v z − , v z ) + (cid:15)d < (1 + (cid:15) ) w t ( v z − , v z ) . We conclude by Lemma II.5.8(4) thatmin v { e d t ( v ) + e w t ( v, u ) } ≤ e d t ( v z − ) + e w t ( v z − , v z ) ≤ g dist G ( t ) ( S ( t ) , v z − ) + (1 + (cid:15) ) w t ( v z − , v z ) ≤ max n (1 + (cid:15) )dist G ( t ) ( S ( t ) , u ) + (cid:15)d, (1 + 50 (cid:15) )dist G ( t ) ( S ( t ) , u ) o = (1 + 50 (cid:15) )dist G ( t ) ( S ( t ) , u ) = g dist G ( t ) ( S ( t ) , u ) . where the last line is because dist G ( t ) ( S ( t ) , u ) > str (cid:15) ) k d .In both cases, we have min v { e d t ( v ) + e w t ( v, u ) } ≤ g dist G ( t ) ( S ( t ) , u ) as desired.Lastly, we also need to show thatmin v { e d t ( v ) + e w t ( v, u ) } ≤ min ( v,u ) ∈ E ( G ( t ) ) ∩ E ( e H ( t ) ) n g dist G ( t ) ( S ( t ) , v ) + e w t ( v, u ) o . Consider any ( v, u ) ∈ E ( G ( t ) ) ∩ E ( e H ( t ) ). If dist G ( t ) ( S ( t ) , v ) ≥ dist G ( t ) ( S ( t ) , u ), then, trivially, wehave that min v { e d t ( v ) + e w t ( v, u ) } ≤ g dist G ( t ) ( S ( t ) , u ) ≤ g dist G ( t ) ( S ( t ) , v ) + e w t ( v, u ). Otherwise, ifdist G ( t ) ( S ( t ) , v ) < dist G ( t ) ( S ( t ) , u ), then, by applying induction hypothesis on v , we again havemin v { e d t ( v ) + e w t ( v, u ) } ≤ g dist G ( t ) ( S ( t ) , v ) + e w t ( v, u ). Proving Equation (II.5) for Core Vertices.
Suppose u is a core vertex corresponding to alevel- ‘ core C ∈ C . As shell ( C ) = ApxBall ( G, C, str4 (cid:15) d ‘ , (cid:15) ) and dist G ( t ) ( S ( t ) , C ( t ) ) > ( str (cid:15) ) k d > str4 (cid:15) d ‘ · (1 + (cid:15) ) for all ‘ < k , we have shell ( C ( t ) ) ∩ S ( t ) = ∅ . Consider the S ( t ) - C ( t ) shortest path P = ( v , . . . , v z ) in G where v ∈ S ( t ) \ shell ( C ( t ) ) to v z ∈ C ( t ) . Let i be the first index that v i ∈ shell ( C ( t ) ). Note that i >
1. Note that we can apply induction hypothesis on v i because d t ( v i ) = dist G ( t ) ( S ( t ) , v i ) = dist G ( t ) ( S ( t ) , shell ( C ( t ) )) = d t ( u ) and because v i is a regular vertexand u is a core vertex. There are two cases:1. If w t ( v i − , v i ) < str10 (cid:15) d ‘ , then we havedist G ( t ) ( v i , C ( t ) ) ≥ dist G ( t ) ( C ( t ) , v i − ) − w t ( v i − , v i ) > str4 (cid:15) d ‘ − str10 (cid:15) d ‘ > str8 (cid:15) d ‘ as v i − / ∈ shell ( C ( t ) ) .
48e concludemin v { e d t ( v ) + e w t ( v, u ) }≤ e d t ( v i ) + e w t ( v i , u ) ≤ g dist G ( t ) ( S ( t ) , v i ) + l str · d ‘ + (1 + (cid:15) )dist G ( t ) ( v i , C ( t ) ) m (cid:15)d by IH on v i ≤ l (1 + 50 (cid:15) )dist G ( t ) ( S ( t ) , v i ) m (cid:15)d + l str · d ‘ + (1 + (cid:15) )dist G ( t ) ( v i , C ( t ) ) m (cid:15)d by definition of g dist ≤ (1 + 50 (cid:15) )dist G ( t ) ( S ( t ) , v i ) + (1 + 50 (cid:15) )dist G ( t ) ( v i , C ( t ) ) − (cid:15) dist G ( t ) ( v i , C ( t ) ) + str · d ‘ + 2 (cid:15)d< (1 + 50 (cid:15) )dist G ( t ) ( S ( t ) , C ( t ) ) − (cid:15) · str8 (cid:15) · d ‘ + str · d ‘ + 2 (cid:15)d ≤ (1 + 50 (cid:15) )dist G ( t ) ( S ( t ) , C ( t ) ) − · d ‘
2. If w t ( v i − , v i ) ≥ str10 (cid:15) d ‘ , then ( v i − , v i ) ∈ E ( G ( t ) ) ∩ E ( e H ( t ) ) and e w t ( v i − , v i ) = d w t ( v i − , v i ) e (cid:15)d .We have e d t ( v i ) ≤ g dist G ( t ) ( S ( t ) , v i − ) + e w t ( v i − , v i ) by IH on v i ≤ l (1 + 50 (cid:15) )dist G ( t ) ( S ( t ) , v i − ) m (cid:15)d + d w t ( v i − , v i ) e (cid:15)d by definition of g dist ≤ (1 + 50 (cid:15) )dist G ( t ) ( S ( t ) , v i − ) + (1 + 50 (cid:15) ) w t ( v i − , v i ) − (cid:15) · w t ( v i − , v i ) + 2 (cid:15)d ≤ (1 + 50 (cid:15) )dist G ( t ) ( S ( t ) , v i ) − · d ‘ + 2 (cid:15)d We concludemin v { e d t ( v ) + e w t ( v, u ) }≤ e d t ( v i ) + e w t ( v i , u ) ≤ (1 + 50 (cid:15) )dist G ( t ) ( S ( t ) , v i ) − · d ‘ + 2 (cid:15)d + l str · d ‘ + (1 + (cid:15) )dist G ( t ) ( v i , C ( t ) ) m (cid:15)d ≤ (1 + 50 (cid:15) )dist G ( t ) ( S ( t ) , C ( t ) ) − · d ‘ + 2 (cid:15)d + str · d ‘ + (cid:15)d ≤ (1 + 50 (cid:15) )dist G ( t ) ( S ( t ) , C ( t ) ) − · d ‘ . where the second inequality is by the definition of covering-compressed graph from Defini-tion II.2.8 which says that e w t ( v i , u ) = l str · d ‘ + e d Ct ( v i ) m (cid:15)d where e d Ct ( v i ) ≤ (1+ (cid:15) )dist G ( t ) ( C ( t ) , v i )is maintained by ApxBall ( G, C, str4 (cid:15) d ‘ , (cid:15) ).In both cases, we have shown that min v { e d t ( v ) + e w t ( v, u ) } ≤ (1 + 50 (cid:15) )dist G ( t ) ( S ( t ) , C ( t ) ) − · d ‘ as desired. Bounding e d t ( u ) . If u is not a stretched vertex after time t , then by Lemma II.5.8(Item 4) e d t ( u ) ≤ min v { e d t ( v ) + e w t ( v, u ) } . Therefore, Equation (II.2) and Equation (II.3) follow immediatelyfrom Equation (II.4) and Equation (II.5), respectively.Now, suppose u is a stretched vertex after time t . Equation (II.4) and Equation (II.5) imply thatmin v { e d t ( v ) + e w t ( v, u ) } ≤ D because we assume dist G ( t ) ( S ( t ) , u ) ≤ D if u is a regular vertex anddist G ( t ) ( S ( t ) , C ( t ) ) ≤ D if u is a core vertex. So by Lemma II.5.8(Item 5), we have e d t ( u ) = e d t − ( u ).So, to prove that Equation (II.4) and Equation (II.5) hold at time t , it is enough to prove that theright hand side of both Equation (II.4) and Equation (II.5) do not decrease from time t − t . This49s true for Equation (II.4) because, for every edge e ∈ E ( G ) ∩ E ( e H ), e w ( e ) = d w ( e ) e (cid:15)d and the edgeweight w ( e ) in G never decrease. Also, G and S are decremental and dist G ( S, u ) never decreases,which in turn means that g dist G ( S, u ) never decreases. This is true for Equation (II.5) because C isalso a decremental set, and so dist G ( S, C ) never decreases. This completes the proof.
II.5.4 Proof of Theorem II.5.1
Finally, we conclude the proof of Theorem II.5.1 by showing that all requirements of
ApxBall ( G, S, D, (cid:15) ) are satisfied and then analyzing the running time of the algorithm. Correctness.
Recall that n min { e d near ( u ) , e d ( u ) } o u ∈ e V are the estimates maintained by the algo-rithm. For any vertex u ∈ V \ e V , we implicitly set e d ( u ) ← ∞ and do not spend any time maintainingit. This is justified because e V contains ball G (0) ( S (0) , D ) and ball G ( S, D ) is a decremental set, so u / ∈ ball G ( t ) ( S ( t ) , D ) for any t .Now, for each u ∈ V ∩ e V , by the guarantee of ApxBall ( G, S, str (cid:15) ) k d, (cid:15) ) and Lemma II.5.10,we first have that min { e d near ( u ) , e d ( u ) } ≥ dist G ( t ) ( S ( t ) , u ). Secondly, if u ≤ str (cid:15) ) k d , then e d near ( u ) ≤ (1 + (cid:15) )dist G ( S, u ) by
ApxBall ( G, S, str (cid:15) ) k d, (cid:15) ). Otherwise, if 2( str (cid:15) ) k d < dist G ( S, u ) ≤ D , then e d ( u ) ≤ g dist( S, u ) = (1 + 50 (cid:15) )dist G ( S, u ) by Lemma II.5.11. So we satisfy Property 2 of Def-inition II.2.1. Thirdly, both e d near ( u ) and e d ( u ) never decrease by Lemma II.5.8(2). Therefore, n min { e d near ( u ) , e d ( u ) } o u ∈ V satisfies all three conditions of Definition II.2.1. Total Update Time.
As the underlying graph G and the covering-compressed graph H C areexplicitly maintained for us, we can maintain the emulator e H in time U + T ApxBall ( G, S, str (cid:15) ) k d, (cid:15) )where U denote the total number of edge updates to e H . By Lemma II.5.5, U = O ( | ball G ( S, D ) | ∆ D(cid:15)d ).Next, the total update time for maintaining the MES data structure is ˜ O ( | ball G ( S, D ) | ∆ D(cid:15)d ) byLemma II.5.6. Therefore, in total, our algorithm takes ˜ O ( | ball G ( S, D ) | ∆ D(cid:15)d )+ T ApxBall ( G, S, str (cid:15) ) k d, (cid:15) )time. II.6 Putting Distance-Only Components Together
In this section, we show how all our data structures fit together. The main data structures were
ApxBall (Definition II.2.1),
RobustCore (Definition II.2.5), and Covering(Definition II.2.6).
Theorem II.6.1.
For any n and (cid:15) ∈ ( φ cmg , . , let G = ( V, E ) be a decremental bounded-degreegraph with n vertices and edge weights are from { , , . . . , W = n } . Let S ⊆ V be any decrementalset. We can implement ApxBall ( G, S, (cid:15) ) that has b O ( n ) total update time. There are distance scales D ≤ D ≤ · · · ≤ D ds where D i = ( nW ) i/ ds and ds = c ds lg lg lg n forsome small constant c ds >
0. We will implement our data structures for ds many levels. Recallthat φ cmg = 1 / Θ(log / ) = b Ω(1). For 0 ≤ i ≤ ds , we set k i = (lg lg n ) i γ i = 1 /φ k i +1 cmg and γ − = 1 (cid:15) i = (cid:15)/ ds − i str i = γ i − · log c str n/φ cmg ∆ i = Θ( k i n /k i /φ cmg )50here we let c str be a large constant to be determined later. The parameters are defined in suchthat way that n / ds , D i D i − , n /k i , γ i , /(cid:15) i , str i , ∆ i = b O (1)for all 0 ≤ i ≤ ds . To exploit these parameters, we need more fine-grained properties which aresummarized below: Proposition II.6.2.
For large enough n and for all ≤ i ≤ ds , we have that1. lg lg n ≤ k i ≤ (lg / n ) ,2. φ cmg ≤ (cid:15) i ≤ (cid:15) γ i = 2 O (lg / / n ) ,4. D i /D i − ≤ n / ds ,5. γ i ≤ D i /D i − , and6. γ i ≥ ( γ i − · φ cmg ) k i ≥ ( str i (cid:15) i ) k i .Proof. (1): We have k i = (lg lg n ) i ≤ (lg lg n ) ds ≤ (lg lg n ) (lg lg n ) / ≤ lg / n as ds = c ds lg lg lg n and c ds is a small enough constant.(2): It is clear that (cid:15) i ≤ (cid:15) . For the other direction, note that in the assumption of Theorem II.6.1,we have (cid:15) ≥ φ cmg . So (cid:15) i ≥ (cid:15)/ ds ≥ φ cmg / Θ(lg lg lg n ) ≥ φ cmg because φ cmg = 1 / Θ(lg / n ) .(3): As φ cmg = 2 Θ(lg / n ) and γ i = 1 /φ k i +1 cmg , we have from property Item 1 of this propositionthat γ i = 2 O (lg / / n ) .(4): We have D i /D i − = ( nW ) / ds . Since W = n , we have D i /D i − ≤ n / ds .(5): As D i /D i − ≥ n / ds ≥ Θ(lg n/ lg lg lg n ) , by (3) we have γ i ≤ ( D i /D i − ) when n is largeenough.(8): We have γ i ≥ ( γ i − · φ cmg ) k i because γ i = ( 1 φ cmg ) k i +1 ≥ ( 1 φ cmg ) ( k i + k i ) = (( 1 φ cmg ) k i · φ cmg ) k i = ( γ i − · φ cmg ) k i where the inequality holds is because k i +1 = (lg lg n ) i +1 = (lg lg n ) i · × (lg lg n ) i ≥ (lg lg n ) i · +(lg lg n ) i = k i + k i for all i ≥
0. For the second inequality, we havestr i (cid:15) i = γ i − (log c str n ) /φ cmg (cid:15) i ≤ γ i − φ cmg because (log c str n ) ≤ /φ cmg and (cid:15) i ≥ φ cmg by (2). Therefore, γ i ≥ ( γ i − · φ cmg ) k i ≥ ( str i (cid:15) i ) k i .Before we prove our main Lemma by induction, we recall the Figure from the beginning of thesection to provide the reader with a high-level overview of how components are connected. Lemma II.6.3.
For every ≤ i ≤ ds , we can maintain the following data structures:1. ApxBall ( G, S, d , (cid:15) i ) for any d ≤ d i (cid:44) D i +1 log( n ) /(cid:15) i using total update time of ˜ O ( (cid:12)(cid:12) ball G ( S, d ) (cid:12)(cid:12) n /k +12 / ds φ cmg (cid:15) ) = b O ( (cid:12)(cid:12) ball G ( S, d ) (cid:12)(cid:12) ) . RobustCore ( G, K init , d ) for any d ≤ D i +1 using total update time of (cid:12)(cid:12)(cid:12) ball G ( K init , d log n ) (cid:12)(cid:12)(cid:12) poly( n /k +1 / ds φ cmg (cid:15) ) = b O ( (cid:12)(cid:12)(cid:12) ball G ( K init , d log n ) (cid:12)(cid:12)(cid:12) ) with scattering parameter δ scatter = ˜Ω( φ cmg ) and stretch at most str i .3. ( D i , k i , (cid:15) i , str i , ∆ i ) -covering using total update time of ˜ O ( n · poly( n /k / ds φ cmg (cid:15) )) = b O ( n ) .For all i > , we assume by induction that a ( D i − , k i − , (cid:15) i − , str i − , ∆ i − ) -covering of G is alreadyexplicitly maintained.Proof. (1): We prove by induction on i that T ApxBall ( G, S, d , (cid:15) i ) ≤ | ball G ( S, d ) | · ( i + 1) · n /k / ds φ cmg (cid:15) · (log n ) c for any d ≤ d i where c is some large enough constant. For i = 0, we have by Proposi-tion II.2.3 that T ApxBall ( G, S, d , (cid:15) i ) ≤ O ( (cid:12)(cid:12) ball G ( S, d ) (cid:12)(cid:12) d ) ≤ (cid:12)(cid:12) ball G ( S, d ) (cid:12)(cid:12) · O ( D log( n ) /(cid:15) ) ≤ (cid:12)(cid:12) ball G ( S, d ) (cid:12)(cid:12) · ( i + 1) · n /k +12 / ds φ cmg (cid:15) · (log n ) c . For i >
0, we assume d > d i − (cid:44) D i log( n ) /(cid:15) i − otherwise we are done by induction hypoth-esis. As ( D i − , k i − , (cid:15) i − , str i − , ∆ i − )-covering is already explicitly maintained by the inductionhypothesis, by Theorem II.5.1, we can maintain ApxBall ( G, S, d , (cid:15) i ) where (cid:15) i = 50 (cid:15) i − using totalupdate time of 52 ApxBall ( G, S, d , (cid:15) i ) ≤ ˜ O ( (cid:12)(cid:12) ball G ( S, d ) (cid:12)(cid:12) ∆ i − (32 D i +1 log( n ) /(cid:15) i ) (cid:15) i − D i − )+ T ApxBall
G, S, (cid:18) str i − (cid:15) i − (cid:19) k i − D i − , (cid:15) i − ! . We will show that 2( str i − (cid:15) i − ) k i − D i − ≤ d i − so that we can apply induction hypothesis on T ApxBall ( G, S, str i − (cid:15) i − ) k i − D i − , (cid:15) i − ). To see this, note that D i ≥ γ i D i − ≥ ( str i (cid:15) i ) k i D i − by Propo-sition II.6.2(5,6). So d i − =
32 log(n)/ε_{i-1} · D_i ≥ 32 log(n)/ε_{i-1} · (str_i/ε_i)^{k_i} · D_{i-1} ≥ 2(str_{i-1}/ε_{i-1})^{k_{i-1}} · D_{i-1}, where the last inequality holds because k_i ≥ k_{i-1} and str_i/ε_i ≥ str_{i-1}/ε_{i-1} (because str_i/str_{i-1} ≥
50 = (cid:15) i (cid:15) i − ). Therefore, byProposition II.6.2(4), the bound on T ApxBall ( G, S, d , (cid:15) i ) is at most (cid:12)(cid:12) ball G ( S, d ) (cid:12)(cid:12) n /k i − +12 / ds φ cmg (cid:15) i − · (log n ) c + T ApxBall ( G, S, d i − , (cid:15) i − ) ≤ (cid:12)(cid:12) ball G ( S, d ) (cid:12)(cid:12) n /k i − +12 / ds φ cmg (cid:15) · (log n ) c + i (cid:12)(cid:12) ball G ( S, d i − ) (cid:12)(cid:12) · n /k +12 / ds φ cmg (cid:15) · (log n ) c by IH ≤ ( i + 1) (cid:12)(cid:12) ball G ( S, d ) (cid:12)(cid:12) n /k +12 / ds φ cmg (cid:15) · (log n ) c as d i − < d which completes the inductive step.(2): For i = 0, we have that a (1 , , O (1))-compressed graph of G can be trivially maintainedby Proposition II.2.12. By Theorem II.3.1, we can implement RobustCore ( G, K init , d ) withscattering parameter δ scatter = ˜Ω( φ cmg ) and stretch at most ˜ O (1 /φ cmg ) ≤ str (by definition ofstr ) with total update time˜ O (cid:16) T ApxBall ( G, K init , d log n, . D ) /φ cmg (cid:17) = ˜ O (cid:16) | ball G ( K init , d log n ) | D /φ cmg (cid:17) by Proposition II.2.3.For i >
0, given that a ( D i − , k i − , (cid:15) i − , str i − , ∆ i − )-covering is explicitly maintained, byProposition II.2.11, we can automatically maintain a ( D i − , γ i − , ∆ i − )-compressed graph where γ i − ≥ (str i − /(cid:15) i − ) k i − by Proposition II.6.2(6).By Theorem II.3.1, we can maintain RobustCore ( G, K init , d ) with δ scatter = ˜Ω( φ cmg ) and˜ O ( γ i − /φ cmg ) ≤ str i (by definition of str i ) with total update time˜ O (cid:16) T ApxBall ( G, K init , d log n, . i − ( D i +1 /D i − ) /φ cmg (cid:17) = (cid:12)(cid:12)(cid:12) ball G ( K init , d log n ) (cid:12)(cid:12)(cid:12) poly( n /k +1 / ds φ cmg (cid:15) )by (1).(3): Recall that the algorithm from Theorem II.4.1 for maintaining a ( D i , k i , (cid:15) i , str i , ∆ i )-coveringof G assumes, for all D i ≤ d ≤ D i ( str i (cid:15) i ) k , RobustCore and
ApxBall data structures with inputdistance parameter d . By (1) and (2), we can indeed implement these data structures for anydistance parameter d ≤ D i +1 . Since D i ( str i (cid:15) i ) k i ≤ D i γ i ≤ D i +1 by Proposition II.6.2(5,6), theassumption is satisfied.So, using Theorem II.4.1, we can maintain a ( D i , k i , (cid:15) i , str i , ∆ i )-covering of G with ∆ i =Θ( k i n /k i /δ scatter ) in total update time of O ( k i n /k i log n/δ scatter + X C ∈C ALL T RobustCore ( G ( t C ) , C ( t C ) , d ‘ core ( C ) )+ T ApxBall ( G ( t C ) , C ( t C ) , str i (cid:15) i d ‘ core ( C ) , (cid:15) i ))53here C ALL contains all cores that have ever been initialized and, for each C ∈ C ALL , t C is the time C is initialized. By plugging in the total update time of ApxBall from (1) and
RobustCore from (2), the total update time for maintaining the covering is˜ O ( n /k i δ scatter + X C ∈C ALL (cid:12)(cid:12)(cid:12) ball G ( tC ) ( C ( t C ) , d ‘ core ( C ) log n ) (cid:12)(cid:12)(cid:12) poly( n /k +1 / ds φ cmg (cid:15) )+ (cid:12)(cid:12)(cid:12)(cid:12) ball G ( tC ) ( C ( t C ) , str i (cid:15) i d ‘ core ( C ) ) (cid:12)(cid:12)(cid:12)(cid:12) n /k +12 / ds φ cmg (cid:15) ) . As it is guaranteed by Theorem II.4.1 that X C ∈C ALL | ball G ( tC ) ( C ( t C ) , str i (cid:15) i d ‘ core ( C ) ) | ≤ O ( k i n /k i /δ scatter ) , the above expression simplifies to ˜ O ( n · poly( n /k / ds φ cmg (cid:15) )).By constructing all the data structures from level i = 0 to ds , we can conclude Theorem II.6.1.54 art III: Path-reporting Dynamic ShortestPaths In this part of the paper, we augment the decremental
SSSP data structure from the previous part to support threshold-subpath queries, which return a subset of the edges on a path. To precisely describe the properties of these queries, we introduce the notion of steadiness.

Steadiness and simpleness.
All graphs in this part can be described as follows. A graph G =( V, E, w, σ ) is such that, each edge e has weight w ( e ) and has integral steadiness σ ( e ) ∈ [ σ min , σ max ].We call σ min and σ max the minimum and maximum steadiness of G , respectively. For any multi -set E ⊆ E and j , we let σ ≤ j ( E ) = { e ∈ E | σ ( e ) ≤ j } contain all edges from E of steadiness atmost j . We let σ ≤ j ( G ) = G [ σ ≤ j ( E )] denote the subgraph of G induced by the edge set σ ≤ j ( E ).We define σ ≥ j ( E ) , σ >j ( E ) , σ < ( E ) and σ ≥ j ( G ) , σ >j ( G ) , σ Now, we are ready to define the augmented versionof the decremental SSSP data structure from (Definition II.1.1) that supports threshold-subpathqueries . The outputs of threshold-subpath queries are always of the form σ ≤ j ( P ). We remind thereader of our application of these queries, where we sample "sensitive" edges (i.e. those that arealmost filled by the flow we want to send along the path) more often than "steady" edges. Let usnow state a definition and the main result of this section. Definition III.0.1. A path-reporting decremental SSSP data structure SSSP π ( G, s, (cid:15), β, q ) is adecremental SSSP data structure SSSP ( G, s, (cid:15) ) with the following additional guarantee: • For each vertex v , v is associated with a β -edge-simple s - v path π ( s, v ) in G of length at most (1 + (cid:15) )dist G ( s, v ) . We say that π ( u, v ) is implicitly maintained by SSSP π ( G, s, (cid:15), β, q ) . • Given any vertex v and a steadiness index j , the data structure returns σ ≤ j ( π ( s, v )) in ( | σ ≤ j ( π ( s, v )) | + 1) · q worst-case time. (We emphasize that π ( s, v ) is independent from j .) Theorem III.0.2. Given an undirected decremental graph G = ( V, E, w, σ ) with n vertices and m initial edges that have weights from { , , . . . , W } and steadiness from { , . . . , σ max } where σ max = o (log / n ) , a fixed source vertex s ∈ V , and any (cid:15) > φ cmg , we can implement SSSP π ( G, s, (cid:15), β, q ) in b O ( m log W ) total update time such that the edge-simpleness parameter is β = b O (1) and query-timeoverhead is q = b O (1) . II.1 Preliminaries on Path-reporting Data Structures Let P = ( u, . . . , v ) and Q = ( v, . . . , w ) be paths that share an endpoint at a vertex v . We let P ◦ Q or sometimes ( P, Q ) denote the concatenation of P and Q . The union σ ≤ j ( P ) ∪ σ ≤ j ( Q ) is alwaysa multi-set union of σ ≤ j ( P ) and σ ≤ j ( Q ).We will use the following simplifying reduction which allows us to assume that out input graphthroughout this part has bounded degree and satisfies other convenient properties. The proof isshown in Appendix A.3.1. Proposition III.1.1. Suppose that there is a data structure SSSP π ( H, s, (cid:15), β, q ) that only worksif H satisfies the following properties: • H always stays connected. • Each update to H is an edge deletion (not an increase in edge weight). • H has maximum degree . • H has edge weights in [1 , n H ] and edges steadiness [0 , σ max + 1] .Suppose SSSP π ( H, s, (cid:15), β, q ) has T SSSP π ( m H , n H , (cid:15) ) total update time where m H and n H are num-bers of initial edges and vertices of H . Then, we can implement SSSP π ( G, s, O ( (cid:15) ) , O ( β log( W n )) , O ( q )) where G is an arbitrary decremental graph with m initial edges that have weights in [1 , W ] andsteadiness in [0 , σ max ] using total update time of ˜ O (cid:0) m/(cid:15) + T SSSP π ( O ( m log W ) , m, (cid:15) ) (cid:1) · log( W ) . 
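The following is a minimal sketch of the steadiness-threshold semantics used throughout this part. The Edge class and the explicit filter below are illustrative only: the data structures in the paper answer σ_{≤j} queries without ever materializing the path π(s, v), in time roughly proportional to the output size (plus the overhead factor q of Definition III.0.1).

```python
# Illustration of sigma_{<=j}(E) = {e in E : sigma(e) <= j} and of the
# threshold-subpath query contract: for a fixed path pi(s, v) (independent of
# the queried index j), the query (v, j) returns sigma_{<=j}(pi(s, v)).
from dataclasses import dataclass
from typing import List

@dataclass(frozen=True)
class Edge:
    u: int
    v: int
    weight: int
    steadiness: int        # sigma(e), an integer in [sigma_min, sigma_max]

def sigma_le(edges: List[Edge], j: int) -> List[Edge]:
    """sigma_{<=j}(E): the (multi-)set of edges of steadiness at most j."""
    return [e for e in edges if e.steadiness <= j]

# Toy path with three edges; only the edges of steadiness <= 1 are reported.
path = [Edge(0, 1, 3, steadiness=2), Edge(1, 2, 1, steadiness=0),
        Edge(2, 3, 5, steadiness=1)]
assert [e.steadiness for e in sigma_le(path, 1)] == [0, 1]
```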
III.2 Main Path-Reporting Components Below, we describe our main path-reporting data structures. They are all natural extensions ofthe data structures listed in Section II.2 so that they can support threshold-subpath queries, whichreturn edges with small steadiness in a path. Definition III.2.1. A path-reporting approximate ball data structure ApxBall π ( G, S, d, (cid:15), β ) isan approximate ball data structure ApxBall ( G, S, d, (cid:15) ) with the following additional guarantee: • For each vertex v ∈ ball G ( S, d ) , v is associated with a β -simple S - v path π ( S, v ) in G of lengthat most (1 + (cid:15) )dist G ( S, v ) . We say that π ( S, v ) is implicitly maintained by ApxBall π ( G, S, d, (cid:15), κ ) . • Given any vertex v ∈ ball G ( S, d ) and a steadiness index j , the data structure returns themulti-set σ ≤ j ( π ( S, v )) . (We emphasize that π ( S, v ) is independent from j .) Similar to Definition II.2.1, we slightly abuse the notation and denote ApxBall π ( G, S, d, (cid:15), β ) = { v | e d ( v ) ≤ (1 + (cid:15) ) d } as the set of all vertices v whose distance estimate e d ( v ) is at most (1 + (cid:15) ) d . Definition III.2.2. A path-reporting robust core data structure RobustCore π ( G, K init , d, β ) with a stretch parameter str is a robust core data structure RobustCore ( G, K init , d ) with thefollowing additional guarantee: • For each pair of vertices ( u, v ) ∈ K × K where K ⊆ K init is the maintained core set, thepair ( u, v ) is associated with a β -simple u - v path π ( u, v ) of length at most str · d . We say that π ( u, v ) is implicitly maintained by RobustCore π ( G, K init , d ) . • Given a pair ( u, v ) ∈ K × K and a steadiness index j , the algorithm returns σ ≤ j ( π ( u, v )) .(We emphasize that π ( u, v ) is independent from j .) We note that path-reporting RobustCore π is indeed stronger than distance-only RobustCore .56 emark III.2.3 . While the distance-only RobustCore with stretch str only guarantees that diam G ( K ) ≤ str · d , the path-reporting RobustCore π is stronger as it implicitly maintains paths π ( u, v ) for all u, v ∈ K of length at most str · d which certifies that diam G ( K ) ≤ str · d . Definition III.2.4. A path-reporting ( d, k, (cid:15), str , ∆ , β )-covering C of a decremental graph G is a ( d, k, (cid:15), str , ∆) -covering C such that the distance-only ApxBall and RobustCore are replaced bythe path-reporting ApxBall π and RobustCore π , respectively. More precisely, for each level- ‘ core C ∈ C , we have C = RobustCore π ( G, C init , d ‘ , β ) with stretch at most str , cover ( C ) = ApxBall π ( G, C, d ‘ , (cid:15), β ) and shell ( C ) = ApxBall ( G, C, str4 (cid:15) d ‘ , (cid:15), β ) . Next, we define a path-reporting version of compressed graphs. Recall from Definition II.2.10that a compressed graph is formally a hypergraph. Definition III.2.5. A path-reporting ( d, γ, ∆ , β )-compressed graph H of a decremental graph G is a ( d, γ, ∆) -compressed graph H and there is the following data structure: • For each adjacent pair of vertices ( u, v ) in H , the pair ( u, v ) is associated with a β -simple u - v path π ( u, v ) of length at most γ · d . We say that π ( u, v ) is implicitly maintained by H . • Given an adjacent pair ( u, v ) and a steadiness index j , the algorithm returns σ ≤ j ( π ( u, v )) .(We emphasize that π ( u, v ) is independent from j .) All four path-reporting components above are defined in such a way that they are as strong astheir distance-only counterparts. 
This will be very important because it allows us to replace alldistance-only components in algorithms by their path-reporting counterparts without violating anyguarantees. The lemma below makes this point precise. Lemma III.2.6. We have the following:1. ApxBall π ( G, S, d, (cid:15), β ) satisfies all requirement of ApxBall ( G, S, d, (cid:15) ) .2. RobustCore π ( G, K init , d, β ) with stretch str satisfies all requirement of RobustCore ( G, K init , d ) with stretch str .3. A path-reporting ( d, k, (cid:15), str , ∆ , β ) -covering C of G is a (distance-only) ( d, k, (cid:15), str , ∆) -coveringof G .4. A path-reporting ( d, γ, ∆ , β )-compressed graph H of G is a ( d, γ, ∆) -compressed graph of G .Proof. For (1), this follows from definitions. For (2), this follows from definitions and Remark III.2.3.For (3), this follows by (1) and (2) because as path-reporting coverings are the same as distance-onlyones except that ApxBall and RobustCore are replaced by ApxBall π and RobustCore π ,respectively. For (4), this follows from definitions.The query time of threshold-subpath queries are measured as follows: Definition III.2.7 (Query-time Overhead) . We say a path-reporting data structure has ( q φ , q path )query-time overhead if, given any query with steadiness index j and P is the path that should bereturned if j = ∞ , then σ ≤ j ( P ) is returned in at most q φ time if σ ≤ j ( P ) = ∅ and in at most | σ ≤ j ( P ) | · q path time otherwise. For path-reporting covering C , we say that C has ( q φ , q path ) query-time overhead, if all ApxBall π and RobustCore π that are invoked for maintaining C havequery-time overhead at most ( q φ , q path ) . By replacing distance-only ApxBall and RobustCore in Theorem II.4.1 for maintaining adistance-only covering in with path-reporting ApxBall π and RobustCore π which are strongerby Lemma III.2.6, we immediately obtain the following theorem analogous to Theorem II.4.1.57 heorem III.2.8. Let G be an n -vertex bounded-degree decremental graph. Given parameters ( d, k, (cid:15), str) where (cid:15) ≤ . , we assume the following: • for all d ≤ d ≤ d ( str (cid:15) ) k − , there is RobustCore π ( G, K init , d , β ) with scattering parameterat least δ scatter and stretch at most str that has total update time T RobustCore π ( G, K init , d , β ) ,and • for all d ≤ d ≤ d ( str (cid:15) ) k , there is ApxBall π ( G, S, d , (cid:15), β ) with total update time of T ApxBall π ( G, S, d , (cid:15), β ) .Then, we can maintain a path-reporting ( d, k, (cid:15), str , ∆ , β ) -covering of G with ∆ = O ( kn /k /δ scatter ) in total update time O ( kn /k log( n ) /δ scatter + X C ∈C ALL T RobustCore π ( G ( t C ) , C ( t C ) , d ‘ core ( C ) , β )+ T ApxBall π ( G ( t C ) , C ( t C ) , str4 (cid:15) d ‘ core ( C ) , (cid:15), β )) where C ALL contains all cores that have ever been initialized and, for each C ∈ C ALL , t C is the time C is initialized. We guarantee that P C ∈C ALL | ball G ( tC ) ( C ( t C ) , str4 (cid:15) d ‘ core ( C ) ) | ≤ O ( kn /k /δ scatter ) . The following is analogous to Proposition II.2.11. Proposition III.2.9 (A Covering-Compressed Graph is a Compressed Graph (Path-reportingversion)) . Let C be a path-reporting ( d, k, (cid:15), str , ∆ , β ) -covering of a graph G . Let H C be the covering-compressed graph of C and H be the hypergraph view of H C . Then H is a path-reporting ( d, γ, ∆ , β ) -compressed graph of G where γ = (str /(cid:15) ) k . 
If the query-time overhead of C is ( q φ , q path ) , then H has query-time overhead of (3 q φ , q path + 2 q φ ) .Proof. Proposition II.2.11 already implies H is a (distance-only) ( d, γ, ∆)-compressed graph. Itremains to define a 3 β -simple u - v path π ( u, v ) of length at most γ · d for every vertices u and v adjacent in H , and then show a data structure that, given ( u, v ) and a steadiness index j , returns σ ≤ j ( π ( u, v )).Consider vertices u and v adjacent in H via a hyperedge e . There is a level- ‘ core C ∈ C ,for some ‘ ∈ [0 , k − e such that u, v ∈ shell ( C ). Recall that d ‘ = d · ( str (cid:15) ) ‘ from Definition II.2.6. We have ApxBall π ( G, C, str4 (cid:15) d ‘ , (cid:15), β ) = shell ( C ) implicitlymaintains β -simple paths π ( u, C ) = ( u, . . . , u C ) and π ( C, v ) = ( v C , . . . , v ) of length at most (1 + (cid:15) ) str4 (cid:15) d ‘ where u C , v C ∈ C . Also, RobustCore π ( G, C init , d ‘ , β ) = C implicitly maintains a β -simplepath π ( u C , v C ) of length at most str · d ‘ . We define π ( u, v ) = π ( u, C ) ◦ π ( u C , v C ) ◦ π ( C, v ). Thispath is clearly 3 β -simple and has length at most 2(1 + (cid:15) ) str4 (cid:15) d ‘ + str · d ‘ ≤ str (cid:15) d k − = dγ , as desired.By Remark II.2.9, given the covering C , we will assume that the correspondences betweeneach hyperedge e ∈ E ( H ) and the corresponding core C ∈ C is always maintained for us. Given( u, v ) and a steadiness index j , we can straight-forwardly query ApxBall π ( G, C, str4 (cid:15) d ‘ , (cid:15), β ) and RobustCore π ( G, C init , d ‘ , β ) to obtain σ ≤ j ( π ( u, v )) = σ ≤ j ( π ( u, C )) ∪ σ ≤ j ( π ( u C , v C )) ∪ σ ≤ j ( π ( C, v )) . If σ ≤ j ( π ( u, v )) = ∅ , this takes at most 3 q φ time. Otherwise, this takes at most | σ ≤ j ( π ( u, v )) |· q path +2 q φ ≤ | σ ≤ j ( π ( u, v )) | · ( q path + 2 q φ ) time.Next, we note that Proposition II.2.12 generalizes to its path-reporting version immediately. Proposition III.2.10 (A Trivial Path-reporting Compressed Graph) . Let G be a bounded-degreegraph G with integer edge weights. Let G unit be obtained G by removing all edges with weight greaterthan one. Then, G unit is a path-reporting ( d = 1 , γ = 1 , ∆ = O (1) , β = 1) -compressed graph of G with query-time overhead of ( q φ = 1 , q path = 1) . Proposition III.2.11 (Path-reporting ES-tree) . We can implement ApxBall π ( G, S, d, (cid:15) = 0 , β =1) in O ( | ball G ( S, d ) | · d log n ) total update time with ( O (log n ) , O ( d log n )) query-time overhead.Proof. This can be done by explicitly maintaining an ES-tree T rooted at S to up distance d in O ( | ball G ( S, d ) | · d ) total update time. For every v ∈ ball G ( S, d ), we define π ( S, v ) as the simple S - v path in T which has length exactly dist G ( S, v ). We also implement a link-cut tree [ST83] ontop of the ES-tree T so that, given any v ∈ ball G ( S, d ), we can obtain the minimum steadiness j v of edges in π ( S, v ) in time O (log n ). Maintaining the link-cut tree only increases the total updatetime by a factor O (log n ). Given v ∈ ball G ( S, d ) and a steadiness index j , we check j < j min . If j < j min , then we know σ ≤ j ( π ( S, v )) = ∅ and we return ∅ in O (log n ) time. If j ≥ j min , then weknow σ ≤ j ( π ( S, v )) = ∅ and so we just explicitly list all edges in π ( S, v ) which contains at most d edges and return σ ≤ j ( π ( S, v )) in O ( d + log n ) = O ( | σ ≤ j ( π ( S, v )) | d log n ) time. 
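The following is a simplified, static sketch of the query side of the path-reporting ES-tree of Proposition III.2.11. It assumes the shortest-path tree is already given via parent pointers and caches, per vertex, the minimum steadiness on its root path; the paper instead maintains the ES-tree decrementally and uses a link-cut tree to obtain that minimum in O(log n) per query, neither of which is shown here.

```python
# Query-side sketch of a path-reporting shortest-path tree: answer
# threshold-subpath queries sigma_{<=j}(pi(S, v)) on the tree path to v,
# returning quickly when the answer is empty.

class PathReportingTree:
    def __init__(self, parent, edge_steadiness):
        # parent[v] is v's parent (None at the root, i.e. inside S);
        # edge_steadiness[v] is sigma of the tree edge (parent[v], v).
        self.parent = parent
        self.sigma = edge_steadiness
        self.min_on_path = {v: self._min_root_path(v) for v in parent}

    def _min_root_path(self, v):
        best = float("inf")
        while self.parent[v] is not None:
            best = min(best, self.sigma[v])
            v = self.parent[v]
        return best

    def query(self, v, j):
        """Return sigma_{<=j}(pi(S, v)); constant-size work if it is empty."""
        if j < self.min_on_path[v]:
            return []                      # no edge on pi(S, v) has steadiness <= j
        out = []
        while self.parent[v] is not None:  # otherwise walk the whole tree path
            if self.sigma[v] <= j:
                out.append((self.parent[v], v))
            v = self.parent[v]
        return out

# Example: S = {0}, tree edges 0-1 (steadiness 2) and 1-2 (steadiness 0).
t = PathReportingTree(parent={0: None, 1: 0, 2: 1},
                      edge_steadiness={0: None, 1: 2, 2: 0})
assert t.query(2, 1) == [(1, 2)]
assert t.query(1, 1) == []
```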
III.3 Implementing Path-reporting Approximate Balls In this section, we show how to implement path-reporting approximate ball data structures ApxBall π for distance scale D . We will assume that a path-reporting covering C for distance scale d (cid:28) D is given for us. Then, the algorithm exploits three more components as a subroutine: (1) path-reporting ApxBall π for smaller distance scale ≈ d , similar to how it is done for the distance-onlyversion, (2) path-reporting ApxBall π for distance scale D but the smaller graph G peel , and (3)distance-only ApxBall for distance scale D on G peel but with good accuracy guarantee. This iswhy the total update time of the three components appears in Equation (III.1) below. Theorem III.3.1 (Path-reporting Approximate Ball) . Let G be an n -vertex bounded-degree decre-mental graph with steadiness between [ σ min , σ max ] . Let G peel = σ >σ min ( G ) be obtained from G byremoving edges with steadiness σ min . Let (cid:15) ≤ / and (cid:15) peel ≤ . Suppose that a path-reporting ( d, k, (cid:15), str , ∆ , β ) -covering C of G is explicitly maintained for us. Then, we can implement a path-reporting approximate ball data structure ApxBall π ( G, S, D, (cid:15) + (cid:15) peel , β ∆) using total updatetime ˜ O ( | ball G ( S, D ) | ∆ D(cid:15)d ) + T ApxBall π ( G, S, 2( str (cid:15) ) k d, (cid:15), β )+ (III.1) T ApxBall π ( G peel , S, D, (cid:15) peel , β ∆) + T ApxBall ( G peel , S, D, (cid:15) ) . Let ( q φ , q path ) bound the query-time overhead of both ApxBall π ( G, S, str (cid:15) ) k d, (cid:15), β ) and ( d, k, (cid:15), str , ∆ , β ) -covering C . Let ( q peel φ , q peelpath ) bound the query-time overhead of ApxBall π ( G peel , S, D, (cid:15) peel , β ∆) ,Then, the data structure has query-time overhead of ( q peel φ + O (1) , max { q peelpath + O (1) , q path + O ( D(cid:15)d ) · q φ } ) . The rest of this section is for proving Theorem III.3.1. In Section III.3.1, we describe datastructures for maintaining the distance estimate e d ( v ) for all v ∈ ball G ( S, D ) and for additionallysupporting threshold-subpath queries, and then we analyze the total update time. Based on themaintained data structure, in Section III.3.2, we define the implicitly maintained paths π ( S, v ) forall v ∈ ball G ( S, D ) as required by Definition III.2.1 of ApxBall π . Finally, we show an algorithmthat answers threshold-subpath queries in Section III.3.3. Note that using the link-cut tree, we can in fact list edges σ ≤ j ( π ( S, v )) in | σ ≤ j ( π ( S, v )) | · O (log n ) time so thatthe query-time overhead is ( O (log n ) , O (log n )), but we do not need to optimize this factor. II.3.1 Data Structures Data structures on G . We maintain the distance estimates e d ( v ) and the MES-tree e T using thesame approach as in the distance-only algorithm from Section II.5. The only difference is that wereplace the distance-only components with the path-reporting ones.More specifically, given the path-reporting ( d, k, (cid:15), str , ∆ , β )-covering C , let H C be the covering-compressed graph w.r.t. C (recall Definition II.2.8). Then, we maintain the emulator e H based on H C as described in Definition II.5.2 but we replace the distance-only ApxBall ( G, S, str (cid:15) ) k d, (cid:15) )with the path-reporting ApxBall π ( G, S, str (cid:15) ) k d, (cid:15), β ) in Item 3 of Definition II.5.2. 
For each v ∈ ApxBall π ( G, S, str (cid:15) ) k d, (cid:15), β ), ApxBall π ( G, S, str (cid:15) ) k d, (cid:15), β ) maintains the distance estimate e d near ( v ) and implicitly maintains an (1 + (cid:15) ) approximate S - v shortest path π near ( S, v ).Now, given the emulator e H with a dummy source s , we use exactly the same algorithm MES ( e H, s, D ) from Algorithm 5 to maintain the MES-tree e T on e H , and let e d MES ( v ) denote thedistance estimate of v maintained by MES ( e H, s, D ). Recall that e T is defined as follows: for ev-ery vertex u ∈ V ( e H ) \ { s } , u ’s parent in e T is arg min v { e d MES ( v ) + e w ( v, u ) } . Then, we maintain e d ( v ) = min { e d near ( v ) , e d MES ( v ) } for each v ∈ V ( e H ). Note that, we used slightly different notations inSection II.5; we said that the algorithm maintains min { e d near ( v ) , e d ( v ) } for each v , but in Section II.5 e d ( v ) was used to denote e d MES ( v ). So the outputs from both sections are equivalent objects.We observe that our slight modification does not change the accuracy guarantee of the distanceestimates. Lemma III.3.2. For v ∈ ball G ( S, D ) , dist G ( S, v ) ≤ e d ( v ) ≤ (1 + 50 (cid:15) )dist G ( S, v ) .Proof. The only changes in the algorithm from Section II.5 are to replace the distance-only ( d, k, (cid:15), str , ∆)-covering C with the path-reporting ( d, k, (cid:15), str , ∆ , β )-covering C , and to replace the distance-only ApxBall ( G, S, str (cid:15) ) k d, (cid:15) ) with the path-reporting ApxBall π ( G, S, str (cid:15) ) k d, (cid:15), β ). As shown inLemma III.2.6, these path reporting data structures are stronger than their distance-only counter-parts. Therefore, all the arguments in Section II.5 for proving the accuracy of e d ( v ) still hold. Data structures on G peel . Next, let G peel = σ >σ min ( G ) be obtained from G by removing edgeswith steadiness σ min . We recursively maintain the distance-only ApxBall ( G peel , S, D, (cid:15) ) and let e d peel ( v ) denote its distance estimate for the shortest S - v path in G peel . We also recursively maintainthe path-reporting ApxBall π ( G peel , S, D, (cid:15) peel , β ∆) and let π peel ( S, v ) denote its implicitly main-tained approximate S - v shortest path in G peel . We emphasize that the approximation guaranteeon e d peel ( v ) depends on (cid:15) and not on (cid:15) peel .This completes the description of the all data structures for Theorem III.3.1. We bound thetotal update time as specified in Theorem III.3.1 below. Lemma III.3.3. The total update time is ˜ O ( | ball G ( S, D ) | ∆ D(cid:15)d ) + T ApxBall π ( G, S, 2( str (cid:15) ) k d, (cid:15), β )+ T ApxBall π ( G peel , S, D, (cid:15) peel , β ∆) + T ApxBall ( G peel , S, D, (cid:15) ) . Proof. As the covering C is explicitly maintained for us, we do not count its update time. Using theexactly same analysis as in the last paragraph of Section II.5.4, the total update time for maintaining { e d ( v ) } v is ˜ O ( | ball G ( S, D ) | ∆ D(cid:15)d ) + T ApxBall π ( G, S, str (cid:15) ) k d, (cid:15), β ). Note that we replace T ApxBall ( · )with T ApxBall π ( · ). Lastly, the data structures on G peel take T ApxBall π ( G peel , S, D, (cid:15) peel , β ∆) + T ApxBall ( G peel , S, D, (cid:15) ) time by definition. 60 II.3.2 Defining The Implicitly Maintained Paths In this section, for each v ∈ ball G ( S, D ), we define an approximate S - v shortest path π ( S, v ) usingAlgorithm 6. 
More precisely, we let π ( S, v ) be defined as the path that would be returned if we runAlgorithm 6 at the current stage (the algorithm is deterministic, so the query always returns thesame path on a fixed input). We explicitly emphasize that these paths π ( S, v ) are not maintainedexplicitly, but they are unique and fixed through the stage and they are completely independentfrom the steadiness index j in the queries. Algorithm 6: Computing π ( S, v ) for each v ∈ ball G ( S, D ). if e d peel ( v ) ≤ (1 + 50 (cid:15) ) · e d ( v ) then return π peel ( S, v ) implicitly maintained by ApxBall π ( G peel , S, D, (cid:15) peel , β ∆). if ( s, v ) ∈ E ( e H ) then return π near ( S, v ) implicitly maintained by ApxBall π ( G, S, str (cid:15) ) k d, (cid:15), β ). Let e P v = ( s = u , u . . . , u z = v ) be the unique s - v path in the MES tree e T . foreach e ∈ e P v do if e = ( s, u ) then π e ← π near ( S, u ) implicitly maintained by ApxBall π ( G, S, str (cid:15) ) k d, (cid:15), β ). if e ∈ E ( G ) then π e ← { e } . if e ∈ E ( H C ) where e = ( u, u ) , u corresponds to a core C and u is a regular vertex then π e ← π ( C, u ) implicitly maintained by ApxBall π ( G, C, str4 (cid:15) d ‘ core ( C ) , (cid:15), β ) = shell ( C ) in the covering C . foreach u i ∈ e P v where u i corresponds to a core C do Let u i , u i ∈ C be such that π ( u i − ,u i ) = ( u i − , . . . , u i ) and π ( u i ,u i +1 ) = ( u i , . . . , u i +1 ). π u i ← π ( u i , u i ) implicitly maintained by RobustCore π ( G, C init , d ‘ core ( C ) , β ) = C inthe covering C . Order the paths from Line 6 as π ( u ,u ) , π ( u ,u ) , . . . , π ( u z − ,u z ) and then, for each path π u i from Line 13, insert π u i between π ( u i − ,u i ) and π ( u i ,u i +1 ) . return π ( S, v ) as the concatenation of all these ordered paths.Below, we show that each path π ( S, v ) defined by Algorithm 6 satisfies the requirement fromDefinition III.2.1: it is an approximate S - v shortest path in G (Lemma III.3.4) and it guaranteesbounded simpleness (Lemma III.3.5). Lemma III.3.4. For every v ∈ ball G ( S, D ) , we have the following:1. If e d peel ( v ) ≤ (1 + 50 (cid:15) ) · e d ( v ) , then π ( S, v ) is a (1 + 300 (cid:15) + (cid:15) peel ) -approximate S - v shortestpath in G .2. If e d peel ( v ) > (1 + 50 (cid:15) ) · e d ( v ) , then π ( S, v ) is a (1 + 50 (cid:15) ) -approximate S - v shortest path in G Proof. π ( S, v ) is indeed an S - v path in G because the subpaths of π ( S, v ) are ordered and concate-nated at Line 15 such that their endpoints meet, and one endpoint of π ( S, v ) is v and another is in S . Below, we only need to bound the total weight w ( π ( S, v )) of the path π ( S, v ).61f e d peel ( v ) ≤ (1 + 50 (cid:15) ) e d ( v ), then π ( S, v ) ← π peel ( S, v ) is assigned at Line 2. Therefore, we have w ( π ( S, v )) ≤ (1 + (cid:15) peel )dist G peel ( S, v ) by ApxBall π ( G peel , S, D, (cid:15) peel , β ∆) ≤ (1 + (cid:15) peel ) e d peel ( v ) by ApxBall ( G peel , S, D, (cid:15) ) ≤ (1 + (cid:15) peel )(1 + 50 (cid:15) ) e d ( v ) by Line 1 ≤ (1 + (cid:15) peel )(1 + 50 (cid:15) ) dist G ( S, v ) by Lemma III.3.2 ≤ (1 + 300 (cid:15) + (cid:15) peel )dist G ( S, v ) . Next, if e d peel ( v ) > (1 + 50 (cid:15) ) e d ( v ), then we have two cases. Suppose π ( S, v ) ← π near ( S, v ) isassigned at Line 3. Then, w ( π ( S, v )) = w ( π near ( S, v )) ≤ (1 + (cid:15) )dist G ( S, v ) by the guarantee of ApxBall π ( G, S, str (cid:15) ) k d, (cid:15), β ). Otherwise, π ( S, v ) must be assigned at Line 15. 
Recall that e w ( e )denotes the weight of e in the emulator e H . It suffices to show that w ( π ( S, v )) ≤ (1 + (cid:15) ) · P e ∈ e P v e w ( e )and P e ∈ e P v e w ( e ) ≤ (1 + 50 (cid:15) )dist G ( S, v ) because they imply that w ( π ( S, v )) ≤ (1 + 50 (cid:15) ) dist G ( S, v ).Below, we prove each inequality one by one.To prove w ( π ( S, v )) ≤ (1+ (cid:15) ) · P e ∈ e P v e w ( e ), observe that π ( S, v ) is a concatenation of subpaths ofthe following three types: (1) π e where e ∈ E ( G ), (2) π ( u i − ,u i ) ◦ π u i ◦ π ( u i ,u i +1 ) where u i correspondsto a core C and ( u i − , u i ) , ( u i , u i +1 ) ∈ E ( H C ), and (3) π ( s,u ) where s is the dummy source s . Fora type-1 subpath, we have that w ( π e ) = w ( e ) ≤ d w ( e ) e (cid:15)d = e w ( e ) by Definition II.5.2 of e H . For atype-2 subpath, we have w ( π ( u i − ,u i ) ◦ π u i ◦ π ( u i ,u i +1 ) ) ≤ (1 + (cid:15) )dist G ( u i − , C ) + str · d ‘ core ( C ) + (1 + (cid:15) )dist G ( C, u i +1 ) ≤ (1 + (cid:15) ) · ( e w ( u i − , u i ) + e w ( u i , u i +1 ))where the first inequality is by the guarantee of ApxBall π and RobustCore π with stretch strthat maintain shell ( C ) and C , respectively, and the second inequality follows from weight assign-ment of edges in the covering-compressed graph H C , see Definition II.2.8. For a type-3 subpath, ApxBall π ( G, S, str (cid:15) ) k d, (cid:15), β ) guarantees that w ( π ( s,u ) ) ≤ (1 + (cid:15) )dist G ( S, u ) ≤ (1 + (cid:15) ) d d near ( S, u ) e (cid:15)d = (1 + (cid:15) ) · e w ( s, u )where the equality is by Definition II.5.2 of e H . Observe that each term in P e ∈ e P v e w ( e ) is chargedonly once by each subpath of π ( S, v ). Therefore, we indeed have w ( π ( S, v )) ≤ (1 + (cid:15) ) · P e ∈ e P v e w ( e ).To prove P e ∈ e P v e w ( e ) ≤ (1+50 (cid:15) )dist G ( S, v ), observe that P e ∈ e P v e w ( e ) ≤ e d MES ( v ) by Lemma II.5.10(1).On the other hand, Lemma II.5.11(1) says that e d MES ( v ) ≤ max {d (1 + (cid:15) )dist G ( S, v ) e (cid:15)d , (1+50 (cid:15) )dist G ( S, v ) } =(1 + 50 (cid:15) )dist G ( S, v ). The equality is because dist G ( S, v ) ≥ str (cid:15) ) k d ≥ d , which holds because( s, v ) / ∈ E ( e H ), i.e. v / ∈ ApxBall π ( G, S, str (cid:15) ) k d, (cid:15), β ). Lemma III.3.5. For every v ∈ ball G ( S, d ) , the path π ( S, v ) is (8 β ∆) -simple.Proof. First, note that if we set π ( S, v ) = π peel ( S, v ) at Line 2 or π ( S, v ) = π near ( S, v ) at Line 3, then π ( S, v ) is (8 β ∆)-simple by the definition of ApxBall π ( G peel , S, D, (cid:15) peel , β ∆) and ApxBall π ( G, S, str (cid:15) ) k d, (cid:15), β ).Now, suppose that π ( S, v ) is assigned at Line 15. We claim two things. First, each subpath thatwas concatenated into π ( S, v ) is a β -simple path. Second, every vertex u can participate in at most8∆ such subpaths of π ( S, v ). This would imply that π ( S, v ) is (8 β ∆)-simple as desired.To see the first claim, we consider the four cases of the subpath of π ( S, v ): First, from Line 7,the subpath π ( s,u i ) is β -simple by the definition of ApxBall π ( G, S, str (cid:15) ) k d, (cid:15), β ). Second, fromLine 9, the subpath π e = { e } where e ∈ e P v ∩ E ( G ) is clearly 1-simple. Third and forth, from62ine 11 and Line 14, the subpaths π e and π u i are β -simple because of the simpleness parameter ofthe covering C To see the second claim, consider any vertex u ∈ V ( G ). Clearly, u can participate in at most 1subpath from Line 7 as π ( s,u i ) is the only path generated from this step. 
Next, u can participatein at most 2 subpaths from Line 9 because e P v is a simple path in e H and thus u can be in at most2 edges from e P v ∩ E ( G ). The last case counts the subpaths from both Line 11 and Line 14. Forany u i ∈ V ( e H ) corresponding to a core C , if u appears in any path from π ( u i − ,u i ) , π u i , π ( u i ,u i +1 ) ,then we claim u ∈ shell ( C ). But u can be in at most ∆ outer-shells by Definition II.2.6. Hence, u can appear in at most 3∆ subpaths from Line 11 and Line 14. In total, u appears in at most3∆ + 3 ≤ 8∆ subpaths of π ( S, v ). The claim below finishes the proof: Claim III.3.6. If u appears in π ( u i − ,u i ) , π u i or π ( u i ,u i +1 ) , then u ∈ shell ( C ) .Proof. According to Definition II.2.6 and Definition III.2.4, the paths π ( u i − ,u i ) and π ( u i ,u i +1 ) havelength at most (1 + (cid:15) ) · str4 (cid:15) d ‘ core ( C ) , and the path π u i has length at most str · d ‘ core ( C ) . As each ofthese paths has an endpoint in C , so u ∈ ball G ( C, (1 + (cid:15) ) · str4 (cid:15) d ‘ core ( C ) ) ⊆ shell ( C ). To conclude, from Lemma III.3.4 and Lemma III.3.5, for each v ∈ ball G ( S, D ), π ( S, v ) isindeed a (8 β ∆)-simple (1 + 300 (cid:15) + (cid:15) peel )-approximate S - v shortest path in G as required by ApxBall ( G, S, D, (cid:15) + (cid:15) peel , β ∆). III.3.3 Threshold-Subpath Queries In this section, we describe in Algorithm 7 below how to process the threshold-subpath query that,given a vertex v ∈ ball G ( S, D ) and a steadiness index j , returns σ ≤ j ( π ( S, v )) consisting of all edgesof π ( S, v ) with steadiness at most j .We first observe that Algorithm 7 returns the correct answer. This follows straightforwardlybecause all the steps of Algorithm 7 are analogous to the ones in Algorithm 6 except that we justreturn ∅ if we first find that j < σ min . Proposition III.3.7. Given v ∈ ball G ( S, D ) and a steadiness index j , Algorithm 7 returns σ ≤ j ( π ( S, v )) where π ( S, v ) is defined in Algorithm 6.Proof. There are four steps that Algorithm 7 may return. At Line 1, we have σ ≤ j ( π ( S, v )) = ∅ as j < σ min . At Line 3, we have σ ≤ j ( π peel ( S, v )) = σ ≤ j ( π ( S, v )) by Line 2 of Algorithm 6. At Line 5,we have σ ≤ j ( π near ( S, v )) = σ ≤ j ( π ( S, v )) by Line 3 of Algorithm 6. Finally, at Line 16, observethat ans ( v,j ) is simply a multi-set union of all edges of steadiness at most j from all subpaths from π ( S, v ) defined in Algorithm 6. So ans ( v,j ) = σ ≤ j ( π ( S, v )) as well.The following simple observation will help us bound the query time. Proposition III.3.8. If e d peel ( v ) > (1 + 50 (cid:15) ) e d ( v ) and j ≥ σ min , then σ ≤ j ( π ( S, v )) = ∅ . This inclusion is actually the only reason we introduce the notion of outer-shell. If we could argue that u ∈ ball G ( C, str4 (cid:15) d ‘ core ( C ) ), then we would have concluded u ∈ shell ( C ). We do not need shell ( C ) else where. lgorithm 7: Returning σ ≤ j ( π ( S, v )), given v ∈ ball G ( S, D ) and a steadiness index j if j < σ min then return ∅ . ans ( v,j ) ← ∅ . if e d peel ( v ) ≤ (1 + 50 (cid:15) ) · e d ( v ) then return σ ≤ j ( π peel ( S, v )) by querying ApxBall π ( G peel , S, D, (cid:15) peel , β ∆). if ( s, v ) ∈ E ( e H ) then return σ ≤ j ( π near ( S, v )) by querying ApxBall π ( G, S, str (cid:15) ) k d, (cid:15), β ). Let e P v = ( s = u , u . . . , u z = v ) be the unique s - v path in the MES tree e T . 
foreach e ∈ e P v do if e = ( s, u ) then ans ( v,j ) ← ans ( v,j ) ∪ σ ≤ j ( π near ( S, u )) by querying ApxBall π ( G, S, str (cid:15) ) k d, (cid:15), β ). if e ∈ E ( G ) then ans ( v,j ) ← ans ( v,j ) ∪ σ ≤ j ( { e } ). if e ∈ E ( H C ) where e = ( u, u ) and u corresponds to a core C and u is a regular vertex then ans ( v,j ) ← ans ( v,j ) ∪ σ ≤ j ( π ( C, u )) by querying ApxBall π ( G, C, str4 (cid:15) d ‘ core ( C ) , (cid:15), β ). foreach u i ∈ e P v where u i corresponds to a core C do Let u i , u i ∈ C be such that π ( u i − ,u i ) = ( u i − , . . . , u i ) and π ( u i ,u i +1 ) = ( u i , . . . , u i +1 ). ans ( v,j ) ← ans ( v,j ) ∪ σ ≤ j ( π ( u i , u i )) by querying RobustCore π ( G, C init , d ‘ core ( C ) , β ). return ans ( v,j ) Proof. First, observe that dist G peel ( S, v ) > (1 + 50 (cid:15) ) · dist G ( S, v ) becausedist G peel ( S, v ) ≥ (cid:15) ) · e d peel ( v ) by ApxBall ( G peel , S, D, (cid:15) ) > (1 + 50 (cid:15) ) · e d ( v ) by assumption ≥ (1 + 50 (cid:15) ) · dist G ( S, v ) by Lemma III.3.2.This implies that every (1 + 50 (cid:15) ) -approximate S - v shortest path in G must contains some edgewith steadiness σ min . By Lemma III.3.4(2), π ( S, v ) is such a (1 + 50 (cid:15) ) -approximate shortest path.So σ ≤ j ( π ( S, v )) = ∅ as j ≥ σ min .Finally, we bound the query time of the algorithm. Recall that ( q φ , q path ) bounds the query-timeoverhead of both ApxBall π ( G, S, str (cid:15) ) k d, (cid:15), β ) and ( d, k, (cid:15), str , ∆ , β )-covering C , and ( q peel φ , q peelpath )bounds the query-time overhead of ApxBall π ( G peel , S, D, (cid:15) peel , β ∆). Below, we show that ouralgorithm has ( q peel φ + O (1) , max { q peelpath + O (1) , q path + O ( D(cid:15)d ) · q φ } ) query-time overhead as requiredby Theorem III.3.1. Lemma III.3.9. Given any v ∈ ball G ( S, D ) and j , Algorithm 7 takes q peel φ + O (1) time if σ ≤ j ( π ( S, v )) = ∅ . Otherwise, it takes | σ ≤ j ( π ( S, v )) | · max { q peelpath + O (1) , q path + O ( D(cid:15)d ) · q φ } time.Proof. Suppose that σ ≤ j ( π ( S, v )) = ∅ . Proposition III.3.8 implies that either j < σ min or e d peel ( v ) ≤ (1 + 50 (cid:15) ) e d ( v ). Therefore, Algorithm 7 must return either at Line 1 or Line 3 both of which takesat most q peel φ + O (1) time.Suppose σ ≤ j ( π ( S, v )) = ∅ . If Algorithm 7 returns at Line 3 or Line 5, then the total time is | σ ≤ j ( π ( S, v )) | · max { q peelpath , q path } + O (1). Otherwise, Algorithm 7 returns at Line 16 and so the64lgorithm basically just makes O ( | e P v | ) = O ( D(cid:15)d ) queries to ApxBall π and RobustCore π datastructures maintained inside the covering C , and one query to ApxBall π ( G, S, str (cid:15) ) k d, (cid:15), β ). Thistakes | σ ≤ j ( π ( S, v )) | · q path + O ( D(cid:15)d ) · q φ time. Since | σ ≤ j ( π ( S, v )) | ≥ 1, in any case, the total time isat most | σ ≤ j ( π ( S, v )) | · max { q peelpath + O (1) , q path + O ( D(cid:15)d ) · q φ } . III.4 Implementing Path-reporting Robust Cores In this section, we show how to implement path-reporting robust core data structures RobustCore π for distance scale D . We will assume that a path-reporting compressed-graph H for distance scale d (cid:28) D is given for. Unlike the algorithm for the distance-only RobustCore , here we need to fur-ther assume that H is defined from a path-reporting covering C with small outer-shell participationbound ∆, so that we can bound the simpleness of the maintained paths. Theorem III.4.1 (Path-reporting Robust Core) . 
Let G be an n -vertex bounded-degree decrementalgraph. Suppose that a path-reporting ( d, γ, ∆ , β ) -compressed graph H of G is explicitly maintainedfor us. Moreover, we assume that either H = G unit as defined in Proposition II.2.12 or H isdefined from a path-reporting covering C with the outer-shell participation bound ∆ via Proposi-tion III.2.9. Assuming that D ≥ dγ , we can implement a path-reporting robust core data structure RobustCore π ( G, K init , D, h apsp ∆ β ) with scattering parameter δ scatter = ˜Ω( φ cmg ) and stretch str = ˜ O ( γh apsp /φ cmg ) and total update time of ˜ O (cid:16) T ApxBall π ( G, K init , str · D, . , β )∆ ( D/d ) h apsp /φ cmg (cid:17) where h apsp = exp(Θ(log / m )) = n o (1) is a parameter that will be used later in Section III.5. Let ( q φ , q path ) bound the query-time overhead of both ApxBall π ( G, S, str · D, . , β ) and the ( d, γ, ∆ , β ) -compressed graph H . Then, the data structure has query-time overhead of (4 q φ , q path + ˜ O ( Dd h apsp /φ cmg ) · q φ ) . The rest of this section is for proving Theorem III.4.1. The organization is analogous to thatof Section III.3. In Section III.4.1, we describe data structures for maintaining the core set K and for supporting threshold-subpath queries, and then we analyze the total update time. InSection III.4.2, we define the implicitly maintained paths π ( u, v ) for all u, v ∈ K as required byDefinition III.2.2 of RobustCore π . Finally, we show an algorithm for answering threshold-subpathqueries in Section III.4.3. III.4.1 Data Structures In this section, we describe data structures needed for the RobustCore π data structure. First,we will need the following extension of the expander pruning algorithm Prune from Lemma II.3.6that is augmented with an all-pair-short-paths oracle on the remaining part of the expander. Lemma III.4.2 (Theorem 3.9 of [CS20]) . There is an algorithm Prune π ( W, φ ) that, given anunweighted decremental multi-graph W = ( V, E ) that is initially a φ -expander with m edges where φ ≥ φ cmg , maintains a decremental set X ⊆ V using O ( mh apsp ) total update time such that W [ X ] is a φ/ -expander at any point of time, and vol W ( V \ X ) ≤ i/φ after i updates. Moreover, givena pair of vertices u, v ∈ X at any time, the algorithms returns a simple u - v path in W [ X ] of lengthat most h apsp in O ( h apsp ) time. This lemma is obtained by setting the parameter q = O (log / m ) in Theorem 3.9 of [CS20]. 65o describe the data structure, we simply replace the distance-only components inside the RobustCore data structure with the path-reporting ones as follows:• Replace the distance-only ( d, γ, ∆)-compressed graph from the assumption of Theorem II.3.1by the path-reporting ( d, γ, ∆ , β )-compressed graph.• Replace Prune ( W multi , φ ) from Line 10 of Algorithm 3 by Prune π ( W multi , φ ) from Lemma III.4.2that support all-pair-short-paths queries.• In addition to maintaining ApxBall ( G, X, D, . 1) from Line 11 of Algorithm 3, we alsomaintain ApxBall π ( G, X, str · D, . , β ).Finally, let B π contain all vertices v whose distance estimate maintained by ApxBall π ( G, X, str · D, . , β ) is at most str10 · D . So, ball G ( X, str10 · D ) ⊆ B π ⊆ ball G ( X, . · str10 · D ). We maintain anedge with minimum steadiness among all edges in G [ B π ] with weight at most 32 D log n , denotedby e min . 
If there are many edges with minimum steadiness, we break tie arbitrarily but consistentlythrough time (for example, we can fix an arbitrary order of edges and let e min be the first edgesatisfied the condition). This completes the description of the data structure.With the above small modification, the maintained core set K ⊆ K init still guarantees thescattering property. (We prove the stretch property later in Lemma III.4.7.) Lemma III.4.3. Let δ scatter = ˜Ω( φ cmg ) . At any point of time, | ball G ( v, D ) ∩ K init | ≤ (1 − δ scatter ) ·| K init | for all v ∈ K init \ K .Proof. Lemma III.2.6 implies that we can replace the distance-only components in RobustCore with the stronger path-reporting components because the guarantees of the outputs of these path-reporting components never become weaker. Therefore, structural statements including Lemma II.3.8from Section II.3 still hold.The total update time after modification is slightly slower. Compared to the running timeof Theorem II.3.1, we replace a factor of 1 /φ cmg by a factor of h apsp and replace T ApxBall ( · ) by T ApxBall π ( · ). Lemma III.4.4. The total update time is ˜ O (cid:0) T ApxBall π ( G, K init , str · D, . , β )∆ ( D/d ) h apsp /φ cmg (cid:1) .Proof. Note that we assume the path-reporting ( d, γ, ∆ , β )-compressed graph is maintained explic-itly for us and so we do not count its update time. The proof of this lemma is the same as inthe proof of Lemma II.3.12 except that we replace Prune ( W multi , φ cmg ) whose total update timeis ˜ O ( | E ( W multi ) | /φ cmg ) by Prune π ( W multi , φ cmg ) whose total update time is ˜ O ( | E ( W multi ) | h apsp ).Following exactly the same calculation in Lemma II.3.12, the total update time is˜ O (cid:16) T ApxBall ( G, K init , D log n, . ( D/d ) h apsp /φ cmg (cid:17) basically by replacing a factor of O (1 /φ cmg ) by O ( h apsp ). However, since in addition to maintain-ing ApxBall ( G, X, D, . 1) from Algorithm 3, we also maintain ApxBall π ( G, X, str · D, . , β ).Following the same calculation, the total update time becomes˜ O (cid:16) T ApxBall π ( G, K init , str · D, . , β )∆ ( D/d ) h apsp /φ cmg (cid:17) . Note that e min can be maintained using a heap and the total update time can be charged to thetime spent by ApxBall π ( G, X, str · D, . , β ).From the above, we have proved the scattering property and bounded the total update time ofthe algorithm for Theorem III.4.1. 66 II.4.2 Defining The Implicitly Maintained Paths In this section, for each pair of vertices u, v ∈ K , we define a u - v path π ( u, v ) using Algorithm 8.We emphasize that these paths π ( u, v ) are not maintained explicitly and they are completelyindependent from the steadiness index j in the queries. See Figure III.1 for illustration. Algorithm 8: Computing π ( u, v ) for each pair u, v ∈ K . Let e min = ( a, b ) be the edge with minimum steadiness among all edges in G [ B π ] withweight at most 32 D log n . Set π u , π v , π a , π b as π ( X, u ) , π ( X, v ) , π ( X, a ) , π ( X, b ), respectively, which are implicitlymaintained by ApxBall π ( G, X, str · D, . , β ). Let u , v , a , b ∈ X be such that π u = ( u, . . . , u ) , π v = ( v, . . . , v ) , π a = ( a, . . . , a ) , π b = ( b, . . . , b ). Let π Wua be the u - a path in W obtained by querying Prune π ( W multi , φ cmg ). Let b π ua be the u - a path in b H obtained by concatenating, for all embedded edges e ∈ π Wua ,the corresponding path P e in b H . By Definition II.3.3 of b H , we can write b π ua = ( b p , . . . 
, b p t ) where each b p i is either a heavy-path or b p i = ( z, z ) where z and z areadjacent by a hyperedge in H . for i = 1 up to t do /* (Hyper-edge) */ if b p i = ( z, z ) where z and z are adjacent by a hyperedge in H then p i ← π H ( z, z ) implicitly maintained by H /* (Heavy-path) */ if b p i = ( z, . . . , z ) is a heavy path then p i ← ( z, z ) where ( z, z ) ∈ E ( G ). π ua ← ( p , . . . , p t ). Let π Wbv , b π bv and π bv be the b - v path in W, b H and G , respectively, analogous to π Wua , b π ua , π ua . return π ( u, v ) = ( π u , π ua , π a , { ( a, b ) } , π b , π bv , π v ).Before analysis the properties of π ( u, v ), we first argue that it is indeed well-defined. Proposition III.4.5. For each pair u, v ∈ K , the path π ( u, v ) defined by Algorithm 8 is well-definedand is a u - v path in G .Proof. We have u, v ∈ K ⊆ ApxBall ( G, X, D, . 1) by Line 11 of Algorithm 3. Hence, we alsohave that u, v ∈ ApxBall π ( G, X, str · D, . , β ) and so π u and π v are well-defined. By definition of e min , we have a, b ∈ ApxBall π ( G, X, str · D, . , β ). Hence, π a and π b are well-defined too. Lastly,as u , v , a , b ∈ X , the paths π Wua and π Wbv can be queried from Prune π ( W multi , φ cmg ). Then, b π ua and b π bv are can be defined from π Wua and π Wbv because of the embedding P W of W . By constructionof b H , the paths π ua and π bv are well-defined as well.Since the endpoints of π u , π ua , π a , { ( a, b ) } , π b , π bv , π v are ( u, u ), ( u , a ), ( a , a ), ( a, b ), ( b, b ),( b , v ), ( v , v ), respectively, we have π ( u, v ) is indeed a u - v path. As all subpaths of π ( u, v ) arewell-defined, π ( u, v ) is well-defined too.Next, we introduce notations about more fine-grained structure of the path π ua . (It is symmetricfor π bv .) Consider Line 5 of Algorithm 8 where we have b π ua = ( b p , . . . , b p t ). If b p i is a heavy path,67igure III.1: Illustration of π ( u, v ) returned by Algorithm 8then we say that b p i is of type heavy-path . Otherwise, we say that b p i is of type hyper-edge . Recall that P W is the embedding of W into b H . We can write π Wua = ( e , . . . , e | π Wua | ) and b π ua = ( P e , . . . , P e | πWua | )where each P e j ∈ P W is the path in the embedding corresponding to e j ∈ E ( W ). As P e j is asubpath of b π ua and has endpoints in V ( G ), we can write P e j = ( b p j, , . . . , b p j,t j ) as a subsequence of( b p , . . . , b p t ). Observe that we have b π ua = ( b p , . . . , b p t ) = ( b p , , . . . , b p ,t , b p , , . . . , b p ,t , . . . , b p | π Wua | , , . . . , b p | π Wua | ,t | πWua | ) (III.2) π ua = ( π , . . . π t ) = ( p , , . . . , p ,t , p , , . . . , p ,t , . . . , p | π Wua | , , . . . , p | π Wua | ,t | πWua | ) (III.3)where p j,k is a path in G corresponding to the path b p j,k in b H assigned at Line 11. We emphasizethat the path ( b p j, , . . . , b p j,t j ) is not the same as b p j ; ( b p j, , . . . , b p j,t j ) is just some subsequence of thesequence ( b p , . . . , b p t ). We will usually use subscript i for b p i and subscript ( j, k ) for b p j,k . For eachsubpath b p j,k of P e j , if b p j,k is of type hyper-edge, then we say that the corresponding path p j,k is oftype hyper-edge as well. Otherwise, p j,k is of type heavy-path.We will below argue the correctness of the path π ( u, v ) defined by Algorithm 8. We startby bounding the length of π ( u, v ). We first bound the length of π ua and π bv which is the onlynon-trivial case. 
The moreover part of the statement below will be used in the next subsection. Proposition III.4.6. We can choose the polylogarithmic factor in str = ˜ O ( γh apsp /φ cmg ) so thatthe following holds. The length of π ua is at most w ( π ua ) = ˜ O ( Dγh apsp /φ cmg ) ≤ str10 · D . Moreover, π ua is contained inside G [ball G ( X, str10 · D )] and every edge of π ua has weight at most D log n .Symmetrically, the same holds for π bv .Proof. We show the argument only for π ua because the argument is symmetric for π bv . Recall thatlen( P W ) is the maximum number of edges inside paths in P W . We have len( P W ) = O ( κ ( b V ) | K init | (cid:15) wit ) =˜ O ( Dd /φ cmg ) because (1) Lemma II.3.5 guarantees that len( P W ) = O ( κ ( b V ) | K init | (cid:15) wit ) as K ⊆ K init and (cid:15) wit = φ cmg / log ( n ) was defined in the same lemma, and (2) we have κ ( b V ) ≤ ˜ O ( | K init | Dd ) by68emma II.3.7. To bound w ( π ua ), observe that the path b π ua contains at most | b π ua | ≤ | π Wua | · len( P W ) = ˜ O ( Dd h apsp /φ cmg ) (III.4)edges in b H where | π Wua | ≤ h apsp by Lemma III.4.2. Next, we will show w ( π ua ) = O ( | b π ua | · dγ ) whichin turn implies that w ( π ua ) = ˜ O ( Dγh apsp /φ cmg ). It suffices to show that w ( p i ) ≤ O ( | b p i | · dγ ) foreach 1 ≤ i ≤ t . There are two cases.• If p i is of type hyper-edge, then w ( p i ) = w ( π H ( z, z )) ≤ dγ by the guarantee of the path-reporting ( d, γ, ∆ , β )-compressed graph H . So w ( p i ) ≤ dγ = dγ | b p i | as b p i = ( z, z ). Also, notethat each edge in π H ( z, z ) obviously has weight at most w ( π H ( z, z )) ≤ dγ ≤ D log n bythe assumption in Theorem III.4.1.• If p i is of type heavy-path, then b p i = ( z, . . . , z ) is a heavy path and we have | b p i | = d w ( z, z ) /d e by the construction of b H . So w ( p i ) = w ( z, z ) = O ( | b p i | d ) again. Also, by construction of b H (see Definition II.3.3), we have ( z, z ) ∈ E ( G ) and w ( z, z ) ≤ D log n .As we can freely choose the polylogarithmic factor in the definition of str = ˜ O ( Dγh apsp /φ cmg ), wecan choose it so that w ( π ua ) ≤ str10 · D . As both endpoints of π ua are inside X , we have that π ua iscontained inside G [ball G ( X, str10 · D )]. From the analysis of the two cases above, we also have thatevery edge in π ua has weight at most 32 D log n .Now, we can bound the length of π ( u, v ) and, hence, bounding the stretch of RobustCore π (as required by Definition III.2.2). Lemma III.4.7. For each pair u, v ∈ K , the path π ( u, v ) defined by Algorithm 8 has length atmost str · D .Proof. We only need bound the length of each path in { π u , π ua , π a , { ( a, b ) } , π b , π bv , π v } . We have π u , π v have length at most 1 . · D ≤ D because u, v ∈ K ⊆ ApxBall ( G, X, D, . 1) by LineLine 11 of Algorithm 3. Next, by definition of e min , π a , π b have length at most 1 . · str10 · D and w ( a, b ) ≤ D log n . Lastly, π ua and π bv have length at most str10 · D by Proposition III.4.6. In total, w ( π ( u, v )) ≤ . · str10 + str10 ) · D + 32 D log n ≤ str · D .Next, we bound the simpleness of π ( u, v ). Lemma III.4.8. For each pair u, v ∈ K , the path π ( u, v ) defined by Algorithm 8 is h apsp ∆ β -simple.Proof. The main task is to prove that π ua is 3 h apsp ∆ β -simple (and the argument for π bv is analo-gous). Given this fact, as π u , π a , π b , π v are β -simple by ApxBall π ( G, X, str · D, . 
, β ) and { ( a, b ) } is trivially 1-simple, the simpleness of π ( u, v ) = ( π u , π ua , π a , { ( a, b ) } , π b , π bv , π v ) can be at most6 h apsp ∆ β + 4 β + 1 ≤ h apsp ∆ β .Now, we show that π ua is 3 h apsp ∆ β -simple. For each subpath p j,k of π ua from Equation (III.3),note that p j,k is a β -simple path in G because we have either p j,k = π H ( z, z ) is β -simple bysimpleness guarantee of H or p j,k = { ( z, z ) } where ( z, z ) ∈ E ( G ) is trivially 1-simple. The keyclaim is that, for any vertex x ∈ V ( G ) and index j , the number of subpaths from { p j,k } k that x can participate is at most ∆ + 2 (i.e. |{ k | x ∈ p j,k }| ≤ ∆ + 2). As | π Wua | ≤ h apsp by Lemma III.4.2,this would imply that π ua has simpleness at most h apsp (∆ + 2) β ≤ h apsp ∆ β as desired. We finishby proving the claim: Claim III.4.9. For any vertex x ∈ V ( G ) and index j , |{ k | x ∈ p j,k }| ≤ ∆ + 2 . roof. From the assumption of Theorem III.4.1, there are two cases: either H = G unit defined inProposition II.2.12 or H is defined from a path-reporting covering C via Proposition III.2.9. Inboth cases, we will use the fact that P e j is a simple path in b H guaranteed by Lemma II.3.5.Suppose that H = G unit ⊆ G . We claim that ( p j, , . . . , p j,t j ) is a simple path in G and so |{ k | x ∈ p j,k }| ≤ 2. The claim holds because, for each subpath b p j,k of P e j , if b p j,k = ( z, z ) is of typehyper-edge, then p j,k = b p j,k = ( z, z ) ∈ E ( G ), and if b p j,k = ( z, . . . , z ) is of type heavy-path, then p j,k = ( z, z ) ∈ E ( G ). As P e j = ( b p j, , . . . , b p j,t j ) is simple, the path ( p j, , . . . , p j,t j ) must be simple aswell.Next, suppose that H is defined from a path-reporting covering C via Proposition III.2.9. Wefirst argue that |{ k | x ∈ p j,k and p j,k is of type heavy-path }| ≤ 2. To see this, observe thatall type-heavy-path p j,k form a collection of disjoint simple paths in G , which is a subgraph of G with degree at most 2. This is because each heavy path b p j,k = ( z, . . . , z ) in b H correspondsto p j,k = ( z, z ) in G but P e j = ( b p j, , . . . , b p j,t j ) is simple. So x can appear in at most 2 type-heavy-path paths p j,k . It remains to show that |{ k | x ∈ p j,k and p j,k is of type hyper-edge }| ≤ ∆. As P e j is a simple path in b H , each type-hyper-edge b p j,k must correspond to a unique core C j,k from the covering C . Suppose that x ∈ p j,k = π H ( z, z ). By Proposition III.2.9, we have π H ( z, z ) = π C ( z, C j,k ) ◦ π C ( z C , z C ) ◦ π C ( C j,k , z ) where z C , z C ∈ C j,k , π C ( z, C j,k ) = ( z, . . . , z C ) and π C ( C j,k , z ) = ( z C , . . . , z ) are implicitly maintained by ApxBall π that maintains shell ( C j,k ) and π C ( z C , z C ) is implicitly maintained by RobustCore π that maintains the core C j,k in the covering C . By Claim III.3.6 (with different notations), we have that x ∈ shell ( C j,k ). Therefore, the outer-shell participation bound ∆ of C implies that x can appear in at most ∆ type-hyper-edge paths p j,k as desired.Lemma III.4.7 and Lemma III.4.8 together imply that π ( u, v ) indeed satisfies all conditionsrequired by RobustCore π ( G, K init , D, h apsp ∆ β ) with stretch str as required by Definition III.2.2. III.4.3 Threshold-Subpath Queries In this section, we describe in Algorithm 9 below how to process the threshold-subpath query that,given a pair of vertices u, v ∈ K and a steadiness index j , return σ ≤ j ( π ( u, v )) consisting of all edgesof π ( u, v ) with steadiness at most j . Lemma III.4.10. 
Given u, v ∈ K and a steadiness index j , Algorithm 9 returns σ ≤ j ( π ( u, v )) where π ( u, v ) is defined in Algorithm 8.Proof. Observe that all steps in Algorithm 9 are completely analogous to the steps in Algorithm 8except that we collect only edges with steadiness at most j into the answer and we add Line 4 forefficiency. Thus, we indeed have σ ≤ j ( π ( u, v )) = σ ≤ j ( π u ) ∪ σ ≤ j ( π ua ) ∪ σ ≤ j ( π a ) ∪ { ( a, b ) } ∪ σ ≤ j ( π b ) ∪ σ ≤ j ( π bv ) ∪ σ ≤ j ( π v ) and the answer is correct if Algorithm 9 returns at Line 14. Next, recall that e min is defined as the edge with minimum steadiness among all edges in G [ B π ] with weight atmost 32 D log n . As ball G ( X, str10 · D ) ⊆ B π , this edge set also contains the whole path of π ua and π bv by the “moreover” part of Proposition III.4.6. Thus, if j < σ ( e min ), then σ ≤ j ( π ua ) = ∅ and σ ≤ j ( π bv ) = ∅ . So if Algorithm 9 returns at Line 4, then the answer is correct as well.Recall that ( q φ , q path ) bounds the query-time overhead of both ApxBall π ( G, X, str · D, . , β )and ( d, γ, ∆ , β )-compressed graph H . We will show the query-time overhead for our RobustCore π data structure is (4 q φ , q path + ˜ O ( Dd h apsp /φ cmg ) · q φ ) as required by Theorem III.4.1.70 lgorithm 9: Returning σ ≤ j ( π ( u, v )), given u, v ∈ K and a steadiness index j Let e min = ( a, b ) be the edge with minimum steadiness among all edges in G [ B π ] withweight at most 32 D log n . Set σ ≤ j ( π u ) , σ ≤ j ( π v ) , σ ≤ j ( π a ) , σ ≤ j ( π b ) as σ ≤ j ( π ( X, u )), σ ≤ j ( π ( X, v )), σ ≤ j ( π ( X, a )), σ ≤ j ( π ( X, b )), respectively, by querying ApxBall π ( G, X, str · D, . , β ). Let u , v , a , b ∈ X be such that π u = ( u, . . . , u ), π v = ( v, . . . , v ), π a = ( a, . . . , a ), π b = ( b, . . . , b ). if j < σ ( e min ) then return σ ≤ j ( π ( u, v )) = σ ≤ j ( π u ) ∪ σ ≤ j ( π v ) ∪ σ ≤ j ( π a ) ∪ σ ≤ j ( π b ). Let π Wua be the u - a path in W obtained by querying Prune π ( W multi , φ cmg ). Let b π ua be the u - a path in b H obtained by concatenating, for all embedded edges e ∈ π Wua ,the corresponding path P e in b H . By Definition II.3.3 of b H , we can write b π ua = ( b p , . . . , b p t ) where each b p i is either a heavy-path or b p i = ( z, z ) where z and z areadjacent by a hyperedge in H . for i = 0 up to t do /* (Hyper-edge) */ if b p i = ( z, z ) where z and z are adjacent by a hyperedge in H then σ ≤ j ( p i ) ← σ ≤ j ( π H ( z, z )) by querying the data structure on H . /* (Heavy-path) */ if b p i = ( z, . . . , z ) is a heavy path then σ ≤ j ( p i ) ← σ ≤ j (( z, z )) where ( z, z ) ∈ E ( G ). σ ≤ j ( π ua ) = σ ≤ j ( p ) ∪ · · · ∪ σ ≤ j ( p t ) Let π Wbv , b π bv and σ ≤ j ( π bv ) be analogous to π Wua , b π ua , σ ≤ j ( π bv ), respectively. return σ ≤ j ( π u ) ∪ σ ≤ j ( π ua ) ∪ σ ≤ j ( π a ) ∪ { ( a, b ) } ∪ σ ≤ j ( π b ) ∪ σ ≤ j ( π bv ) ∪ σ ≤ j ( π v ).71 emma III.4.11. Given u, v ∈ K and a steadiness index j , Algorithm 9 takes q φ time if σ ≤ j ( π ( u, v )) = ∅ . Otherwise, it takes at most | σ ≤ j ( π ( u, v )) | · ( q path + ˜ O ( Dd h apsp /φ cmg ) · q φ ) time.Proof. If σ ≤ j ( π ( u, v )) = ∅ , then Algorithm 9 must return at Line 4 (otherwise e min = ( a, b ) ∈ σ ≤ j ( π ( u, v ))). In this case, we just query ApxBall π ( G, X, str · D, . , β ) four times which takes atmost 4 q φ time.Now suppose that σ ≤ j ( π ( u, v )) = ∅ . At Line 1 we make path-query to ApxBall π ( G, X, str · D, . , β ) at most 4 times. At Line 5, it takes O ( h apsp ) time to obtain π Wua . 
Constructing b π ua takes | b π ua | ≤ ˜ O ( Dd h apsp /φ cmg ) time by Equation (III.4). At Line 13, the algorithm makes at most | b π ua | queries to ( d, γ, ∆ , β )-compressed graph H (for the hyper-edge case) and spends additional O ( | b π ua | )time (for the heavy-path case) to obtain σ ≤ j ( π ua ). We do the same to obtain π Wbv , b π bv and σ ≤ j ( π bv ).In total the running time is at most | σ ≤ j ( π ( u, v )) | · q path + (4 + | b π ua | ) q φ + O ( h apsp + | b π ua | ) ≤ | σ ≤ j ( π ( u, v )) | · ( q path + ˜ O ( Dd h apsp /φ cmg ) · q φ )where the first term is the total query time to both ApxBall π ( G, X, str · D, . , β ) and H whenthey return non-empty subpaths, the second term is the total query time that ApxBall π ( G, X, str · D, . , β ) and H when they return an empty set. The inequality holds is because | σ ≤ j ( π ( u, v )) | ≥ III.5 Putting Path-reporting Components Together In this section, we show how to recursively combine all path-reporting data structures including ApxBall π (Definition III.2.1), RobustCore π (Definition III.2.2), and path-reporting Covering(Definition III.2.4) to obtain the desired decremental path-reporting SSSP π data structure. Thegoal of this section is to prove the following theorem. Theorem III.5.1. For any n and (cid:15) ∈ ( φ cmg , / , let G = ( V, E ) be a decremental bounded-degree graph with n vertices, edge weights from { , , . . . , W = n } , and edge steadiness from { , . . . , σ max } where σ max = o (log / n ) . Let S ⊆ V be any decremental set. We can implement ApxBall π ( G, S, (cid:15), b O (1)) that has b O ( n ) total update time and query-time overhead of ( O (log n ) , b O (1)) . As SSSP π is a special case of ApxBall π when the source set S = { s } , by applying the reductionfrom Proposition III.1.1, we immediately obtain Theorem III.0.2, the main result of this part ofthe paper. It remains to prove Theorem III.5.1.Define G j = σ ≥ j ( G ) for each j ∈ { , . . . , σ max } . Note that G = G and G σ max +1 = ∅ . Thereare distance scales D ≤ D ≤ · · · ≤ D ds where D i = ( nW ) i/ ds and ds = c ds lg lg lg n for somesmall constant c ds > 0. We will implement our data structures for ds many levels. Recall that φ cmg = 1 / Θ(log / n ) = b Ω(1) and h apsp = 2 Θ(log / n ) = b O (1). For 0 ≤ i ≤ ds and 0 ≤ j ≤ σ max , weset k i = (lg lg n ) i γ i = h k i +1 apsp and γ − = 1 (cid:15) i,j = (cid:15)/ (600 ds − i · j ) and (cid:15) i,σ max +1 = 0str i = γ i − · h apsp log c str n/φ cmg ∆ i = Θ( k i n /k i /φ cmg ) β i = β i − · h apsp ∆ i − and β = 7 h apsp c str to be a large enough constant. We also define parameters related to query-timeoverhead, for 0 ≤ i ≤ ds and 0 ≤ j ≤ σ max , as follows: q ( i,j ) φ = c q ( σ max − j + 1) log nQ ( i,j ) φ = c q ( σ max − j + 1)12 i log n overhead path = n / ds · ds · c q · log / n q ( i,j )path = (2 i + 1) · ( σ max − j + 1) · overhead path Q ( i,j )path = 2( i + 1) · ( σ max − j + 1) · overhead path where c q is a large enough constant. The parameters are defined in such that way that n / ds , D i D i − , n /k i , γ i , /(cid:15) i,j , str i , ∆ i , β i , q ( i,j ) φ , Q ( i,j ) φ , q ( i,j )path , Q ( i,j )path = b O (1)for all 0 ≤ i ≤ ds and 0 ≤ j ≤ σ max . However, we will need a more fine-grained property of themas described below. Proposition III.5.2. For large enough n and for all ≤ i ≤ ds and ≤ j ≤ σ max , we have that 1. lg lg n ≤ k i ≤ (lg / n ),2. 
φ cmg ≤ (cid:15) i,j ≤ (cid:15) and (cid:15) i,j = 300 (cid:15) i − ,j + (cid:15) i,j +1 (in particular, (cid:15) i,j ≥ (cid:15) i − ,j , (cid:15) i,j +1 and (cid:15) i,j ≥ (cid:15) i,σ max ≥ (cid:15) ,σ max ),3. γ i , str i = 2 O (lg / n ) ,4. D i /D i − ≤ n / ds ,5. γ i ≤ D i /D i − ,6. γ i ≥ ( γ i − h apsp ) k i ≥ ( str i (cid:15) i,σ max ) k i , and7. β i = 2 O ((log / n )(log log log n )) · n O (1 / lg lg n ) . Proof. (1): We have k i = (lg lg n ) i ≤ (lg lg n ) ds ≤ (lg lg n ) (lg lg n ) / ≤ lg / n as ds = c ds lg lg lg n and c ds is a small enough constant.(2): It is clear that (cid:15) i,j ≤ (cid:15) . For the other direction, note that in the assumption of Theo-rem II.6.1, we have (cid:15) ≥ φ cmg . So (cid:15) i ≥ (cid:15)/ (600 ds · σ max ) ≥ φ cmg / Θ(lg lg lg n + σ max ) ≥ φ cmg because φ cmg = 1 / Θ(lg / n ) and σ max = o (log / n ). Next, we have 300 (cid:15) i − ,j + (cid:15) i,j +1 = (cid:15) ( ds − ( i − · j + ds − i · j +1 ) = (cid:15) ( / ds − i · j + / ds − i · j ) = (cid:15) ( ds − i · j ) = (cid:15) i,j .(3): As h apsp = 2 Θ(lg / n ) and γ i = h k i +1 apsp , we have from Item 1 of this proposition that γ i = 2 O (lg / / n ) = 2 O (log / n ) . Also, as str i = γ i − · h apsp log c str n/φ cmg , we have str i = 2 O (log / n ) too.(4): We have D i /D i − = ( nW ) / ds . Since W = n , we have D i /D i − ≤ n / ds .(5): As D i /D i − ≥ n / ds ≥ Θ(lg n/ lg lg lg n ) , by Item 3 we have γ i ≤ ( D i /D i − ) when n is largeenough.(6): We have γ i ≥ ( γ i − h apsp ) k i because γ i = h apsp k i +1 ≥ h k i + k i ) apsp = ( h k i apsp · h apsp ) k i = ( γ i − h apsp ) k i where the inequality holds is because k i +1 = (lg lg n ) i +1 = (lg lg n ) i · × (lg lg n ) i ≥ (lg lg n ) i · +(lg lg n ) i = k i + k i for all i ≥ 0. For the second inequality, we havestr i (cid:15) i,σ max = γ i − h apsp log c str n/φ cmg (cid:15) i,σ max ≤ γ i − h apsp φ cmg ≤ γ i − h apsp c str n ) ≤ /φ cmg , (cid:15) i,σ max ≥ φ cmg by Item 2, and h apsp ≥ poly(1 /φ cmg ) when n is largeenough. Therefore, γ i ≥ ( γ i − h apsp ) k i ≥ ( str i (cid:15) i,σ max ) k i .(7): We have β i ≤ β ds = Q ds i =0 ˜ O ( h apsp n /k i /φ cmg ) = 2 O ((log / n )(log log log n )) Q ds i =0 n /k i by defini-tion of h apsp , φ cmg and ds . As P dist i =0 / (lg lg n ) i = O (1 / lg lg n ), we have Q ds i =0 n /k i = n O (1 / lg lg n ) .Therefore, β i = 2 O ((log / n )(log log log n )) · n O (1 / lg lg n ) = b O (1).As the path-reporting ApxBall π will call the distance-only ApxBall as a black-box, we will needthe following bound. Proposition III.5.3. For any d ≤ nW and (cid:15) where (cid:15) ≥ (cid:15) ,σ max , we have T ApxBall ( G, S, d , (cid:15) ) =˜ O ( | ball G ( S, d ) | n /k / ds ds φ cmg (cid:15) ,σ max ) .Proof. This follows from Theorem II.6.1 when we set the accuracy parameter (cid:15) ← (cid:15) ,σ max (weuse (cid:15) instead of (cid:15) to avoid confusion). Note that (cid:15) ,σ max ≥ φ cmg satisfying Theorem II.6.1.In the proof of Theorem II.6.1, there are parameters (cid:15) ds = (cid:15) = (cid:15) ,σ max and (cid:15) = (cid:15) / ds = (cid:15) ,σ max / ds . From Item 1 when i = ds , as d ≤ nW ≤ d ds , we have T ApxBall ( G, S, d , (cid:15) ds ) ≤ ˜ O ( | ball G ( S, d ) | n /k / ds φ cmg ( (cid:15) ) ) = ˜ O ( | ball G ( S, d ) | n /k / ds ds φ cmg (cid:15) ,σ max ).Now, we are ready to state the key inductive lemma that combines everything together. Lemma III.5.4. For every ≤ i ≤ ds and ≤ j ≤ σ max , we can maintain the following datastructures:1. 
ApxBall π ( G j , S, d , (cid:15) i,j , β i ) for any d ≤ d i,j (cid:44) str i · D i +1 /(cid:15) i,j using total update time of (cid:12)(cid:12)(cid:12) ball G j ( S, d ) (cid:12)(cid:12)(cid:12) poly (cid:16) n /k +1 / ds ds + σ max +log / n (cid:17) = b O ( | ball G j ( S, d ) | ) with query-time overhead at most ( q ( i,j ) φ , q ( i,j )path ) .2. RobustCore π ( G j , K init , d , β i ) for any d ≤ D i +1 using total update time of (cid:12)(cid:12)(cid:12) ball G j ( K init , str i d ) (cid:12)(cid:12)(cid:12) poly (cid:16) n /k +1 / ds ds + σ max +log / n (cid:17) = b O ( | ball G j ( K init , d log n ) | ) with scattering parameter δ scatter = ˜Ω( φ cmg ) , stretch at most str i , and query-time overhead atmost ( Q ( i,j ) φ , Q ( i,j )path ) .3. ( D i , k i , (cid:15) i,j , str i , ∆ i , β i ) -covering of G j using total update time of ˜ O ( n · poly (cid:16) n /k +1 / ds ds + σ max +log / n (cid:17) ) = b O ( n ) with query-time overhead at most ( Q ( i,j ) φ , Q ( i,j )path ) .For all i > , we assume by induction that a ( D i − , k i − , (cid:15) i − ,j , str i − , ∆ i − ) -covering of G j isalready explicitly maintained for every ≤ j ≤ σ max . The rest of the section is for proving Lemma III.5.4. Before proving Lemma III.5.4, we provethe main theorem (Theorem III.5.1) using it. Proof of Theorem III.5.1. We apply Lemma III.5.4 for i = ds and j = 0. Recall that G = G j , (cid:15) = (cid:15) ds , , β ds = b O (1) by Proposition III.5.2(7), q ( ds , φ = c q σ max log n = O (log n ) and q ( i,j )path =(2 ds +1) · σ max · overhead path = b O (1). Therefore, Lemma III.5.4 gives us ApxBall π ( G, S, d , (cid:15), b O (1))data structure with b O ( | ball G ( S, d ) | ) = b O ( n ) total update time and ( O (log n ) , b O (1)) query-timeoverhead as desired. 74 II.5.1 Bounds for ApxBall π The proof is by induction on i (starting from 0 to ds ) and then on j (starting from σ max to 0). Wewill show that T ApxBall π ( G j , S, d , (cid:15) i,j , β i ) = (4 i ( σ max − j ) ) · (cid:12)(cid:12)(cid:12) ball G j ( S, d ) (cid:12)(cid:12)(cid:12) · n /k +12 / ds ds c (log / n ) for any d ≤ d i,j where c is some large enough constant, which implies the claimed bound of (cid:12)(cid:12)(cid:12) ball G j ( S, d ) (cid:12)(cid:12)(cid:12) poly (cid:16) n /k +1 / ds ds + σ max +log / n (cid:17) . Base Cases ( i = 0 ). For i = 0 and any j ∈ [0 , σ max ], the path-reporting ES-tree from Proposi-tion III.2.11 has total update time at most T ApxBall π ( G j , S, d , (cid:15) ,j , β ) ≤ O ( (cid:12)(cid:12)(cid:12) ball G j ( S, d ) (cid:12)(cid:12)(cid:12) d log n ) ≤ (cid:12)(cid:12)(cid:12) ball G j ( S, d ) (cid:12)(cid:12)(cid:12) · O ( D str log n(cid:15) ,σ max ) ≤ (cid:12)(cid:12)(cid:12) ball G j ( S, d ) (cid:12)(cid:12)(cid:12) · (4 i ( σ max − j ) ) · n /k +12 / ds ds c (log / n ) . and query-time overhead of ( O (log n ) , O ( d log n )) ≤ ( q ( i,j ) φ , q ( i,j )path ) . The Inductive Step. Below, we assume that i > j < σ max . (The proof for another basecase when j = σ max is exactly the same as below but simpler, because we can ignore all termsrelated to G j +1 as G σ max +1 = ∅ .) 
We assume d > d i − ,j (cid:44) str i − D i /(cid:15) i − ,j otherwise we are doneby induction hypothesis.Total Update Time: As path-reporting ( D i − , k i − , (cid:15) i − ,j , str i − , ∆ i − , β i − )-covering of G j is al-ready explicitly maintained, we can implement ApxBall π ( G j , S, d , (cid:15) i,j , β i ) where (cid:15) i,j = 300 (cid:15) i − ,j + (cid:15) i,j +1 and β i ≥ i − β i − via Theorem III.3.1 using total update time of T ApxBall π ( G j , S, d , (cid:15) i,j , β i ) ≤ ˜ O ( (cid:12)(cid:12)(cid:12) ball G j ( S, d ) (cid:12)(cid:12)(cid:12) ∆ i − (str i · D i +1 /(cid:15) i,j ) (cid:15) i − ,j D i − ) + T ApxBall π ( G j , S, 2( str i − (cid:15) i − ,j ) k i − D i − , (cid:15) i − ,j , β i − )+ T ApxBall π ( G j +1 , S, d , (cid:15) i,j +1 , β i ) + T ApxBall ( G j +1 , S, d , (cid:15) i − ,j ) . We will prove that 2( str i − (cid:15) i − ,j ) k i − D i − , (cid:15) i − ,j ≤ d i − ,j so that we can apply induction hypothe-sis on T ApxBall π ( G j , S, str i − (cid:15) i − ,j ) k i − D i − , (cid:15) i − ,j , β i − ). To see this, note that D i ≥ γ i D i − ≥ ( str i (cid:15) i,σ max ) k i D i − by Proposition III.5.2(5, 6). So d i − ,j = str i − (cid:15) i − ,j D i ≥ str i − (cid:15) i − ,j · ( str i (cid:15) i,σ max ) k i D i − ≥ 2( str i − (cid:15) i − ,j ) k i − D i − where the last inequality is because str i − (cid:15) i − ,j ≥ k i ≥ k i − and str i (cid:15) i,σ max ≥ str i (cid:15) i,j ≥ str i − (cid:15) i − ,j (because str i str i − ≥ 600 = (cid:15) i,j (cid:15) i − ,j ). Also, to apply induction hypothesis on T ApxBall π ( G j +1 , S, d , (cid:15) i,j +1 , β i ), wenote that d ≤ d i,j ≤ d i,j +1 because (cid:15) i,j ≥ (cid:15) i,j +1 by Proposition III.5.2(2). Therefore, the bound on T ApxBall π ( G j , S, d , (cid:15) i,j , β i ) is at most 75 O ( (cid:12)(cid:12)(cid:12) ball G j ( S, d ) (cid:12)(cid:12)(cid:12) n /k i − +12 / ds c (log / n ) ) + T ApxBall π ( G j , S, d i − i,j , (cid:15) i − ,j , β i − )+ T ApxBall π ( G j +1 , S, d i,j +1 , (cid:15) i,j +1 , β i ) + T ApxBall ( G j +1 , S, d , (cid:15) i − ,j ) . ≤ (cid:12)(cid:12)(cid:12) ball G j ( S, d ) (cid:12)(cid:12)(cid:12) n /k +12 / ds c (log / n ) + (4 i − ( σ max − j ) ) · (cid:12)(cid:12)(cid:12) ball G j ( S, d ) (cid:12)(cid:12)(cid:12) · n /k +12 / ds ds c (log / n ) (4 i ( σ max − ( j +1)) ) · (cid:12)(cid:12)(cid:12) ball G j +1 ( S, d ) (cid:12)(cid:12)(cid:12) · n /k +12 / ds ds c (log / n ) + (cid:12)(cid:12)(cid:12) ball G j +1 ( S, d ) (cid:12)(cid:12)(cid:12) n /k +12 / ds ds c (log / n ) for a constant c ≤ (2 + 2 · (4 i − ( σ max − ( j +1)) + 4 · (4 i − ( σ max − ( j +1)) ) × (cid:12)(cid:12)(cid:12) ball G j ( S, d ) (cid:12)(cid:12)(cid:12) n /k +12 / ds ds c (log / n ) ≤ i ( σ max − j ) (cid:12)(cid:12)(cid:12) ball G j ( S, d ) (cid:12)(cid:12)(cid:12) n /k +12 / ds ds c (log / n ) where the first inequality is by induction hypothesis and by Proposition III.5.3, and the secondinequality is because G j +1 ⊆ G j and c ≥ c as c is chosen to be large enough. This completes theinductive step for update time.Query-time Overhead: Since 2( str i − (cid:15) i − ,j ) k i − D i − ≤ d i − ,j and by induction hypothesis, we have ApxBall π ( G j , S, str i − (cid:15) i − ,j ) k i − D i − , (cid:15) i − ,j , β i − ) has query-time overhead at most ( q ( i − ,j ) φ , q ( i − ,j )path ) ≤ ( Q ( i − ,j ) φ , Q ( i − ,j )path ). 
Also, the path-reporting ( D i − , k i − , (cid:15) i − ,j , str i − , ∆ i − , β i − )-covering of G j hasquery-time overhead at most ( Q ( i − ,j ) φ , Q ( i − ,j )path ) by induction hypothesis. Lastly, the query-timeoverhead of ApxBall π ( G j +1 , S, d , (cid:15) i,j +1 , β i ) is at most ( q ( i,j +1) φ , q ( i,j +1)path ) because d ≤ d i,j ≤ d i,j +1 .Therefore, by Theorem III.3.1, the query-time overhead of ApxBall π ( G j , S, d , (cid:15) i,j , β i ) is at most( q ( i,j +1) φ + O (1) , max { q ( i,j +1)path + O (1) , Q ( i − ,j )path + O ( d (cid:15) i,j D i − ) · Q ( i − ,j ) φ } ) ≤ ( q ( i,j ) φ , q ( i,j )path ) . To see why the inequalities hold, we assume that c q is a large enough constant. So, we have q ( i,j +1) φ + O (1) ≤ c q ( σ max − j ) log n + c q ≤ q ( i,j ) φ . Also, q ( i,j +1)path + O (1) ≤ (2 i +1) · ( σ max − j ) · overhead path + c q ≤ (2 i +1) · ( σ max − j +1) · overhead path = q ( i,j )path . Finally, Q ( i − ,j )path + O ( d (cid:15) i,j D i − ) · Q ( i − ,j ) φ ≤ i · ( σ max − j + 1) · overhead path + O ( n / ds str i (cid:15) ,σ max ) · ( c q σ max ds log n ) ≤ i · ( σ max − j + 1) · overhead path + overhead path = (2 i + 1) · ( σ max − j + 1) · overhead path = q ( i,j )path where overhead path = n / ds · ds · c q · log / n . III.5.2 Bounds for RobustCore π The proof is by induction on i (starting from 0 to ds ) and we can fix any j . Base Cases ( i = 0 ). For i = 0 and any j ∈ [0 , σ max ], we have that a path-reporting (1 , , O (1) , G j can be trivially maintained by Proposition III.2.10. By Theorem III.4.1and since β ≥ h apsp (by definition of β ), we can implement RobustCore π ( G j , K init , d , β i ) with76cattering parameter δ scatter = ˜Ω( φ cmg ) and stretch at most ˜ O ( γh apsp /φ cmg ) ≤ str (by definitionof str ) with total update time˜ O (cid:16) T ApxBall π ( G j , K init , str d , . , β )( D ) h apsp /φ cmg (cid:17) = (cid:12)(cid:12)(cid:12) ball G j ( K init , str d ) (cid:12)(cid:12)(cid:12) poly (cid:16) n /k +1 / ds log / n (cid:17) by the ES-tree from Proposition III.2.11. As the query-time overhead of the (1 , , O (1) , , 1) by Proposition III.2.10 and, by Proposition III.2.11, the query-time overhead of ApxBall π ( G j , S, d , . , β i ) is at most ( O (log n ) , O ( d log n )). The query-time overhead of RobustCore π ( G j , K init , d , β i ) is at most ( O (log n ) , ˜ O ( n /k h apsp /φ cmg )) ≤ ( Q ( i,j ) φ , Q ( i,j )path ). The Inductive Step. Total Update Time: For i > j ∈ [0 , σ max ], given thata path-reporting ( D i − , k i − , (cid:15) i − ,j , str i − , ∆ i − , β i − )-covering of G j is explicitly maintained, byProposition III.2.9, we can automatically maintain a ( D i − , γ i − , ∆ i − , β i − )-compressed graphwhere γ i − ≥ (str i − /(cid:15) i − ,j ) k i − by Proposition II.6.2(6) and because (cid:15) ,σ max ≤ (cid:15) i − ,j . By Theo-rem III.4.1 and since β i ≥ h apsp ∆ i − · (3 β i − ), we can maintain RobustCore π ( G j , K init , d , β i )with δ scatter = ˜Ω( φ cmg ) and ˜ O ( γ i − h apsp /φ cmg ) ≤ str i (by definition of str i ) with total update time˜ O (cid:16) T ApxBall π ( G j , K init , str i d , . i − ( D i +1 /D i − ) h apsp /φ cmg (cid:17) = (cid:12)(cid:12)(cid:12) ball G j ( K init , str i d ) (cid:12)(cid:12)(cid:12) poly (cid:16) n /k +1 / ds ds + σ max +log / n (cid:17) by Item 1 of Lemma III.5.4.Query-time Overhead: By Item 1 of Lemma III.5.4, the query-time overhead of ApxBall π ( G j , S, str i d , . , β i )is at most ( q ( i,j ) φ , q ( i,j )path ). 
By induction hypothesis, the path-reporting ( D i − , k i − , (cid:15) i − ,j , str i − , ∆ i − , β i − )-covering has query-time overhead of ( Q ( i − ,j ) φ , Q ( i − ,j )path ) and so the query-time overhead of the( D i − , γ i − , ∆ i − , β i − )-compressed graph is at most (3 Q ( i − ,j ) φ , Q ( i − ,j )path + 2 Q ( i − ,j ) φ ) by Propo-sition III.2.9. Let Q φ = max { q ( i,j ) φ , Q ( i − ,j ) φ } and Q path = max { q ( i,j )path , Q ( i − ,j )path + 2 Q ( i − ,j ) φ } . ByTheorem III.4.1, we have that the query-time overhead of RobustCore π ( G j , K init , d , β i ) is atmost (4 Q φ , Q path + ˜ O ( d D i − h apsp /φ cmg ) · Q φ ) ≤ ( Q ( i,j ) φ , Q ( i,j )path ) . To see why the inequalities holds, we first note that Q φ = 3 Q ( i − ,j ) φ because q ( i,j ) φ = c q ( σ max − j +1) log n ≤ c q ( σ max − j + 1)12 i − log n = Q ( i − ,j ) φ . So, we have4 Q φ = 4 · max { q ( i,j ) φ , Q ( i − ,j ) φ } = 4 · Q ( i − ,j ) φ = Q ( i,j ) φ . Also, we have Q path + ˜ O ( d D i − h apsp /φ cmg ) · Q φ ≤ max { q ( i,j )path , Q ( i − ,j )path + 2 Q ( i − ,j ) φ } + ˜ O ( n / ds h apsp φ cmg ) · Q ( i − ,j ) φ ≤ max { q ( i,j )path + overhead path , Q ( i − ,j )path + overhead path }≤ i + 1) · ( σ max − j + 1) · overhead path = Q ( i,j )path where overhead path = n / ds · ds · c q · log / n . 77 II.5.3 Bounds for Path-reporting Covering Recall that the algorithm from Theorem III.2.8 for maintaining a path-reporting ( D i , k i , (cid:15) i,j , str i , ∆ i , β i )-covering of G j assumes, for all D i ≤ d ≤ D i ( str i (cid:15) i,j ) k , RobustCore π and ApxBall π data structureswith input distance parameter d . By Item 1 and Item 2 of Lemma III.5.4, we can indeed implementthese data structures for any distance parameter d ≤ D i +1 . Since D i ( str i (cid:15) i,j ) k i ≤ D i γ i ≤ D i +1 byProposition II.6.2(5,6), the assumption is indeed satisfied by Item 1 and Item 2 of Lemma III.5.4.So, using Theorem III.2.8, we can maintain a path-reporting ( D i , k i , (cid:15) i,j , str i , ∆ i , β i )-covering of G j with ∆ i = Θ( k i n /k i /δ scatter ) in total update time of O ( k i n /k i log n/δ scatter + X C ∈C ALL T RobustCore π ( G ( t C ) j , C ( t C ) , d ‘ core ( C ) , β i )+ T ApxBall π ( G ( t C ) j , C ( t C ) , str i (cid:15) i,j d ‘ core ( C ) , (cid:15) i,j , β i ))where C ALL contains all cores that have ever been initialized and, for each C ∈ C ALL , t C is the time C is initialized. By plugging in the total update time of ApxBall π from Item 1 and RobustCore π from Item 2, the total update time for maintaining the covering is˜ O ( n /k i δ scatter + X C ∈C ALL (cid:12)(cid:12)(cid:12)(cid:12) ball G ( tC ) j ( C ( t C ) , str i d ‘ core ( C ) ) (cid:12)(cid:12)(cid:12)(cid:12) poly (cid:16) n /k +1 / ds ds + σ max +log / n (cid:17) + (cid:12)(cid:12)(cid:12)(cid:12)(cid:12) ball G ( tC ) j ( C ( t C ) , str i (cid:15) i,j d ‘ core ( C ) ) (cid:12)(cid:12)(cid:12)(cid:12)(cid:12) poly (cid:16) n /k +1 / ds ds + σ max +log / n (cid:17) ) . As it is guaranteed by Theorem III.2.8 that X C ∈C ALL | ball G ( tC ) j ( C ( t C ) , str i (cid:15) i,j d ‘ core ( C ) ) | ≤ O ( k i n /k i /δ scatter ) , and therefore the above expression simplifies to ˜ O (cid:16) n · poly (cid:16) n /k +1 / ds ds + σ max +log / n (cid:17)(cid:17) . As thequery-time overhead of all invoked instances of RobustCore π and ApxBall π is at most ( Q ( i,jφ , Q ( i,j )path )by Item 1 and Item 2 of Lemma III.5.4, the query-time overhead of the covering C is at most( Q ( i,jφ , Q ( i,j )path ) by definition. 
Part IV: Approximate Min-Cost Flow

In this part of the paper, we are concerned with the problem of maximum bounded cost flow (MBCF) and the min-cost flow problem. In both problems, the input is a graph $G = (V, E, c, u)$ where $c$ is the cost function and $u$ the capacity function, both taken over edges and vertices, along with a source vertex $s$ and a sink vertex $t$. In MBCF, the algorithm is further given a cost budget $C$. The MBCF problem is then to find the maximum feasible flow with regard to capacities and cost budget $C$, i.e. a flow of cost at most $C$ where no edge or vertex carries more flow than stipulated by the capacity function (for precise definitions of these properties, we refer the reader to the additional preliminary Section IV.1).

The main result of this part is our main theorem on flow.

Theorem I.1.2 (Approximate Mixed-Capacitated Min-Cost Flow). For any $\epsilon > 1/\operatorname{polylog}(n)$, consider an undirected graph $G = (V, E, c, u)$, where the cost function $c$ and the capacity function $u$ map each edge and vertex to a non-negative real. Let $s, t \in V$ be source and sink vertices. Then, there is an algorithm that in $m^{1+o(1)} \log\log C$ time returns a feasible flow $f$ that sends a $(1-\epsilon)$-fraction of the max flow value from $s$ to $t$ with cost at most equal to the min-cost flow. The algorithm runs correctly with high probability.

Since we can derive a $(1-\epsilon)$-approximate min-cost flow algorithm from an algorithm for MBCF by trying $\tilde{O}(\log\log C)$ cost budget values (by performing binary search over every power of $(1+\epsilon)$ smaller than $C$), we will focus for the rest of this part on the problem of MBCF and only return to min-cost flow in the final discussion.
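To make the budget search just described concrete, the following is a minimal Python sketch, assuming a hypothetical black box `mbcf_value(B)` that returns the (approximate) maximum $s$-$t$ flow value under cost budget $B$ and that this value is (near-)monotone in $B$; the function name and the exact search bounds are illustrative, not the paper's.

```python
import math

# Illustrative sketch of the budget search over powers of (1+eps) below C.
# Binary search over the exponent uses only O(log log C + log(1/eps)) calls
# to the hypothetical MBCF routine mbcf_value(B).

def smallest_sufficient_budget(mbcf_value, target_value, C, eps):
    k_max = int(math.log(C, 1.0 + eps))  # candidate budgets (1+eps)^0 .. (1+eps)^k_max
    lo, hi = 0, k_max
    while lo < hi:
        mid = (lo + hi) // 2
        if mbcf_value((1.0 + eps) ** mid) >= target_value:
            hi = mid            # budget (1+eps)^mid already suffices
        else:
            lo = mid + 1        # need a larger budget
    return (1.0 + eps) ** lo
```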
We now state our final result for the MBCF problem.

Theorem IV.0.1. For any $\epsilon > 1/\operatorname{polylog}(n)$, given an undirected graph $G = (V, E, c, u)$, a source vertex $s$, a sink vertex $t$, and a cost budget $C$, let $OPT_{G,C}$ be the maximum value of any feasible $s$-$t$ flow of cost at most $C$. Then, there exists an algorithm that returns a feasible flow $f$ of value at least $(1-\epsilon) OPT_{G,C}$. The algorithm can compute $f$ in time $m^{1+o(1)}$ and runs correctly with probability at least $1 - n^{-c}$ for some constant $c$.

We derive the result stated in Theorem IV.0.1 by a series of reductions. We start this part by stating some additional preliminaries and defining some crucial concepts. In Section IV.2, we then discuss the problem of MBCF in more detail and formally state our reductions, which provide a roadmap for the rest of this chapter.

We recommend the reader to read the overview in Section I.3 before reading the rest of this part, as it contains the high-level intuition for our overall approach.

(We can also route an arbitrary demand vector; see an alternative statement in Appendix A.1.2.)

IV.1 Additional Preliminaries

We sometimes use $\operatorname{Exp}(x)$ in place of $e^x$ to avoid clutter. We use $\lceil x \rceil_\alpha$ to denote $x$ rounded up to the nearest power of $\alpha$.

Flows and Cuts. Throughout this part, let $G = (V, E, c, u)$ be an undirected graph with cost function $c$ and capacity function $u$, and assume that two distinguished vertices $s$ and $t$ are given along with a cost budget $C$. As we will show at the end of this preliminary section, we can assume w.l.o.g. that $c$ and $u$ are only defined over the vertices. We define $C$ and $U$ to be the max-min ratios of the functions $c$ and $u$, respectively.

For convenience, we model $G$ as a graph where all edges are bidirectional: that is, $(x, y) \in E$ iff $(y, x) \in E$ (and we have $c(x, y) = c(y, x)$ and $u(x, y) = u(y, x)$). We say that a vector $f \in \mathbb{R}^E_{\geq 0}$ is a flow if it assigns a flow mass $f(x, y) \geq 0$ to every edge $(x, y) \in E$. Slightly non-standardly, we do not assume skew-symmetry.

Flow Properties. We further define the in-flow and the out-flow at a vertex $x \in V$ by
$$\mathrm{in}_f(x) = \sum_{y \in V} f(y, x) \quad \text{and} \quad \mathrm{out}_f(x) = \sum_{y \in V} f(x, y).$$
Note that flow on the anti-parallel edges $(x, y)$ and $(y, x)$ is not canceled by this definition.

We say that a flow $f$ satisfies flow conservation constraints if for every $x \in V \setminus \{s, t\}$, we have $\mathrm{in}_f(x) = \mathrm{out}_f(x)$. We further say that a flow $f$ satisfies capacity constraints (or is capacity-feasible) if for every $x \in V$, $\mathrm{in}_f(x) \leq u(x)$.

The cost of a flow is defined to be
$$c(f) = \sum_{v \in V} \mathrm{in}_f(v) \cdot c(v),$$
where $\mathrm{in}_f(v) \cdot c(v)$ captures the cost of the flow going through vertex $v$. Observe that in a feasible $s$-$t$ flow, the vertex $s$ on each flow path has no flow going into it, and we therefore do not attribute any cost to $s$. Note also that if the flow $f$ obeys conservation constraints (except at $s$ and $t$), then $\mathrm{in}_f(v) = \mathrm{out}_f(v)$ precisely captures the flow through $v$. We use this definition of the cost even for flows which do not satisfy conservation constraints.

We say that a flow $f$ is cost-feasible if $c(f) \leq C$. We say a flow $f$ is a pseudo-flow if it is capacity- and cost-feasible. We say that $f$ is a feasible flow if it is a pseudo-flow and $f$ satisfies conservation constraints. For a feasible flow $f$, we say that the value of the flow is the amount of flow sent from $s$ to $t$, or more formally $\mathrm{in}_f(t) - \mathrm{out}_f(t)$.

(Near-)Optimality. Given a graph $G$, vertices $s, t$ and a cost budget $C$, we let $OPT_{G,C}$ denote the maximum flow value achieved by any feasible flow. We also define a notion of near-optimal flows.

Definition IV.1.1 (Near-Optimality). For any $\epsilon > 0$, given a graph $G$, source and sink vertices $s, t \in V$ and a cost budget $C$, we say that a flow $f$ is $(1-\epsilon)$-optimal if the flow $f$ is cost-feasible and of value at least $(1-\epsilon) OPT_{G,C}$.
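As a concrete illustration of the definitions above, the following minimal Python sketch computes in-flow, out-flow, cost, and the feasibility notions for a flow given as a dictionary of directed edge values; the representation is an assumption made for illustration only and is not part of any algorithm in this paper.

```python
# Minimal illustration of the flow definitions above. Assumption: the flow is a
# dict f[(x, y)] -> mass >= 0 over directed edges, u and c are dicts of vertex
# capacities and costs, V is the vertex set, and s, t are the distinguished
# vertices (with infinite capacity in the instances considered here).

def in_flow(f, x):
    return sum(val for (_, b), val in f.items() if b == x)

def out_flow(f, x):
    return sum(val for (a, _), val in f.items() if a == x)

def cost(f, c):
    # c(f) = sum_v in_f(v) * c(v); anti-parallel flow is not canceled.
    return sum(in_flow(f, v) * cv for v, cv in c.items())

def is_pseudo_flow(f, u, c, C):
    # capacity-feasible and cost-feasible; conservation is not required.
    return all(in_flow(f, v) <= uv for v, uv in u.items()) and cost(f, c) <= C

def is_feasible_flow(f, u, c, C, V, s, t):
    conservation = all(abs(in_flow(f, v) - out_flow(f, v)) < 1e-9
                       for v in V if v not in (s, t))
    return is_pseudo_flow(f, u, c, C) and conservation

def flow_value(f, t):
    # value of a feasible s-t flow: in_f(t) - out_f(t).
    return in_flow(f, t) - out_flow(f, t)
```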
Reduction to Vertex-Capacities Only. Finally, we formally state a reduction from graphs $G$ with mixed capacities and costs to vertex capacities only. The reduction also enforces some additional desirable properties that we henceforth assume. The proof of Proposition IV.1.2 can be found in Appendix A.4.1.

Proposition IV.1.2. Given $G = (V, E, c, u)$ as defined above with capacities and costs taken over $E \cup V$, a cost budget $C$, $1/n < \epsilon < 1$, and $m \geq n$. Then, there is a graph $G' = (V', E', c', u')$ with source $s'$, sink $t'$ and cost budget $C' = 32m$ such that:
1. $(x, y) \in E'$ iff $(y, x) \in E'$. Further, for each $(x, y) \in E'$, $c'(x, y) = 0$ and $u'(x, y) = \infty$, and
2. $c'(s') = c'(t') = 0$, and
3. $V'$ is of size $n_{G'} \leq m + n + 2$ and $E'$ is of size $m_{G'} \leq m + 4$, and
4. for each vertex $x \in V(G')$, $c'(x) \in [1, m] \cup \{0\}$ and $u'(x) \in [1, m]$, and
5. there is a map $M_{G' \to G}$ that maps any $(1-\epsilon)$-optimal $s'$-$t'$ flow $f'$ in $G'$ to a $(1-\epsilon)$-optimal $s$-$t$ flow $f$ in $G$.
The flow map can be applied in $O(m)$ time and $G'$ can be computed in $O(m \log n)$ time.

Exponential Distribution. We make use of the exponential distribution with parameter $\lambda > 0$, i.e. a random variable $X$ with cumulative distribution function $\Pr[X \leq x] = 1 - e^{-\lambda x}$ for all $x \geq 0$, which we denote by the shorthand $X \sim \operatorname{Exp}(\lambda)$.

A Path-reporting SSSP Structure. Finally, we need a data structure akin to the one defined in Definition III.0.1 and implemented by Theorem III.0.2. Before stating the definition, we start with some preliminaries.

Here, we consider an undirected graph $G = (V, E, w, \sigma)$ that we again model by having an edge $(x, y) \in E$ iff $(y, x) \in E$. For any path $P$ in $G$, we assume that the edges used in $P$ are directed correctly along $P$, i.e. $P$ consists of edges $(v_1, v_2), (v_2, v_3), (v_3, v_4), \ldots$. For each vertex $v$, we have a weight $w(v)$, and we define the weight of a path $P$ in $G$ induced by $w$ by $w(P) = \sum_{(u,v) \in P} w(v)$ (i.e. only the tail vertex of each edge is accounted for).

Each edge $e \in E$ is assigned an integral steadiness $\sigma(e) \in [1, \tau]$, for some parameter $\tau$. For any multi-set $E' \subseteq E$ and index $j$, we let $\sigma_{\leq j}(E') = \{e \in E' \mid \sigma(e) \leq j\}$ contain all edges from $E'$ of steadiness at most $j$. A path $P$ is $\beta$-edge-simple if each edge appears in $P$ at most $\beta$ times. When $P$ is a (non-simple) path, $\sigma_{\leq j}(P)$ is a multi-set containing all occurrences of edges with steadiness at most $j$ in $P$.

Definition IV.1.3 (Path-reporting SSSP). Given a decremental graph $G = (V, E, w, \sigma)$, some $\tau \geq 1$ such that $\sigma(e) \in [1, \tau]$ for each $e \in E$, a simpleness parameter $\beta \geq 1$, a source and a sink vertex $s, t \in V$ with $w(s) = w(t) = 0$, and a distance approximation parameter $\epsilon > 0$. Then, we say that a data structure $SSSP_\pi(G, s, t, \epsilon, \beta)$ is a Path-reporting SSSP Structure if
• $t$ is associated with a $\beta$-edge-simple $s$-$t$ path $\pi(s, t)$ in $G$ of length at most $(1+\epsilon)\operatorname{dist}_G(s, t)$, and
• given a steadiness index $j$, the data structure returns $\sigma_{\leq j}(\pi(s, t))$.

We point out that the associated path $\pi(s, t)$ is fixed after every update to make sure that the path $\pi(s, t)$ does not depend on the steadiness threshold $j$. That is, regardless of which $\sigma_{\leq j}(\pi(s, t))$ is queried, the underlying path $\pi(s, t)$ is always the same. This will be key for the correctness of our flow estimators, as the threshold $j$ will be chosen randomly, and we will then analyze the probability of each edge on $\pi(s, t)$ being in the set $\sigma_{\leq j}(\pi(s, t))$.

For the rest of this chapter, we only refer to a single instance of a data structure as given in Definition IV.1.3. We can thus reserve the variables $\beta$, $SSSP_\pi(G, s, t, \epsilon, \beta)$ and $\tau$ for this specific data structure and denote throughout by $T_{SSSP_\pi}(m, n, W, \tau, \epsilon, \beta, \Delta_1, \Delta_2)$ the total update time of this data structure, where $G$ undergoes $\Delta_1$ edge weight increases and $\Delta_2$ is defined to be the sum of the sizes of all encodings of sets $\sigma_{\leq j}(\pi(s, t))$ that were queried for, plus the number of queries (i.e. $\Delta_2$ is the size of the query output, where we say that a single bit is output if the output set is empty). $W$ denotes the max-min ratio of the vertex weights $w(v)$.

We later show that we can implement $SSSP_\pi(G, s, t, \epsilon, \beta)$ from the result in Theorem III.0.2 rather straightforwardly, but keep the abstraction of Definition IV.1.3 to allow future work to use our reductions.
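For orientation, the operations the flow algorithm will assume from this structure can be summarized by the following illustrative Python interface; the method names and signatures are ours (assumptions for exposition), not the paper's, and Part III describes the actual data structure.

```python
from typing import Protocol, List, Tuple

class PathReportingSSSP(Protocol):
    """Illustrative interface for SSSP_pi(G, s, t, eps, beta) of Definition IV.1.3.

    The structure maintains, as vertex weights increase, an associated
    beta-edge-simple s-t path pi(s, t) of weight at most (1 + eps) * dist_G(s, t).
    """

    def increase_weight(self, v: int, new_weight: float) -> None:
        """Decremental update: raise the weight w(v) of vertex v."""
        ...

    def path_weight_estimate(self) -> float:
        """Weight of the currently associated path pi(s, t)."""
        ...

    def low_steadiness_edges(self, j: int) -> List[Tuple[int, int]]:
        """Return the multi-set sigma_{<=j}(pi(s, t)): every occurrence of an
        edge on pi(s, t) with steadiness at most j. The associated path is
        fixed per update and does not depend on the queried threshold j."""
        ...
```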
IV.2 A Roadmap to the Reductions

Let us now give a brief overview of the reductions we require to obtain our result for the Maximum Bounded Cost Flow (MBCF) problem. We remind the reader that we henceforth assume the various properties of $G$ obtained by the reduction described in Proposition IV.1.2, in particular that $G$ only has vertex capacities/costs. Our goal in this part is to compute the maximum feasible flow from $s$ to $t$, whose flow value we denote by $OPT_{G,C}$. The final result we aim for in our reduction chain is a near-optimal flow; we restate the definition from the preliminaries.

Definition IV.1.1 (Near-Optimality). For any $\epsilon > 0$, given a graph $G$, source and sink vertices $s, t \in V$ and a cost budget $C$, we say that a flow $f$ is $(1-\epsilon)$-optimal if the flow $f$ is cost-feasible and of value at least $(1-\epsilon) OPT_{G,C}$.

While our final goal is to obtain a near-optimal flow, we will require a relaxation of this notion throughout the algorithm to make progress. We therefore introduce the notion of a $(1-\epsilon)$-pseudo-optimal flow. This relaxation allows us to couple a pseudo-flow to a near-optimal flow.

Definition IV.2.1 (Near-Pseudo-Optimality). For any $\epsilon > 0$, given a graph $G$, source and sink vertices $s, t \in V$ and a cost budget $C$, we say that a pseudo-flow $\hat{f}$ is a $(1-\epsilon)$-pseudo-optimal flow if there exists a flow $f$ such that
1. $f$ is a $(1-\epsilon)$-optimal flow (see Definition IV.1.1), and
2. $\forall v \in V: |\mathrm{in}_f(v) - \mathrm{in}_{\hat{f}}(v)| \leq \epsilon \cdot u(v)$.

In Section IV.3 we describe how to compute a $(1-\epsilon)$-pseudo-optimal pseudo-flow $\hat{f}$ using a Path-reporting SSSP data structure as described in Definition IV.1.3. This forms the centerpiece of our reduction. We therefore extend the powerful MWU framework by Garg and Koenemann [GK07] to work with random estimators. While this greatly speeds up the running time of the algorithm, it comes at the cost of only producing a $(1-\epsilon)$-pseudo-optimal flow. The main concern with a $(1-\epsilon)$-pseudo-optimal flow is that after routing $\hat{f}$, each vertex might have some small excess, i.e. the flow conservation constraint might be slightly violated at each vertex.

Ideally, we could use repeated computations of near-pseudo-optimal flows to route the excess, since the excess vector is itself a demand vector that can be modeled as another instance of $s$-$t$ flow. But the coupling guaranteed by Definition IV.2.1 is too weak on its own for this approach to work. We thus need something stronger. Instead of directly making the coupling condition guaranteed by Definition IV.2.1 tighter, we use a different approach: we "fit" the instance $G$ to the flow. Note that the definition below is informal and we need some slightly stronger properties for the actual reduction.

Definition IV.2.2 (Informal). For any $\epsilon > 0$, given $G = (V, E, u, c)$ and a cost budget $C$, we say that a graph $G' = (V, E, u', c)$ is a $(1-\epsilon)$-capacity-fitted instance derived from $G$ if
1. for every $v \in V$, $u'(v) \leq u(v)$, and
2. we have $\sum_{x \in V} u'(x) \cdot c(x) \leq 18 \cdot C$, and
3. $OPT_{G',C} \geq (1-\epsilon) \cdot OPT_{G,C}$.

Loosely speaking, the graph $G'$ in the above definition has the property that the optimal flow is close to saturating most vertices in the graph. More formally, Property 2 says that in $G'$, even if the flow saturated every vertex, the total cost would still be at most $18C$.

We will show that, rather surprisingly, using the intermediate of a capacity-fitted instance yields a black-box conversion from any algorithm for computing a $(1-\epsilon)$-pseudo-optimal flow into an algorithm for computing a $(1-\epsilon)$-optimal flow. In particular, we first show in Section IV.4 that repeated computation of pseudo-optimal flows allows us to compute a capacity-fitted instance $G'$ of $G$.
We then show in Section IV.5 that once we have a capacity-fitted instance, we can convert a near-pseudo-optimal flow into a near-optimal flow by using only a single call to a basic $(1+\epsilon)$-approximate max flow algorithm (only edge capacities, no costs), such as the algorithms in [She13, KLOS14, Pen16]. (We point out that we do not require [She13, KLOS14, Pen16] and could also devise a recursive scheme that invokes our own algorithm again. However, the reduction to approximate max flow with edge capacities is a significantly cleaner approach.)

We summarize this roadmap by restating the reduction chain:

via Def. IV.1.3, Sec. IV.3 ==> $(1-\epsilon)$-pseudo-optimal flow, via Sec. IV.4 ==> $(1-\epsilon)$-capacity-fitted instance, via Approx. Max Flow, Sec. IV.5 ==> $(1-\epsilon)$-optimal flow.

Finally, we point out that while Section IV.3 makes deliberate use of randomness, resulting in a Monte-Carlo algorithm, we will state the remaining reductions in a deterministic fashion. Only at the end, when combining the chain of reductions, do we revisit the issue of success probability.

Combining all the reductions above, we have a reduction from the MBCF problem on any special instance that satisfies the properties of Proposition IV.1.2 to the Path-reporting SSSP data structure from Definition IV.1.3. Since Proposition IV.1.2 then gives a reduction from any instance of MBCF to such a special instance, and we showed in Part III how to construct the desired data structure, we can plug in this data structure to obtain our near-optimal algorithm for mixed-capacitated MBCF. We thus obtain the final min-cost flow algorithms of Theorems I.1.2 and IV.0.1. See Section IV.6 for more details on how all the reductions fit together.

IV.3 Near-pseudo-optimal MBCF via Path-reporting Decremental SSSP

The main result of this section is summarized in the theorem below.

Theorem IV.3.1 (Near-pseudo-optimal MBCF). Given a graph $G = (V, E, c, u)$, a dedicated source $s$ and sink $t$, some cost budget $C$, any $0 < \epsilon \leq 1/2$, a positive integer $\tau = O(\log n)$, and a data structure $SSSP_\pi(G, s, t, \epsilon, \beta)$ from Definition IV.1.3. Then, procedure NearPseudoOptMBCF$(G = (V, E, c, u), s, t, \epsilon, \tau, C)$ given in Algorithm 10 returns $\hat{f}$ such that $\hat{f}_{scaled} = \hat{f} / \big((1+10\epsilon) \log_{1+\epsilon}(\tfrac{1}{\epsilon\delta})\big)$ is a $(1-\Theta(\epsilon))$-pseudo-optimal flow. The algorithm runs in time $\tilde{O}(m\beta \cdot n^{1/\tau}/\epsilon^2) + T_{SSSP_\pi}(m, n, m^{O(1/\epsilon)}, \tau, \epsilon, \beta, \Delta_1, \Delta_2)$ where $\Delta_1, \Delta_2 = \tilde{O}(m\beta \cdot n^{1/\tau}/\epsilon^2)$, and it runs correctly with probability at least $1 - n^{-c}$ for some constant $c$.

We organize this section as follows: we first give some additional level of detail on the MBCF problem by providing an LP and a dual LP for the problem. Building upon this discussion, we then introduce the reader to Algorithm 10 and give an overview of the analysis. This also gives an overview of the rest of the section, which is dedicated to proving Theorem IV.3.1.

IV.3.1 LP formulation of the Vertex-Capacitated MBCF Problem

Let us now describe a linear program (LP) that captures the MBCF problem (here we already assume w.l.o.g. that $G$ is vertex-capacitated and has $s$ and $t$ of infinite capacity and zero cost). The LP is given in Program IV.1, where we denote by $P_{s,t}$ the set of all paths in $G$ from $s$ to $t$ and by $P_{v,s,t}$ the set of all $s$ to $t$ paths that contain the vertex $v \in V$. We remind the reader that we restrict our attention to vertex-capacitated graphs w.l.o.g.
by Proposition IV.1.2.maximize X p ∈ P s,t f p subject to X p ∈ P v,s,t f p ≤ u(v) ∀ v ∈ V \ { s, t } X p ∈ P s,t c ( p ) · f p ≤ Cf p ≥ ∀ p ∈ P s,t (IV.1)Observe that given a feasible solution { f p } to the LP, it is not hard to obtain a feasible flow f ofcost at most C as can be seen by setting f ( e ) = P p ∈ P e,s,t f p (the converse is true as well as can beseen from a flow decomposition). Throughout, we let OP T G,C refer to the maximum value of theobjective function (which is just the value of the flow f from s to t ) obtained as the maximum overall feasible solutions.We also state the dual to the LP IV.1:minimize X v ∈ V u ( v ) w v + Cϕ subject to X v ∈ p w v + ϕc ( v ) ≥ ∀ p ∈ P s,t w v ≥ ∀ v ∈ Vϕ ≥ w v , ϕ which are related to capacity and costbudget, such that the metric induced by function w ( x, y ) = w x + ϕ · c ( x ) over each edge ( x, y ) ∈ E ensures that any two vertices are at distance at least 1.In our analysis, we use weak duality to relate the two given LPs. Theorem IV.3.2 (see for example [BBV04] for a more general proof) . We have that for anyfeasible solution { f p } p to the primal LP IV.1, and any feasible solution { w e } e , ϕ to the dual LPIV.2, we have that X p ∈ P s,t f p ≤ X v ∈ V u ( v ) w v + Cϕ. IV.3.2 Algorithm and High-Level Analysis Our algorithm follows the high-level framework of Garg and Koenneman for computing a maximumflow [GK07], though with the crucial differences mention below. Although our write-up is entirelyself-contained, we recommend readers unfamiliar with the MWU framework to start with the paperof Garg and Koenneman [GK07], or with a more recent exposition in appendix C.1 of [CS20], which84 lgorithm 10: NearPseudoOptMBCF ( G = ( V, E, c, u ) , s, t, (cid:15), τ, C ) Input: A vertex-capacitated graph G = ( V, E, c, u ), two vertices s, t ∈ V , a cost function c and a capacity function u both mapping V → R > , an approximation parameter (cid:15) > 0, a positive integer τ = O (log n ) , β ≥ 1, and a cost budget C ∈ R + . ˆ f ← E ; δ = m − /(cid:15) ; ˆ ϕ ← δ/C ; Υ = n /τ ; ζ = 3860 log n · log (cid:15) ( (cid:15)δ ). foreach v ∈ V do ˆ w ( v ) ← δ/u ( v ). foreach e = ( x, y ) ∈ E do σ ( e ) ← j log Υ (cid:16) min { Cm · c ( y ) , u ( y )deg( y ) } / ( ζβ ) (cid:17)k . Start a data structure SSSP π ( G, s, t, (cid:15), β ) as described in Definition IV.1.3 on G fromsource vertex s with weight function ˜ w ( x ) = ˆ w ( x ) + d ˆ ϕ e (1+ (cid:15) ) · c ( x ), steadiness function σ and approximation parameter (cid:15) . /* Run the MWU algorithm from [CS20] on G where adding flow on edges isreplaced by randomly estimating the flow on each edge. */ while P v ∈ V u ( v ) · ˆ w ( v ) + C · ˆ ϕ < do Find a (1 + (cid:15) )-approximate shortest path π ( s, t ) with respect to ˜ w using SSSP π ( G, s, t, (cid:15), β ). Λ ← min { λ | σ λ ( π ( s, t )) = ∅} . // Λ is the minimum steadiness on π ( s, t ) . γ ← d Exp (log Υ) e . // Choose random steadiness threshold γ ./* The loop below will simulate adding Υ Λ to every edge on π ( s, t ) byupdating every edge with probability scaling in its steadiness andupdate flow according to the inverse of the probability. We estimatethe path cost similarly. */ ˆ c ← // Estimator only for c ( π ( s, t )) ./* We iterate over all low steadiness edges (we consider them asdirected on the flow path). */ foreach e = ( x, y ) ∈ σ ≤ Λ+ γ ( π ( s, t )) do ˆ f ( e ) ← ˆ f ( e ) + Υ σ ( e ) . // Add flow to flow estimator. ˆ w ( y ) ← ˆ w ( y ) · Exp (cid:16) (cid:15) Υ σ ( e ) u ( y ) (cid:17) . // Update weight. 
ˆ c ← ˆ c + c ( y ) · Υ σ ( e ) . // Update cost estimate. ˆ ϕ ← ˆ ϕ · Exp (cid:16) (cid:15) · ˆ cC (cid:17) . // Update the cost function. return ˆ f w over the verticeswhich is set to have very small values in the beginning and similarly assigns the ˆ ϕ variable as smallvalue.It then maintains an Path-reporting SSSP data structure on the graph G with weight function˜ w ( x ) ≈ ˆ w ( x ) + ˆ ϕ · c ( x ). Subsequently, the algorithm computes shortest-paths in metric inducedby ˜ w and increases flow along some edges on the shortest path. The combined flows will laterform the flow variables for the primal solution given in LP IV.1. Based on the flow updates, thealgorithm then increases ˜ w ( x ) for every vertex whose in-flow was increased. We point out thatin our algorithm, in stark contrast to previous algorithms, the flow is not directly added to theidentified shortest path but instead we only maintain a random estimator ˆ f ( e ) at each edge, thatestimates how much flow should have been added throughout the algorithm. Based on the valueof ˆ f ( e ) where e = ( x, y ), we increase ˆ w ( y ) which in turn increases ˜ w ( y ). This in turn impliesthat we do not route a lot of flow through y before ˜ w ( y ) becomes too large for y to appear on an(approximate) shortest path.Analyzing Algorithm 10 is rather involved since we have to combine the classic analysis of themultiplicative weight update framework for max flow and maximum bounded cost flow as given in[GK07, Fle00, Mad10, CK19, CS20] with some strong concentration bounds for the flow and thecost of the flow to get control over the heavy randomization we introduced. Notation for each Iteration. We use the notation that a variable in the algorithm used withsubscript i denotes the variable after the i th while-iteration in our algorithm. For example, ˆ f i denotes the (pseudo-)flow ˆ f after the i th iteration. An overview of variables with definitions isgiven in Table IV.1. We let k be the number of iterations of the while-loop (hence k is itself arandom integer). The Pseudo-Flow and the Real Flow. Recall that our final goal will be to show that a near-optimal pseudo-flow ˆ f , which is close to some near-optimal flow f (see Definition IV.2.1). The flow f that we will compare to is defined as follows. Let f be the flow that is obtained by routing duringeach iteration i th exactly Υ Λ i units of flow along the approximate shortest path π ( s, t ) i (i.e. everyedge receives exactly this amount of flow). Let again flow f i be the flow incurred by the pathschosen in the first i iterations. Note that although f obeys conservation constraints, still dependson ˆ f , because the path π ( s, t ) is defined using weights ˆ w , which is updated according to ˆ f . Comparison to the Previous Approach. In the framework of Garg and Koenneman [GK07],there is no pseudo-flow ˆ f . There is only the flow f , and the weight function w and cost function c are updates using f instead of ˆ f .Then, the key ideas are as follows. We first note that if we followed the pseudocode of Algorithm10 (but with f instead of ˆ f ), then the final flow f returned is not capacity feasible. 
However, itturns out that scaling the flow to obtain f scaled = f (1+10 (cid:15) ) log (cid:15) ( (cid:15)δ ) is sufficient to make it feasible.Intuitively, a vertex v starts with very small weight in the algorithm but every time that u ( v ) flowis added to the in-flow of v , the weight w ( v ) of the vertex is increased by a e (cid:15) ≈ (1 + (cid:15) ) factor (seeLine 13) and thus after ∼ log (cid:15) ( δ ) times that in-flow of roughly u ( v ) is added to v , the vertex v becomes too heavy in weight to appear on any shortest path and therefore no additional flow isadded to v and we only need to scale as pointed out above. A similar argument ensures that theflow f scaled is cost-feasible. 86o ensure that the flow f scaled is a flow of almost optimal flow value, Garg and Koennemannalways augment the flow along the currently shortest path π ( s, t ) with regard to the weight function˜ w (defined in Line 5 where c is the original cost of the edge). They then use that since the weight( w + ϕ ◦ c )( π ( s, t )) of the shortest path represents the left-hand side value of the most violatedconstraint in the dual LP IV.2, that scaling w and ϕ by 1 / ( w + ϕ ◦ c )( π ( s, t )) gives a feasiblesolution to the dual LP IV.2. Using weak duality as described in Theorem IV.3.2, it is thenstraight-forward to obtain that˜ w ( π ( s, t )) = ( w + ϕ ◦ c )( π ( s, t )) ≤ P v ∈ V u ( v ) w ( v ) + CϕOP T G,C Using this insight, Garg and Koenemann can upper bound the objective function ObjVal = P v ∈ V u ( v ) w ( v ) + Cϕ which serves as a potential function in the analysis and obtain a near-optimallower bound on the objective value in terms of the optimal solution. Fleischer [Fle00] later showedthat one can relax the requirement of using a shortest path to using only a (1 + (cid:15) )-approximateshortest path. Our Approach. We follow this fundamental approach of the original analysis, however, we onlyhave an estimator ˆ f of f and correspondingly only an estimator for ˜ w . Moreover, as mentionedabove, f actually depends on ˆ f because the shortest path π ( s, t ) i added to f i is defined in termsof weights ˜ w i − , which were induced by f i − . In order to analyze flow f , our goal will be to showthat before each iteration i , we have that ˜ w i − as induced by ˆ f i − is within a (1 + (cid:15) ) factor of˜ w i − as induced by f i − . Using some rather straight-forward arguments this implies that the nextapproximate shortest path π ( s, t ) i is (1 + (cid:15) ) -approximate with regard to the metric induced by˜ w i − as induced by f i − .To this end, we notice that in order to bound the difference in the resulting function ˜ w , we arerequired to show very strong concentration bounds to prove that | in f ( v ) − in ˆ f ( v ) | ≤ u ( v ) for each v ∈ V and | c ( f ) − c ( ˆ f ) | ≤ C . Using careful arguments, we can derive the required concentrationbounds. We can then finally use the concentration bounds to recover good guarantees for the flowestimator ˆ f that our algorithm returns by relating it back to f .In Section IV.3.3, we show that the returned flow estimator satisfies capacity- and cost-feasibility.We then show strong concentration bounds in Section IV.3.4. Finally, we combine these resultswhich allows us to carry out the analysis for the correctness of the algorithm following closely theapproach by Garg and Koenemann in Section IV.3.5. Finally, in Section IV.3.6, we bound the totalrunning time of Algorithm 10. 
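Before turning to the feasibility analysis, the core random-estimator step of Algorithm 10 can be illustrated by the following simplified Python sketch. It is an illustration only: the $SSSP_\pi$ data structure, the stopping condition, and the exact choices of $\delta$, $\zeta$, $\Upsilon$ and of the rounding of $\gamma$ are omitted or simplified, and in the real algorithm the sampled edges are obtained directly via the threshold-subpath query $\sigma_{\leq \Lambda+\gamma}(\pi(s,t))$ rather than by scanning the path.

```python
import math
import random

# Simplified sketch of one while-loop iteration of Algorithm 10 (illustration
# only). `path` is the current (1+eps)-approximate s-t path as a list of
# directed edges (x, y); `sigma` maps each edge to its steadiness; Upsilon is
# the parameter n^{1/tau}; f_hat and w_hat are the flow estimator and vertex
# weights; u, c are vertex capacities and costs; C is the cost budget.

def estimator_step(path, sigma, Upsilon, f_hat, w_hat, u, c, C, eps, phi_hat):
    Lam = min(sigma[e] for e in path)                          # min steadiness on the path
    gamma = math.ceil(random.expovariate(math.log(Upsilon)))   # random steadiness threshold
    c_hat = 0.0                                                # cost estimator for this path
    # Only edges of steadiness <= Lam + gamma are touched; by the analysis
    # (Claim IV.3.6), an edge e is touched with probability Upsilon^(Lam - sigma(e)),
    # so the expected flow added to e is Upsilon^Lam, matching the non-random update.
    for (x, y) in path:
        if sigma[(x, y)] <= Lam + gamma:
            add = Upsilon ** sigma[(x, y)]
            f_hat[(x, y)] = f_hat.get((x, y), 0.0) + add       # flow estimator update
            w_hat[y] *= math.exp(eps * add / u[y])             # weight update at head vertex
            c_hat += c[y] * add                                # cost estimator update
    phi_hat *= math.exp(eps * c_hat / C)                       # cost variable update
    return f_hat, w_hat, phi_hat
```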
IV.3.3 Capacity- and Cost-Feasibility of the Returned Flow after Scaling We start by proving that the flow estimator ˆ f after scaling is a capacity- and cost-feasible flow. Inorder to obtain this feasibility result, we first upper bound the maximum amount of flow addedto the in-flow of a vertex in a single while-loop iteration and analogously the maximum additionalcost we add to the flow. Claim IV.3.3. For any iteration ≤ i ≤ k , we have that | in ˆ f i ( v ) − in ˆ f i − ( v ) | ≤ u ( v ) /ζ ∀ v ∈ V (IV.3) | c ( ˆ f i ) − c ( ˆ f i − ) | = ˆ c i ≤ C/ζ. (IV.4) Proof. Equation (IV.3): In each iteration i , we add flow along a single β -edge-simple path π ( s, t ) i . Let some edge e = ( x, v ) be on π ( s, t ) i (possibly multiple times). Then, by the value of87 ( s, t ) i The (1 + (cid:15) )-approximate shortest path π ( s, t ) in the i th iteration of the while-loop in Line 6.Λ i The min-capacity on the path π ( s, t ) i found during the i th iteration. γ i The value of γ during the i th iteration.ˆ f i The total flow estimator after the i th iteration.ˆ c i The cost of the flow ˆ f i − ˆ f i − , added during iteration i .ˆ w i The function ˆ w after being updated using ˆ f i .ˆ ϕ i The function ˆ ϕ after being updated using ˆ f i .˜ w i The combined weight function obtained from ˆ w i and ˆ ϕ i . f i The flow obtained from routing Υ Λ j units of flow along each edge on π ( s, t ) j for each j ≤ i . k Number of iterations of the while-loop.Table IV.1: Variables depending on i , the iteration of the while-loop in Line 6.the steadiness of vertex v in Line 4, we have that σ ( e ) ≤ log Υ (cid:16) u ( v ) ζβ · deg( v ) (cid:17) . But this implies that foreach occurrence of e on the path π ( s, t ) i , it occurs once in the foreach-loop starting in Line 11 andthen we add flow at most Υ σ ( e ) ≤ u ( v ) / ( ζβ · deg( v )) to | in ˆ f i ( v ) − in ˆ f i − ( v ) | . But since by definition of β -edge-simple paths, the path π ( s, t ) i contains every such edge e atmost β time, and since there are at most deg( v ) such edges adding to the in-flow of v , the totalcontribution to the in-flow of v of π ( s, t ) i is at most u ( v ) / ( ζβ · deg( v )) · β · deg( v ) = u ( v ) /ζ . Equation (IV.4): First observe that at the beginning of every iteration of the while-loop ˆ c is initialized to 0, and whenever flow is added to ˆ f during iteration i in Line 12, we immediatelyadd the cost of the added flow in Line 14 to ˆ c . When the iteration terminates, we have that ˆ c i is equal to the cost of the flow added during the iteration i of the while-loop. Thus, the equality | c ( ˆ f i ) − c ( ˆ f i − ) | = ˆ c i holds.For the inequality ˆ c i ≤ C/ζ , observe that by definition of σ ( e ), for any edge e = ( x, y ) ∈ E ,we have that σ ( e ) ≤ log Υ (cid:16) Cmc ( y ) ζβ (cid:17) . Thus, every time the foreach-loop starting in Line 11 featuresthe edge e , it adds cost c ( y ) · Υ σ ( e ) ≤ c ( y ) · Cmc ( y ) ζβ = Cmβζ to ˆ c i . Since each edge ( x, y ) can occurat most β times on the path π ( s, t ) i (by definition of β -edge-simple paths), and since there can beat most m edges on the path, we can bound the total cost added by Cmβζ · βm = Cζ , as desired. Claim IV.3.4. The flow ˆ f scaled = ˆ f (1+10 (cid:15) ) · log (cid:15) ( (cid:15)δ ) returned by Algorithm 10 in Line 16 iscapacity- and cost-feasible.Proof. Capacity-feasible: Fixing a vertex v ∈ V \ { s, t } and a while-loop iteration i . Then, it isnot hard to see that we have ˆ w i ( v ) = δu ( v ) · Exp (cid:15) · in ˆ f i ( v ) u ( v ) ! . 
This follows since every time flow is added with v on the flow path, the function ˆ w ( v ) is multipliedby Exp (cid:16) (cid:15)Fu ( v ) (cid:17) where F is the amount of in-flow that is added to v due to the new flow path (byLine 2 and Line 13). 88urther, we claim that ˆ w ( v ) ≤ (1 + (cid:15) ) at the end of the algorithm. To see this observe first thatonce v has ˆ w ( v ) ≥ 1, the while-loop starting in Line 6, has its condition violated and therefore ends(here we use that u ( v ) ≥ w ( v ) < 1. Thus, only a single last path might further increase ˆ w ( v ). Letus assume that v is on the last path selected since otherwise we are done. But by Equation (IV.3),a single iteration can add at most u ( v ) /ζ to the in-flow of v .We therefore have that the final weight ˆ w k ( v ) is at mostˆ w k ( v ) ≤ ˆ w k − ( v ) · Exp (cid:18) (cid:15)u ( v ) ζ · u ( v ) (cid:19) < · e (cid:15) . Taking the logarithm on both sides, we obtainlog (cid:18) δu ( v ) (cid:19) + (cid:15) · in ˆ f ( v ) u ( v ) ≤ (cid:15) ⇐⇒ in ˆ f ( v ) ≤ u ( v ) · (cid:18) − log (cid:18) δu ( v ) (cid:19) /(cid:15) (cid:19) . It remains to observe that by assumption u ( v ) ≤ m (see proposition IV.1.2) and the definition of δ = m − /(cid:15) , 1 − log (cid:18) δu ( v ) (cid:19) /(cid:15) ≤ − log (cid:18) δm (cid:19) /(cid:15) = 1 + (1 + 5 (cid:15) ) log(1 /δ ) /(cid:15) ≤ (1 + 7 (cid:15) ) log (cid:18) δ (cid:19) /(cid:15) ≤ (1 + 10 (cid:15) ) · log (cid:15) (cid:18) (cid:15)δ (cid:19) where we use that log (cid:16) δm (cid:17) = log(1 /m ) + log( δ ) = (5 (cid:15) + 1) log( δ ), that log( x ) = − log(1 /x ), andthat 1 ≤ log( m ) = log(1 /δ ) /(cid:15) . The third inequality follows by a change of basis of the logarithmand the inequalities log( (cid:15)δ ) ≤ (1 + (cid:15) ) log( δ ), log(1 + (cid:15) ) ≥ (cid:15) and 1 + x ≤ e x ≤ x + x for x ≤ 1. This proves capacity-feasiblity for all v but for s and t , for which the capacities are ∞ byassumption, which implies that the claim for them is vacuously true. Cost-feasible: We observe that after iteration i , we haveˆ ϕ i = δC · Exp (cid:15) · P j ≤ i c ( ˆ f j − ˆ f j − ) C ! = δC · Exp (cid:15)c ( ˆ f i ) C ! . where we use that we do not cancel any flow between iterations to obtain the equality. UsingEquation (IV.4) in place of Equation (IV.3), we can follow the same proof template as for capacity-feasiblity to conclude the claim. IV.3.4 Strong Concentration Bounds Next, we would like to obtain strong concentration bounds for the difference between f and ˆ f . Tothis end, we use a result that is akin to Chernoff Bounds while allowing for some limited dependencebetween the random variables (in a Martingale fashion). Theorem IV.3.5 (see [KY14, CQ18]) . Let X , X , . . . , X k , Z , Z , . . . Z k ∈ [0 , W ] be random vari-ables and let (cid:15) ∈ [0 , / . Then, if for every i ∈ { , , . . . , k } , E [ X i | X , . . . , X i − , Z , . . . , Z i ] = Z i (IV.5) then for any δ > , we have that P "(cid:12)(cid:12)(cid:12)(cid:12)(cid:12) k X i =1 X i − Z i (cid:12)(cid:12)(cid:12)(cid:12)(cid:12) ≥ (cid:15) k X i =1 Z i + δ ≥ · (1 + (cid:15) ) − δ/W . k of iterations in Algorithm 10. Here we will usea crude upper bound of k = O ( n log ( m ) /(cid:15) ). This is straight-forward since in each iterationstarting in Line 6, we add a 1 / ( n /τ ζβ )-fraction of the capacity of the min-capacity vertex on theflow path to the in-flow of the vertex.We are now ready to prove the main result of this section. Claim IV.3.6. 
For any v ∈ V , we have P h ∃ ≤ i ≤ k, | in f i ( v ) − in ˆ f i ( v ) | ≥ u ( v ) i ≤ n − . Proof. We prove by induction on i . The base case i = 0, is true since f and ˆ f are initialized to 0.For the inductive step i − i for i ≥ 1, we start by defining the two random processes { X j = in ˆ f j ( v ) − in ˆ f j − ( v ) } j and { Z j = in f j ( v ) − in f j − ( v ) } j . Here, Z j is the amount of flow addedto the in-flow of v in f during the j th iteration of Line 6, while X j is the amount of flow added to thein-flow of v in ˆ f . Thus, since we never cancel flow over iterations, we have that P ij =1 Z i = in f i ( v )and P ij =1 X i = in ˆ f i ( v ).Now, the key statement that we need in order to invoke Theorem IV.3.5, is to prove ConditionIV.5. Therefore, observe that given Z j , the only randomness in determining X j stems from picking γ j in Line 9. Using the definition of expectation, we obtain that E [ X j | X , . . . , X j − , Z , . . . , Z j ] = X e =( x,v ) ∈ π ( s,t ) j P [ σ ( e ) ≤ Λ j + γ j ] · Υ σ ( e ) . This follows since the algorithm adds Υ σ ( e ) units to X j if the random threshold makes the sumΛ j + γ j larger than the steadiness threshold σ ( e ) for every edge e on π ( s, t ) j that enters v . We canthen use the definition of the exponential distribution coordinate-wise which gives that P [ σ ( e ) ≤ Λ j + γ j ] = P [ σ ( e ) − Λ j ≤ γ j ] = 1 − (1 − e − log Υ · ( σ ( e ) − Λ j ) ) = Υ Λ j − σ ( e ) .But this implies that E [ X j | X , . . . , X i − , Z , . . . , Z j ] = X e =( x,v ) ∈ π ( s,t ) j Υ Λ j − σ ( e ) · Υ σ ( e ) = X e =( x,v ) ∈ π ( s,t ) j Υ Λ i = Z i . Finally, we can invoke Theorem IV.3.5 where we plug in η = · (1+10 (cid:15) ) · log (cid:15) ( (cid:15)δ ) and δ = u ( v ) / (cid:15) and η P (cid:12)(cid:12)(cid:12)(cid:12)(cid:12)(cid:12) i X j =1 X j − Z j (cid:12)(cid:12)(cid:12)(cid:12)(cid:12)(cid:12) ≥ η i X j =1 Z j + u ( v ) / ≥ · (1 + η ) − δ/W . We then observe that the random variables are bounded by W ≤ u ( v ) /ζ by Equation (IV.3), so2 · (1 + η ) − ( u ( v ) / /W ≥ · (1 + η ) − ζ/ ≥ n − . At the same time, we have that 4 η P ij =1 Z j ≤ · u ( v )since ˆ f i is capacity-feasible after scaling by 4 η as shown in Claim IV.3.4 and | in ˆ f i − ( v ) − in f i − ( v ) |
We have P h ∃ v ∈ V, ∃ ≤ i ≤ k, | in f i ( v ) − in ˆ f i ( v ) | ≥ u ( v ) i ≤ n − . Following the proof template for the concentration bounds on the flow on each edge, we can getsimilar concentration bounds on the cost of the flow.90 laim IV.3.8. For any ≤ i ≤ k , we have P h | c ( f i ) − c ( ˆ f i ) | ≥ C i ≤ n − . Proof. Consider the random processes { X j = c ( ˆ f j ) − c ( ˆ f j − ) } j and { Z j = c ( f j ) − c ( f j − ) } j . Next,observe that for j ≥ 1, we have by definition that Z j = c ( f j ) − c ( f j − ) = P v ∈ π ( s,t ) j c ( v )Υ Λ j (recallthat we assume c ( s ) = 0, and that π ( s, t ) j is a multi-set). Further, we have that E [ X j | X , . . . , X i − , Z , . . . , Z j ] = X e =( x,v ) ∈ π ( s,t ) j c ( v ) · P [ σ ( e ) ≤ Λ j + γ j ] · Υ σ ( e ) again since γ j is the only random variable not conditioned upon that determines X j . But it isstraight-forward to see that the right-hand side is exactly Z j , using again the definition of theexponential distribution. Finally, we use this claim in an induction on i , to invoke at each iterationstep Theorem IV.3.5 with the same parameters as chosen above and carefully take a union bound.This concludes the proof.We henceforth condition on Claims IV.3.7 and IV.3.8 holding true and treat them like deter-ministic results. IV.3.5 Correctness of the Algorithm We can now use the results from the previous sections to conclude that our algorithm returns thecorrect solution with high probability. We start by showing that the flow f is a near-optimal flowand then proceed by coupling f and ˆ f . Claim IV.3.9. The flow f scaled = f (1+24 (cid:15) ) log (cid:15) ( (cid:15)δ ) is a capacity-feasible and satisfies flow conser-vation constraints.Proof. We have that since f scaled is the weighted sum of s -to- t paths that the flow conservationconstraints are satisfied. To see that f scaled is capacity feasible, observe that for each v ∈ V ,in f ( v )(1 + 24 (cid:15) ) log (cid:15) (cid:16) (cid:15)δ (cid:17) ≤ in ˆ f ( v ) + u ( v )(1 + 24 (cid:15) ) log (cid:15) (cid:16) (cid:15)δ (cid:17) ≤ in ˆ f ( v )(1 + 24 (cid:15) ) log (cid:15) (cid:16) (cid:15)δ (cid:17) + (cid:15)u ( v )(1 + (cid:15) ) ≤ in ˆ f ( v )(1 + 2 (cid:15) )(1 + 10 (cid:15) ) log (cid:15) (cid:16) (cid:15)δ (cid:17) + (cid:15)u ( v ) ≤ (1 − (cid:15) ) · in ˆ f ( v )(1 + 10 (cid:15) ) log (cid:15) (cid:16) (cid:15)δ (cid:17) + (cid:15)u ( v ) ≤ u ( v )where we use Corollary IV.3.7 in the first inequality, and in the second inequality that log (cid:15) (cid:16) (cid:15)δ (cid:17) =log (cid:15) (cid:16) (cid:15)m − /(cid:15) (cid:17) ≥ /(cid:15) , in the third and forth inequality, we used 1 + x ≤ e x ≤ x + x (for x ≤ f (after scaling) as established inClaim IV.3.4, we have in ˆ f ( v )(1 + 10 (cid:15) ) log (cid:15) (cid:16) (cid:15)δ (cid:17) ≤ u ( v ) . emma IV.3.10. The flow f scaled = f (1+24 (cid:15) ) log (cid:15) ( (cid:15)δ ) is a (1 − Θ( (cid:15) )) -optimal flow.Proof. We have feasibility of f scaled by Claim IV.3.9.It remains to prove that the flow value F of f scaled is at least (1 − (cid:15) ) OP T G,C . To this end, letus define the functions w i ( v ) = δu ( v ) · Exp (cid:18) (cid:15) · in f i ( v ) u ( v ) (cid:19) ϕ i = δC · Exp (cid:18) (cid:15) · c ( f i ) C (cid:19) ObjVal i = X v ∈ V u ( v ) · w i ( e ) + C · ϕ i for all 0 ≤ i ≤ k . 
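To make this bookkeeping concrete, the following small sketch (Python; the function and variable names are ours, and the per-vertex in-flows of $f_i$ and the cost $c(f_i)$ are assumed to be available) evaluates the quantities $w_i$, $\varphi_i$ and $\mathrm{ObjVal}_i$ defined above:

```python
import math

def objective_value(in_flow, cost_of_flow, u, C, delta, eps):
    """Evaluate w_i, phi_i and ObjVal_i for one iteration i.

    in_flow[v]   : in-flow of the (idealized) flow f_i at vertex v
    cost_of_flow : c(f_i), the total cost of f_i
    u[v]         : vertex capacity; C : cost budget
    delta, eps   : the parameters delta and epsilon of Algorithm 10
    """
    # w_i(v) = (delta / u(v)) * exp(eps * in_flow(v) / u(v))
    w = {v: (delta / u[v]) * math.exp(eps * in_flow[v] / u[v]) for v in u}
    # phi_i = (delta / C) * exp(eps * c(f_i) / C)
    phi = (delta / C) * math.exp(eps * cost_of_flow / C)
    # ObjVal_i = sum_v u(v) * w_i(v) + C * phi_i; the analysis tracks when this
    # sum reaches 1 (cf. the while-loop condition in Line 6 of Algorithm 10).
    obj_val = sum(u[v] * w[v] for v in u) + C * phi
    return w, phi, obj_val
```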
Here, we define w i to be the weight function that would result if we would alwaysuse the flow f up to update vertex weights instead of the flow estimator ˆ f as is the case for ˆ w i .Analogously, ϕ is the version of ˆ ϕ that is based on f instead of ˆ f and ObjVal is the resultingobjective value corresponding to the sum we use in the while-loop condition in Line 6.We start by establishing a useful claim that relates these versions based on the flow f insteadof ˆ f tightly together. Claim IV.3.11. We have for any ≤ i ≤ k , we have that ∀ v ∈ V, (cid:15) ) ˆ w i ( v ) ≤ w i ( v ) ≤ (1 + 2 (cid:15) ) ˆ w i ( v ) , and (cid:15) ) ˆ ϕ i ≤ ϕ i ≤ (1 + 2 (cid:15) ) ˆ ϕ i . Proof. We observe that by Corollary IV.3.7, we have | in f i ( v ) − in ˆ f i ( v ) | < u ( v ) for every v ∈ V , andtherefore we have ˆ w i ( v ) ≤ δu ( v ) Exp (cid:15) · in ˆ f i ( v ) u ( v ) ! ≤ δu ( v ) Exp (cid:18) (cid:15) (in f i ( v ) + u ( v )) u ( v ) (cid:19) ≤ δu ( v ) (1 + 2 (cid:15) ) Exp (cid:18) (cid:15) in f i ( v ) u ( v ) (cid:19) = (1 + 2 (cid:15) ) w i ( v ) . (IV.6)where we use for the inequality that e x ≤ x + x for x ≤ 1, and x ≤ x for x ≤ 1. Theremaining inequality statements can be proven by following this template and using the additionalClaim IV.3.8.Observe that for any i ≥ 1, for every v ∈ V that occurs β v times on the path (minus one if v = s ) that w i ( v ) = δu ( v ) · Exp (cid:15) in f i − ( v ) u ( v ) ! · Exp (cid:15) (cid:0) in f i ( v ) − in f i − ( v ) (cid:1) u ( v ) ! ≤ δu ( v ) · Exp (cid:15) in f i − ( v ) u ( v ) ! · (cid:16) (cid:15) + (cid:15) (cid:17) · β v · Υ Λ i u ( v ) ! = w i − ( v ) + δ · Exp (cid:15) in f i − ( v ) u ( v ) ! · (cid:16) (cid:15) + (cid:15) (cid:17) · β v · Υ Λ i = w i − ( v ) + (cid:0) (cid:15) + (cid:15) (cid:1) · β v · Υ Λ i u ( v ) · w i − ( v ) (IV.7)92here we use for the first inequality that e x ≤ x + x for x ≤ (cid:15)/ζ by Equation (IV.3)) and that β v · Υ Λ i = in f i ( v ) − in f i − ( v ) by definition of f . We then rearrange terms to obtain the equalities.For ϕ i , we can argue similarly that ϕ i ≤ Cδ · Exp (cid:18) (cid:15) · c ( f i − ) C (cid:19) · (cid:18) (cid:16) (cid:15) + (cid:15) (cid:17) · c ( f i ) − c ( f i − ) C (cid:19) ≤ ϕ i − + (cid:0) (cid:15) + (cid:15) (cid:1) · Υ Λ i · c ( π ( s, t ) i ) C · ϕ i − (IV.8)where we use that the difference in the cost between flows f i and f i − is the cost of the path π ( s, t ) i times the value of the flow we send in iteration i which is Υ Λ i . We further use EquationIV.4 to ensure that we use inequality e x ≤ x + x with x ≤ 1. The last inequality again usesClaim IV.3.11.Combined, we obtain that ObjVal i = X v ∈ V u ( v ) · w i ( v ) + C · ϕ i ≤ ObjVal i − + X v ∈ V (cid:16) (cid:15) + (cid:15) (cid:17) · β v · Υ Λ i · w i − ( v ) + (cid:16) (cid:15) + (cid:15) (cid:17) · Υ Λ i · c ( π ( s, t ) i ) · ϕ i − = ObjVal i − + (cid:16) (cid:15) + (cid:15) (cid:17) · Υ Λ i · ( w i − + ϕ i − ◦ c ) ( π ( s, t ) i ) . (IV.9)Let w = w i − + ϕ i − ◦ c , we observe that the distance dist w ( s, t ) from s to t in G , weighted byfunction w , satisfies dist w ( s, t ) ≤ ObjVal i − OP T G,C . This follows since scaling w i − and ϕ i − by 1 / dist w ( s, t ) makes them a feasible solution to the dualLP given in Equation (IV.2). Since it is a feasible solution to a minimization problem, we havethat ObjVal i − dist w ( s, t ) ≥ OP T G,C where we used weak duality as stated in Theorem IV.3.2 for the inequality to further lower boundthe optimal value to the dual LP by the optimal value of the primal LP. 
Multiplying both sides bydist w ( s, t ) and dividing by OP T G,C proves the statement.Finally, we observe that the selected path π ( s, t ) i , is a (1 + 2 (cid:15) ) -approximate shortest s to t path with respect to w . This follows since π ( s, t ) i is selected to be a (1 + (cid:15) )-approximate shortestpath in the metric determined by weight function ˜ w i − by the definition of the SSSP data structure SSSP π ( G, s, t, (cid:15), β ). Further, ˜ w i − is a (1 + (cid:15) ) approximation of the metric induced by the weightfunction ( ˆ w i − + ˆ ϕ i − ◦ c ) (as can be seen from the rounding of ˆ ϕ described in Line 5). Finally,( ˆ w i − + ˆ ϕ i − ◦ c ) is a (1 + 2 (cid:15) ) -approximation of w by Claim IV.3.11.It remains to put everything together: from the combination of Equation (IV.9) and the pathapproximation, we obtain that ObjVal i ≤ ObjVal i − + (cid:16) (cid:15) + (cid:15) (cid:17) · Υ Λ i (1 + 2 (cid:15) ) · ObjVal i − OP T G,C (IV.10) ≤ ObjVal i − · Exp (cid:0) (cid:15) + (cid:15) (cid:1) · (1 + 2 (cid:15) ) OP T G,C · Υ Λ i ! . (IV.11)93e finally observe that by the while-loop condition, we have that after the last iteration, wehave that ObjVal k ≥ 1. Since ObjVal ≥ δm , we therefore have that1 ≤ ObjVal k ≤ δm · Exp (cid:0) (cid:15) + (cid:15) (cid:1) · (1 + 2 (cid:15) ) OP T G,C · k X i =1 Υ Λ i ! ⇐⇒ ≤ log( δm ) + (cid:0) (cid:15) + (cid:15) (cid:1) · (1 + 2 (cid:15) ) OP T G,C · k X i =1 Υ Λ i ! ⇐⇒ k X i =1 Υ Λ i ≥ log( mδ ) · OP T G,C ( (cid:15) + (cid:15) ) · (1 + 2 (cid:15) ) Noticing that the value of the flow f is exactly F = P ki =1 Υ Λ i and therefore the flow value of f scaled is f scaled = F (1+24 (cid:15) ) log (cid:15) ( (cid:15)δ ) , we have f scaled ≥ log( mδ ) · OP T G,C ( (cid:15) + (cid:15) ) · (1 + 2 (cid:15) ) · (cid:15) ) log (cid:15) (cid:16) (cid:15)δ (cid:17) = (1 − (cid:15) ) log(1 /δ ) · OP T G,C ( (cid:15) + (cid:15) ) · (1 + 2 (cid:15) ) · log(1 + (cid:15) )(1 + 24 (cid:15) ) log (cid:16) (cid:15)δ (cid:17) ≥ (1 − (cid:15) ) · OP T G,C · log(1 + (cid:15) )( (cid:15) + (cid:15) ) (1 + 24 (cid:15) ) ≥ (1 − (cid:15) ) · (cid:15) · OP T G,C ( (cid:15) + (cid:15) ) (1 + 24 (cid:15) ) ≥ (1 − (cid:15) ) OP T G,C (1 + 24 (cid:15) ) ≥ (1 − (cid:15) )(1 − (cid:15) ) OP T G,C ≥ (1 − (cid:15) ) OP T G,C where we use that δ = m − /(cid:15) such that in the first equality we can use log( mδ ) = log(1 /δ ) − log( m ) = (1 − (cid:15) ) log(1 /δ ), and for the second term that we can change basis of the logarithm usinglog (cid:15) ( x ) = log( x )log(1+ (cid:15) ) . We then obtain the second inequality using log( (cid:15)δ ) ≤ (1 + (cid:15) ) log( δ ), thethird inequality using log(1 + (cid:15) ) ≥ (cid:15) . In the final two inequalities, we use that for | x | < 1, we have(1 + x ) n ≤ (1 + nx ), from the Taylor series of 1 / (1 + x ), we obtain 1 / (1 + x ) ≥ (1 − x ), and finallywe use that 1 + x + x ≥ e x ≥ x combined with the fact that (1 − (cid:15) + 384 (cid:15) ) ≥ (1 − (cid:15) )using our assumption that (cid:15) ≤ / f after scaling is a (1 − (cid:15) )-pseudo-optimal flow. This proves thecorrectness of Theorem IV.3.1. Corollary IV.3.12. The flow ˆ f scaled = ˆ f (1+10 (cid:15) ) log (cid:15) ( (cid:15)δ ) is a (1 − Θ( (cid:15) )) -pseudo-optimal flow.Proof. Combining Lemma IV.3.10 with Claim IV.3.4 immediately gives the Corollary. IV.3.6 Runtime Complexity of the Algorithm Next, we bound the runtime of the algorithm. In this section, we use the fact that U ≤ m ≤ n by Proposition IV.1.2. 
This ensures that every edge e has steadiness σ ( e ) ∈ [1 , τ ] because σ ( e ) ≤ log n /τ ( u ( v )) = log u ( v )log n /τ ≤ τ · log n log n = τ .We start by giving an upper bound on the number of times that we enter the foreach-loop inLine 11. Claim IV.3.13. The total number of edges e = ( x, y ) that are looked at in the foreach-loop inLine 11, over the entire course of Algorithm 10, is at most O ( m log( m ) ζβ Υ /(cid:15) ) = O ( m log ( m ) · β · n /τ /(cid:15) ) . roof. We observe that for any edge e = ( x, v ) ∈ E , upon entering the foreach-loop, we add Υ σ ( e ) units of flow to ˆ f ( e ). Recall that σ ( e ) = j log Υ (cid:16) min { Cmc ( v ) , u ( v )deg( v ) } (cid:17) / ( ζβ ) k . We distinguish twocases:• if σ ( e ) = j log Υ (cid:16) Cmc ( v ) (cid:17) / ( ζβ ) k : then upon adding Υ σ ( e ) units of flow to ˆ f ( e ), we increase thecost of c ( ˆ f ) by at least c ( v ) · Υ j log Υ (cid:16) Cmc ( v ) (cid:17) / ( ζβ ) k ≥ c ( v ) · Υ log Υ (cid:16) Cmc ( v ) (cid:17) / ( ζβ ) − = c ( v ) · Cmc ( v ) ! / ( ζβ Υ) = Cm · ζβ Υ . But since we have by Lemma IV.3.10 in combination with Claim IV.3.8 that c ( ˆ f ) = O ( C · log m/(cid:15) ) and since the cost is monotonically increasing over time (because the algorithm nevercancels flow), there are at most O ( m log( m ) ζβ Υ /(cid:15) ) such iterations.• otherwise, we have σ ( e ) = j log Υ ( u ( v )deg( v ) / ( ζβ )) k : but this implies that we increase the in-flowto v by at least u ( v )deg( v ) / ( ζβ Υ) (by the same argument as above). On the other hand, byClaim IV.3.4, we have that ˆ f scaled is a capacity-feasible flow. Thus, we have for every vertex v ∈ V , in ˆ f ( v ) ≤ u ( v ) · (1 + 10 (cid:15) ) log (cid:15) (cid:16) (cid:15)δ (cid:17) = O ( u ( v ) · log m/(cid:15) ). Therefore, for any vertex v ,there are at most O (deg( v ) · log( m ) ζβ Υ /(cid:15) ) such iterations.Thus, combining the two cases, we can bound the number of iterations by O ( m log( m ) ζβ Υ /(cid:15) ) andplugging in the values for ζ and Υ gives the result.We can now establish the running time stated in Theorem IV.3.1. Claim IV.3.14. The total running time of Algorithm 10 can be bound by ˜ O ( mβ · n /τ /(cid:15) ) + T SSSP π ( m, n, m /(cid:15) , τ, (cid:15), β, ∆ , ∆ ) for ∆ , ∆ = ˜ O ( mβ · n /τ /(cid:15) ) .Proof. We start by observing that up to Line 5, the algorithm uses time O ( m ). Henceforth, wedo not account for the running time used by the data structure but rather only keep track of thenumber of updates ∆ and the number of queries plus the size of the output of the query ∆ .When we enter the while-loop, we find the current approximate shortest path from s to t usingthe data structure and find the smallest steadiness class σ λ ( π ( s, t )) that is non-empty. We notethat we do not compute the path π ( s, t ) explicitly but rather query σ ≤ ( π ( s, t )) , σ ≤ ( π ( s, t )) , . . . , σ ≤ Λ ( π ( s, t ))until we find the first class that is non-empty (and there always exists such a class). We then selecta random threshold γ ≥ | σ ≤ Λ+ γ ( π ( s, t )) | .Since steadiness classes are nesting, we have that σ ≤ Λ ( π ( s, t )) ⊆ σ ≤ Λ+ γ ( π ( s, t )). Since every otheroperation in the while-loop iteration is a constant time operation, the overall running time for asingle iteration of the while loop is at most O ( | σ ≤ Λ+ γ ( π ( s, t )) | + τ ). 
(The additive + τ comes fromthe fact that in Line 8 of 10, the algorithm might go through at most τ steadiness values λ beforeit find one with σ λ ( π ( s, t )) = ∅ .)Using Claim IV.3.13, we thus obtain that the total running time of the algorithm excluding thetime spent by the data structure can be bound by˜ O ( mτ β · n /τ /(cid:15) )95e further observe that such a while-loop iteration adds at most O ( | σ ≤ Λ+ γ ( π ( s, t )) | + τ ) to thequery parameter ∆ . Note that Claim IV.3.13 upper bounds the sum of O ( | σ ≤ Λ+ γ ( π ( s, t )) | ) overall foreach-loop iterations and thereby over all path-queries that return a non-empty set of edges.At the same time, Claim IV.3.13 is also a trivial bound on the number of while-loop iterations(since we always visit the foreach-loop in the while-loop at least once). Since each such while-loopiteration contributes at most O ( τ ) queries which return an empty set of edges, we can finally bound∆ by ˜ O ( mτ β · n /τ /(cid:15) )).We can now also bound ∆, the number of updates to the weight function ˜ w ( x ) = ˆ w ( x )+ d ˆ ϕ e (1+ (cid:15) ) · c ( x ) (as defined in Line 5). To this end, we observe that ˜ w ( x ) is updated either if ˆ w ( x ) or if d ˆ ϕ e (1+ (cid:15) ) is increased. But the former updates can be upper bounded by O (∆ ) since each such update resultsfrom a single edge in the query. For the number of updates caused by d ˆ ϕ e (1+ (cid:15) ) , we observe thateach increase of d ˆ ϕ e (1+ (cid:15) ) results in m updates to ˜ w . However, since we round ˆ ϕ to powers of (1 + (cid:15) ),we can bound the total number of increases of d ˆ ϕ e (1+ (cid:15) ) by O (log (cid:15) ( δ )) = O (log m/(cid:15) ). Combined,we obtain ∆ = ˜ O ( mτ β · n /τ /(cid:15) ).For the claim, it remains to use the assumption that τ = O (log n ). IV.4 Near-capacity-fitted instance via Near-pseudo-optimal MBCF We now build upon Theorem IV.3.1 to obtain Near-capacity-fitted instances. We start by makingthe definition of such an instance formal. Definition IV.4.1 (Edge-Split Transformation) . Given a flow instance G = ( V, E, c, u ) , we let G = Edge-Split ( G ) denote the instance derived from G by splitting every edge e = ( x, y ) in G into two edges ( x, v e ) and ( v e , y ) where v e is a new vertex added to G . The capacity u ( v e ) ofeach such v e ∈ V ( G ) \ V is set to U (the max capacity of G ), and its cost c ( v e ) to . For all v ∈ V ( G ) ∩ V , we set u ( v ) = u ( v ) and c ( v ) = c ( v ) . Here, we note that if G is derived from G as proposed above, and G was derived using Propo-sition IV.1.2, then also G satisfies the properties in Proposition IV.1.2 as can be verified straight-forwardly (except that the number of edges and vertices increases by m ). Definition IV.4.2 (Near-capacity-fitted instance) . For any < (cid:15) < , given graph G = ( V, E, u, c ) and a cost budget C . Let G = ( V , E , u , c ) be the graph defined by G = Edge-Split ( G ) . Then,we say that a graph G = ( V , E , u , c ) is a (1 − (cid:15) ) - capacity-fitted instance derived from G if:1. for every v ∈ V , u ( v ) ≤ u ( v ) , and2. we have for each v ∈ V , that P x ∈N G ( v ) u ( x ) ≤ · u ( v ) , where N G ( v ) = { x ∈ V | ( x, y ) ∈ E } and3. we have P x ∈ V u ( x ) · c ( x ) ≤ · C , and4. OP T G ,C ≥ (1 − (cid:15) ) · OP T G,C . Intuitively, the first Property ensures that every flow in G is capacity-feasible in G . At thesame time Property 2 ensures that for every original vertex in V , the vertices in its neighborhoodhave capacity ∼ u ( v ). 
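For reference, here is a minimal sketch of the Edge-Split transformation of Definition IV.4.1 (Python; the graph representation and helper names are our own, and we assume the split vertices receive cost 0, which is the natural reading of the definition):

```python
def edge_split(V, E, u, c, U):
    """Sketch of Edge-Split(G) from Definition IV.4.1.

    V, E : vertices and edges of G (E given as a list of pairs (x, y))
    u, c : vertex capacities and vertex costs of G (dictionaries)
    U    : the maximum capacity of G
    Returns the split graph G' = (V', E', u', c').
    """
    V_new, E_new = list(V), []
    u_new, c_new = dict(u), dict(c)       # original vertices keep their u and c
    for idx, (x, y) in enumerate(E):
        v_e = ("split", idx)              # new vertex v_e subdividing edge (x, y)
        V_new.append(v_e)
        E_new.extend([(x, v_e), (v_e, y)])
        u_new[v_e] = U                    # capacity of v_e set to the max capacity U
        c_new[v_e] = 0                    # cost of v_e (assumed 0 in this sketch)
    return V_new, E_new, u_new, c_new
```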
Recall that these neighbors in G are the vertices resulting from edge-splitsof edges incident to v in G . This property will later be helpful to argue not only about in f ( v ) ofsome flow f in G but also about out f ( v ) by using the guarantees of a (1 − (cid:15) )-pseudo-optimal flowon the neighborhood of v . Property 3 ensures that any capacity-feasible flow f in G will not havelarge cost (w.r.t C ). Thus, scaling such f by (cid:15) will imply that it is cost-feasible even in G . Finally,we ensure in Property 4 that G still contains a large valued feasible flow.We can now formally state the main result of this section.96 emma IV.4.3. [Near-capacity-fitted instance via Near-pseudo-optimal MBCF] Given any <(cid:15) < , given a graph G = ( V, E, c, u ) , a dedicated source s and sink t , a cost bound C . Additionally,let there be an algorithm A that computes a (1 − (cid:15) ) -pseudo-optimal flow ˆ g in total update time T P seudoMBCF ( m, n, (cid:15), C ) .Then, there exists an algorithm B that computes a (1 − (cid:15) ) -capacity-fitted instance G in time ˜ O ( m + T P seudoMBCF ( m, n, Θ( (cid:15)/ log n ) , C )) . Algorithm 11: NearFeasibleCostFeasibleMBCF ( G, s, t, (cid:15), C, U ) Input: A graph G = ( V, E, c, u ), two dedicated vertices s, t ∈ V , a cost function c and acapacity function u (both mapping edges in E to positive reals), an approximationparameter 0 < (cid:15) < / 64, a cost budget C ∈ R + and a real U ∈ [ OP T G,C / , OP T G,C ]. Also, an algorithm A that computes a(1 − (cid:15) )-pseudo-optimal flow. ( V , E , c , u ) ← Edge-Split ( G ). j max ← b log (2 − (cid:15) ) (2 m /(cid:15) ) c . (cid:15) = (cid:15) · j max foreach v ∈ V do u ( v ) ← min { u ( v ) , U } . for j = 0 , , . . . , j max do g j ← A (( V , E , c , u j ) , s, t, (cid:15) , C ). ∀ v ∈ V , u j +1 ( v ) ← ( u j ( v ) / g j ( v ) ≤ u j ( v ) / u j ( v ) otherwise . /* Return capacity-fitted instance. */ return G = ( V , E , c, u j max +1 )For simplicity, we assume for the rest of the section that we have a -approximation U of thevalue of the optimal MBCF solution, i.e. U ∈ [ OP T G,C / , OP T G,C ]. This guess can later beremoved by guessing values for U at the cost of a multiplicative O (log n ) factor (recall that U ≤ m by Proposition IV.1.2).We present B in Algorithm 11. The main idea behind the algorithm is to apply a techniquethat we call capacity fitting . Loosely speaking, we halve the capacity of every vertex for which thein-flow given by A is smaller-equal to half its capacity. Thus, every iteration, we roughly half thecapacity of all vertices until the flow has to use a constant fraction of the capacity of each vertex.We now prove simple claims which will then allow us to conclude Lemma IV.4.3 straight-forwardly.We start by the most important claim, that right away shows that even filling all edges in thegraph G with flow will not induce cost far beyond the cost budget C . Claim IV.4.4. For any ≤ j ≤ j max , X v ∈ V c ( v ) · u j ( v ) ≤ m (2 − (cid:15) ) j · C/(cid:15) + 10 C. In particular, we have P v ∈ V c ( v ) · u j max ( v ) ≤ C .Proof. We prove the claim by induction. For the base case j = 0, we observe that every vertex v ∈ V has cost c ( v ) at most m and u ( v ) = u ( v ) ≤ m (see Proposition IV.1.2). Since there are97nly m + n ≤ m vertices in G , we can therefore deduce P v ∈ V c ( v ) · u ( v ) ≤ m · m and wefinally use that C ≥ j j + 1 for j ≥ 0: We observe that by the inductionhypothesis, we have that: X e ∈ V c ( v ) · u j ( v ) ≤ m (2 − (cid:15) ) j · C/(cid:15) + 10 C. 
(IV.12)We recall that in the j th iteration of the for-loop starting in Line 5, we invoke algorithm A to definethe function u j +1 based on the near-pseudo-optimal-flow g j . We observe that by assumption on A and Definition IV.2.1, there is a near-optimal flow f j such that | in g j ( v ) − in f j ( v ) | ≤ (cid:15) · u j ( v ) for allvertices v ∈ V , and c ( f j ) ≤ C for the given instance.But this implies that c ( g j ) ≤ (cid:15) · X v ∈ V c ( v ) · u j ( v ) + C ≤ m (2 − (cid:15) ) j · C + (1 + 10 (cid:15) ) C. (IV.13)To avoid clutter, we define T = m (2 − (cid:15) ) j · C + (1 + 10 (cid:15) ) C for further use. We then note that thecapacity u j +1 ( v ) of every vertex v becomes u j ( v ) / g j ( v ) less than half ofits capacity u j ( v ). However, by the upper bound on the cost of g j , we have that the vertices thatcontain greater-equal to half of their capacity in flow satisfy X v ∈ V, in gj ( v ) ≥ u j ( v ) / c ( v ) · u j ( v ) ≤ T. This inequality follows from the combination of two facts. The first is that the LHS of the inequalityis at most 2 c ( g ), because the LHS only considers vertices v ∈ V through which g j sends at least u j ( v ) / c ( g ) ≤ T , as shown in Equation IV.13.Since for the rest of the vertices, the capacity is halved in u j +1 , we have X v ∈ V c ( v ) · u j +1 ( v ) ≤ T + P v ∈ V c ( v ) · u j ( v )2 (IV.14) ≤ m (2 − (cid:15) ) j · C + (2 + 20 (cid:15) ) C + 2 m (2 − (cid:15) ) j · · C/(cid:15) + 5 C (IV.15)= (1 + 4 (cid:15) ) · m (2 − (cid:15) ) j · · C/(cid:15) + (7 + 20 (cid:15) ) C (IV.16) ≤ m (2 − (cid:15) ) j · · (1 − (cid:15) ) · C/(cid:15) + (7 + 20 (cid:15) ) C (IV.17) < m (2 − (cid:15) ) j +1 · C/(cid:15) + 10 C (IV.18)where we use Equation (IV.12) and the definition of T to get (IV.14) = ⇒ (IV.15), then rearrangeterms and use 1 + x ≤ e x ≤ x + x for x ≤ ⇒ (IV.17). In the finalinequality, we use our assumption (cid:15) < / Claim IV.4.5. For every vertex v ∈ V , any ≤ j ≤ j max , X x ∈N G ( v ) u j ( x ) ≤ m (2 − (cid:15) ) j · U /(cid:15) + 10 u ( v ) . In particular, we have P x ∈N G ( v ) u j max ( x ) ≤ · u ( v ) . G in the final instance G .Since the claim below is straight-forward to obtain but tedious to derive, we defer its proofs toAppendix A.4.3. Claim IV.4.6. Define G j = ( V , E , c , u j ) to be the graph that A is invoked upon during the j th iteration of the for-loop starting in Line 5. Then, we have that for every j ≥ , OP T G j +1 ,C ≥ (cid:0) − (cid:15) (cid:1) OP T G j ,C . In particular, we have OP T G jmax ,C ≥ (1 − (cid:15) ) j max OP T G ,C ≥ (1 − (cid:15) ) OP T G ,C . We can now prove Lemma IV.4.3. Lemma IV.4.3. [Near-capacity-fitted instance via Near-pseudo-optimal MBCF] Given any <(cid:15) < , given a graph G = ( V, E, c, u ) , a dedicated source s and sink t , a cost bound C . Additionally,let there be an algorithm A that computes a (1 − (cid:15) ) -pseudo-optimal flow ˆ g in total update time T P seudoMBCF ( m, n, (cid:15), C ) .Then, there exists an algorithm B that computes a (1 − (cid:15) ) -capacity-fitted instance G in time ˜ O ( m + T P seudoMBCF ( m, n, Θ( (cid:15)/ log n ) , C )) . Proof. Let us first argue about correctness by establishing the properties claimed in Definition IV.4.2.It is immediate to see that u ( v ) = u j max +1 ( v ) ≤ u ( v ) since our algorithm only decreases capacities.By Claim IV.4.4, we also have the second property of a near-capacity-fitted instance satisfied, andby Claim IV.4.5 the third property. 
Finally, observe that by Claim IV.4.6, we immediately obtainthat OP T G ,C ≥ (1 − (cid:15) ) OP T G ,C = (1 − (cid:15) ) OP T G,C (where we use that OP T G ,C = OP T G ,C sincewe they only differ in capping the capacities at 2 U which does not affect the maximum value ofany flow by definition of U ). It remains to bounds the running time of Algorithm 11 which can beseen by straight-forward inspection of the algorithm to be O ( m log n ) plus O (log n ) invocations of A (here, we also assume that (cid:15) > /n ). IV.5 Near-Optimal MBCF via Near-pseudo-optimal MBCF in aNear-capacity-fitted instance Finally, we show how to obtain a near-optimal flow from a near-pseudo-optimal flow in a near-capacity-fitted instance. Theorem IV.5.1. For any (cid:15) > , given a graph G = ( V, E, c, u ) that satisfies the propertiesof Proposition IV.1.2, a dedicated source s and sink t and a cost budget C . Given an algorithm A that computes a (1 − (cid:15) ) -pseudo-optimal flow in time T P seudoMBCF ( m, n, (cid:15), C ) and given an al-gorithm B that computes a (1 − (cid:15) ) -capacity-fitted instance G for the given flow problem in time T CapacityF itting ( m, n, (cid:15), C ) .Then, there exists an algorithm C that computes a (1 − (cid:15) ) -optimal flow f in time ˜ O ( m ) + T P seudoMBCF ( m, n, Θ( (cid:15) ) , C ) + T CapacityF itting ( m, n, Θ( (cid:15) ) , C ) . with high probability. A Near-pseudo-optimal Flow in a capacity-fitted instance. We start by invoking algorithm B on G, s, t, C and (cid:15) , which returns a (1 − (cid:15) )-capacity-fitted instance G . We then invoke algorithm A on G , s , t , C and (cid:15) to obtain a (1 − (cid:15) )-pseudo-optimal flow ˆ g . Let g be the near-optimal flow thatproves ˆ g to be (1 − (cid:15) )-pseudo-optimal. We assume w.l.o.g. that in ˆ g , flow is only either on ( x, y ) or( y, x ) for any such pair of edges in E (here we just use flow cancellations).99 apping the Flow Back to G . Next, let us map the flow ˆ g back to G . We can firstly just applythe identity map to obtain ˆ g in G . We observe that if ˆ g would satisfy flow conservation constraintsin G (even only in the vertices V \ V ), then we could that the inverse of the transformation describedin Definition IV.4.1 to obtain G from G , and use it to map the flow on edges ( x, v e ) , ( v e , y ) (where x, y ∈ V but v e ∈ V \ V ) back to ( x, y ).But observe that if there is positive excess at a vertex v e ∈ V \ V , where again v e is the vertexassociated with edge e = ( x, y ) ∈ E , we can just route that excess back to x and y since the edges( x, v e ) and ( y, v e ) carry all the in-flow to v e (let us assume for convention that the flow is firstrouted back to x and then to y if excess is still at v e ). Since this monotonically decreases flow onevery edge (and thereby the in-flow to every vertex), it is easy to see that the resulting flow stillsatisfies capacity- and cost-feasibility constraints.Further, the resulting flow can now be mapped straight-forwardly to G . We denote this flow on G by ˆ f and again assume w.l.o.g. that ˆ f has flow either on edge ( x, y ) or on edge ( y, x ) but notboth. Routing the Remaining Excess in G . We now want to route the remaining excess in G .However, we first need to know the flow value from s to t that we want to route in G . We thereforesimply check the in-flow at t , and let F = in ˆ f ( t ) − out ˆ f ( v ). 
Next, we compute the excess vectorex ˆ f,s,t,F which is definedex ˆ f,s,t,F ( v ) = in ˆ f ( v ) − out ˆ f ( v ) if v = s, v = t in ˆ f ( v ) − out ˆ f ( v ) + F if v = s v = t Next, we want to construct a flow problem where we route a general demand χ ∈ R V (where P v χ ( v ) = 0). More precisely, we note that the vector ex ˆ f,s,t,F is a valid demand vector. We thenset up the graph to be G = ( V, E, u ), where u ( v ) = ∞ for any v ∈ V , and u ( e ) = (cid:15) · u ( v e )for each e ∈ E where v e is again the vertex in V ( G ) \ V that is associated with edge e . We do notdefine a cost function and observe that the created instance only has edge capacities by design. Feasibility Of Excess Routing. We note that we can indeed route ex ˆ f,χ s,t,F in G capacitatedby u . To see this, recall that g is the (1 − (cid:15) )-optimal flow certifying that ˆ g is (1 − (cid:15) )-pseudo-optimalin G . Let f be the flow on G obtained by mapping g to G just like we mapped ˆ g . Then, it isnot hard to see that f − ˆ f routes ex ˆ f,s,t,F . Since each edge e has | f ( e ) − ˆ f ( e ) | < (cid:15) · u ( v e ) byDefinition IV.2.1, our claim follows. Using Max-Flow for Excess Routing. We then use the following result for max flow onedge-capacitated graphs from [She13, Pen16] on G = ( V, E, u ). Theorem IV.5.2 (see Theorem 1.2 in [She13], [Pen16]) . Given a flow instance G = ( V, E, u ) and a demand vector χ (with P v χ ( v ) = 0 ). Then, there exists an algorithm that returns a flow f that obeys flow conservation constraints, and satisfies for each edge e ∈ E , f ( e ) ≤ · u ( e ) .For a graph G with polynomially bounded capacity ratio, the algorithm runs in time ˜ O ( m ) andsucceeds with high probability. We denote by f = (1 − (cid:15) ) (cid:16) ˆ f + f (cid:17) the flow obtained by combined the flow mapped from thecapacity-fitted instance and the max flow instance (after some careful scaling).100 easibility of f . From construction, it is not straight-forward to see that f satisfies flow conser-vation. Further, we have for every vertex v ∈ V ,in f ( v ) ≤ (1 − (cid:15) ) (cid:16) in ˆ f ( v ) + in f ( v ) (cid:17) ≤ (1 − (cid:15) ) u ( v ) + 2 X x ∈N ( v ) u ( x, v ) ≤ (1 − (cid:15) )(1 + 28 (cid:15) ) u ( v ) ≤ u ( v )where we used in the second inequality the feasibility of ˆ f , the guarantee from Theorem IV.5.2 on f to almost stipulate capacities, and Property 2 from Definition IV.4.2 which implies that all edgecapacities incident to v sum to at most 14 (cid:15) · u ( v ) which is a trivial upper bound on the amount offlow routed through v in f .We further have that c ( f ) ≤ (1 − (cid:15) ) (cid:0) c ( f ) + c ( f ) (cid:1) ≤ (1 − (cid:15) ) (cid:16) C + 28 · (cid:15)C (cid:17) ≤ C where we use in the second inequality that by Property 3 of Definition IV.4.2, the sum of ca-pacities times costs in G is bounded by 14 (cid:15)C and the fact that f satisfies f ( e ) ≤ u ( e ) byTheorem IV.5.2. Combined these facts prove that f is a feasible flow in G . Near-Optimality. It remains to conclude that since OP T G ,C ≥ (1 − (cid:15) ) OP T G,C by Property 4in Definition IV.4.2 and ˆ g is (1 − (cid:15) )-pseudo-optimal in G that the pseudo-flow ˆ f is (1 − (cid:15) ) -pseudo-optimal in G . Thus, we have that v ( f ) ≥ (1 − (cid:15) ) v ( ˆ f ) ≥ (1 − (cid:15) )(1 − (cid:15) ) OP T G,C . Rescaling (cid:15) bya constant factor, we obtain that f must be a (1 − (cid:15) )-optimal flow. This concludes our analysis. 
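Before moving on, here is a small sketch of the excess-routing setup just described (Python; the names are ours, and the in- and out-flow maps of the mapped-back flow $\hat f$ in $G$ are assumed to be given). It computes the target value $F$ and the demand vector $\mathrm{ex}_{\hat f,s,t,F}$ that is then handed to the edge-capacitated routine of Theorem IV.5.2; the sink-side correction carries the sign forced by the requirement that a valid demand vector sums to zero.

```python
def excess_demand(in_flow, out_flow, s, t):
    """Compute F and the demand vector ex_{fhat,s,t,F}.

    in_flow[v], out_flow[v] : in- and out-flow of fhat at vertex v in G.
    The returned demand sums to zero and is routed in the auxiliary
    edge-capacitated instance (edge capacities Theta(eps) * u''(v_e))
    using the max-flow algorithm of Theorem IV.5.2.
    """
    F = in_flow[t] - out_flow[t]          # flow value that fhat delivers at the sink
    ex = {v: in_flow[v] - out_flow[v] for v in in_flow}
    ex[s] += F                            # correction at the source
    ex[t] -= F                            # correction at the sink
    assert abs(sum(ex.values())) < 1e-9   # a valid demand vector sums to 0
    return F, ex
```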
IV.6 Putting it all Together Finally, we combine our reduction chain with the main result of Part III: Theorem III.0.2. However,instead of using the main result of Part III directly, we rather prove that it can be used straight-forwardly to implement the data structure given in Definition IV.1.3. Theorem IV.6.1. There exists an implementation of the data structure given in Definition IV.1.3where for any τ = o (log / n ) , (cid:15) > / polylog( n ) and some β = b O (1) , the data structure can beimplemented with total running time T SSSP π ( m, n, W, τ, (cid:15), ∆ , ∆ ) = b O ( m log W + ∆ + ∆ ) .Proof Sketch. The proof is almost immediate from Theorem III.0.2, except that the data structurein Theorem III.0.2 deals with edge weights, while we require vertex weights w . In [CS20] a simpletransformation was described by defining edge weights for each edge ( x, y ) to be w ( x )+ w ( y )2 . Thenfor any path P from s to t where w ( s ) = w ( t ) = 0, the weight of P with regard to this edge weightfunction is equal to the weight in the vertex-weighted graph. Unfortunately, we cannot assume w ( s ) = w ( t ) = 0, but using the same idea we can create a small workaround that is presented inthe Appendix A.4.4.Let (cid:15) > / polylog( n ). Then, plugging in Theorem IV.6.1 into Theorem IV.3.1, we obtainprocedure to find a (1 − (cid:15) )-pseudo-optimal flow f in G in total time b O ( m ) with probability at least1 − n − .Using this result in Lemma IV.4.3, we again obtain total time b O ( m ) to produce a corresponding(1 − (cid:15) )-capacity-fitted instance G with probability at least 1 − n − (there are O (log n ) invocationsof the algorithm in Theorem IV.3.1, and we can take a union bound over the failure probabilityand assume that n is larger than some fixed constant).101inally, we use the reduction in Theorem IV.5.1 with the above running times to obtain anear-pseudo-optimal flow and a capacity-fitted instance, and obtain a (1 − (cid:15) )-optimal flow f in theoriginal graph G , again in total time b O ( m ) and with success probability at least 1 − n − .To obtain a proof for our main result, Theorem IV.0.1, we point out that we assumed in thechain of reduction above that G was derived from applying the reduction in Proposition IV.1.2.However, since our dependency of run-time is purely in terms of m (and not in n ) this does notlead to an asymptotic blow-up. The proof therefore follows immediately. Acknowledgements Aaron and Thatchaphol thank Shiri Chechik for a useful discussion at an early stage of this work.102 art A: Appendix A.1 Appendix of Part I A.1.1 Related Work In addition to our discussion of previous work in Section I.1.1, we also give a brief overview ofrelated work. Dynamic SSR and SSSP in Directed Graphs. While our article focuses on the decrementalSSSP problem in undirected graphs, there is also a rich literature for dynamic SSSP in directedgraphs and also for the simpler problem of single-source reachability and the related problem ofmaintaining strongly-connected components.For fully-dynamic SSR/ SCC, a lower bound by Abboud and Vassilevska Williams [AW14]shows that one can essentially not hope for faster amortized update time than ˜ O ( m ).For decremental SSR/ SCC, a long line of research [RZ08, Łąc13, HKN14b, HKN15, CHI + O ( mn ) totalupdate time barrier to b O ( mn / ) in the deterministic setting [BGS20].While incremental SSR can be solved straight-forwardly by using a cut-link tree, the incrementalSCC problem is not very well-understood. 
The currently best algorithms [HKM + 12, BFGT15]obtain total update time ˜ O (min { m / , n } ). Further improvements to time ˜ O (min { m √ n, m / } )for sparse graphs are possible for the problem of finding the first cycle in the graph [BC18, BK20],the so-called cycle detection problem.For fully-dynamic SSSP, algebraic techniques are known to lead to algorithms beyond the ˆΘ( m )amortized update time barrier at the cost of large query times. Sankowski was the first to givesuch an algorithm [San05] which originally only supported distance queries, however, was recentlyextended to also support path queries [BHG + (cid:15) )-approximation was given by van denBrand and Nanongkai in [vdBN19].The decremental SSSP problem has also received ample attention in directed graphs [ES81,HKN14b, HKN15, GW20, BGWN20]. The currently best total update time for (1 + (cid:15) )-approximatedecremental SSSP is ˜ O (min { n , mn / } log W ) as given in [BGWN20]. Further, [BGS20] can beextended to obtain a deterministic b O ( n / log W ) total update time algorithm.The incremental SSSP problem has also been considered by Probst Gutenberg, Wein and Vas-silevska Williams in [GWW20] where they propose a ˜ O ( n log W ) total update time algorithm. Dynamic APSP. There is also an extensive literature for the dynamic all-pairs shortest pathsproblems. 103n the fully-dynamic setting a whole range of algorithms is known for different approximationguarantees, and for the particular setting of obtaining worst-case update times [HK95, Kin99,DI01, DI04, RZ04, Tho05, Ber09, RZ12, ACT14, RZ16, HKN16, ACK17, vdBN19, PGWN20].Most relevant to our work is a randomized b O ( m ) amortized update time algorithm by Bernstein[Ber09] that obtains a (2 + (cid:15) )-approximation. An algorithm with faster update time is currentlyonly known for very large constant approximation [ACT14].Similarly, in the decremental setting there has been considerable effort to obtain fast algorithms[BHS07, BR11, AC13, HKN14a, HKN16, Ber16, Che18, GWN20, CS20, KŁ20, EFGW20]. Weexplicitly highlight two contributions for undirected graphs: in [HKN16], the authors obtain a O ( mn log n ) deterministic (1 + (cid:15) )-approximate APSP algorithm (a simpler proof of which canbe found in [GWN20]) and in [Che18] an algorithm is presented that for any positive integer k maintains (1 + (cid:15) )(2 k − b O ( mn /k polylog W ).The incremental APSP problem has also recently been studied [KL19]. Hopsets. We also give a brief introduction to the literature on hopsets. Originally, hopsetswere defined and used in the parallel setting in seminal work by Cohen [Coh00]. However, due totheir fundamental role in both the parallel and the dynamic graph setting, hopsets have remainedan active area of development. Following lower bounds on the existential guarantees of hopsets[ABP17], first Elkin and Neiman [EN19] and then Huang and Pettie [HP19] obtained almost optimalhopset constructions, where the latter was based on a small modification to the classic Thorup-Zwick emulators/ hopset [TZ06]. A.1.2 Alternative Statement of Min-Cost Flow Result We can also derive the following theorem straight-forwardly from a standard reduction that ap-plies Theorem I.1.2 a polylogarithmic number of time (essentially, once can apply Theorem I.1.2recursively for O (log nC/(cid:15) )) times and then use a max flow algorithm to route the tiny amount ofremaining demand cheaply). Theorem A.1.1. 
For any (cid:15) > / polylog( n ) , consider undirected graph G = ( V, E, c, u ) where costfunction c and capacity function u map each edge and vertex to a non-negative real. Let χ ∈ R n bea demand vector. Then, there is an algorithm that in m o (1) log log C time returns a feasible flow f that routes χ (i.e. B > f = χ where B is the associated (unweighted) incidence matrix of G ). Let f ∗ be the feasible flow with B > f = χ such that c ( f ) = X e ∈ E c ( e ) · | f e | + X v ∈ V c ( v ) · ( B > | f | ) v is minimized. Then, we can compute a flow f that is feasible and satisfies k B > f − χ k ≤ (cid:15) · k χ k and c ( f ) ≤ (1 + (cid:15) ) c ( f ∗ ) in time m o (1) . The algorithm runs correctly with high probability. A.1.3 Discussion of Applications We now expand on the discussion in Section I.1.3 and add explicit reductions or pointers to paperswhere they are stated clearly. We discuss the applications in the same order as in Section I.1.3. Applications of Mixed-Capacitated Min-Cost Flow. • A O (log n )-sparsest vertex cut algorithm in almost-linear time: a through explanation ofthe reduction to O (polylog( n )) vertex-capacitated flows and presented in Lemma D.4 in the104rXiv version of [CS20]. Their reduction in turn is based by making a rather straight-forwardobservation about the sparsest cut algorithm in [KRV09].• A O (log ( n ))-approximate algorithm for computing tree-width (and the corresponding treedecomposition) in b O ( m ) time: a formal reduction statement is again given in the ArXivversion of [CS20] in their Lemma D.6. They basically use straight-forwardly the result in[BGHK95] which reduces the problem to finding sparsest vertex cuts.• A high-accuracy LP solver by Dong, Lee and Ye [DLY20] with running time to b O ( m · tw( G A ) log(1 /(cid:15) )): the result is immediate from Theorem 1.1. in [DLY20] and our almost-linear time tree decomposition algorithm.• We provide an informal proof of the result below in Appendix A.1.4. Theorem A.1.2. Given any graph G = ( V, E ) with incidence matrix B , demand vector χ ∈ R n , and differentiable cost functions c e , c v : R ≥ → R ≥ growing (super-)linearly in theirinput for each e ∈ E and v ∈ V and each c i ( x ) for i ∈ E ∪ V, x ∈ R ≥ can be computed in b O (1) time. Let f ∗ be some flow minimizing min B > f = χ c ( f ) = X e ∈ E c e ( | f e | ) + X v ∈ V c v (( B > | f | ) v ) . Then, given the above, and (cid:15) > / polylog( n ) , letting C = c ( f ∗ )min i ∈ E ∪ V c i (0) , there is an algorithmthat in m o (1) polylog( C ) time, returns a flow f with k B > f − χ k ≤ (cid:15) · k χ k such that c ( f ) ≤ (1 + (cid:15) ) c ( f ∗ ) with high probability. Previous results for p -norm flow concentrated on solving the problem to high-accuracy (i.e. O (polylog(1 /(cid:15) ) dependence on (cid:15) ) but cannot handle weights [AKPS19, KPSW19]. Applications of Decremental SSSP. • Decremental (1+ (cid:15) )-approximate all-pairs shortest paths (APSP) in total update time b O ( mn ):the algorithm is immediate from running the (1 + (cid:15) )-approximate SSSP data structure fromTheorem I.1.1 from every vertex v ∈ V . 
On query for a distance from u to v , one can thenjust query the SSSP data structure at v in constant time.• Decremental b O (1)-approximate APSP with total update time b O ( m ): to this end, we useLemma II.6.3 which implies that we can maintain a covering of vertices, such that for each D ,the diameter of each core is smaller Dn o (1) , and each core has all vertices in its SSSP ball datastructure that are at distance at most Dn o (1) from some vertex in the core. Maintaining sucha covering for every D i = 2 i , for every two vertices u, v at distance D i − ≤ dist G ( u, v ) ≤ D i ,we can locate the correct covering by testing all values of i , and then find u, v either in thesame core, or in the SSSP ball data structure located at the core of the other vertex’s core.• Fully-dynamic (2 + (cid:15) ) approximate all-pairs shortest paths with b O ( m ) update time: there isessentially a reduction from fully-dynamic (2 + (cid:15) ) approximate APSP to decremental (1 + (cid:15) )-approximate SSSP in Section 3.3. of [Ber09]. A.1.4 Proof of Near-Optimal Flow for Flow under any Increasing Cost Function We now give an informal proof of the above theorem. For convenience, we assume that χ charac-terizes an s - t flow of value F ≥ c e map to 0, by usingthe edge splitting procedure described in Part IV which increases the number of vertices and edges105o O ( m ). Let us also assume that we can roughly compute C (to a two approximation via binarysearch).Now, given this instance, let us for each vertex v discretize c v ( x ) by finding values x , x , . . . , x k such that all c v ( x ) for x ∈ [ x i , x i +1 ) have c v ( x ) ∈ C poly( m ) [(1 + (cid:15) ) i , (1 + (cid:15) ) i +1 ). Observe that thisbounds k = O (polylog( n )). We note that we might not be able just from querying the function c v to find the precise values x i but we can find ˆ x i with ˆ x i = x i − O ( m ) ) by employing binarysearch. Since these differences are tiny, i.e. only a negligible amount of flow is mischaracterized inrounded cost, we will ignore this issue altogether.Finally, we create a min-cost flow instance from G , by adding for each vertex v ∈ V , k = O (polylog( n )) copies v , v , . . . , v k to the min-cost flow instance, where each vertex v i is assignedcost C poly( m ) · (1 + (cid:15) ) i +1 and capacity x i +1 − x i . Further, for every adjacent vertices v, w in theoriginal graph G , we add edges between their copies v i , w j ∀ i, j of cost 0 and infinite capacity. It isnot hard to see that the resulting instance has ˜ O ( m log C ) edges and maximum capacity C .Then invoke Theorem I.1.2 on the created instance. A proof of correctness is straight-forward. A.1.5 Proof of Lemma I.2.9 In this section we prove Lemma I.2.9, which was stated in the overview, but never explicitly provedin the main body of the paper. See Section I.2.4 for the lemma statement and relevant notation.The algorithm to compute the function κ in Lemma I.2.9 is given in the pseudocode for Al-gorithm 12 below. The algorithm follows the basic framework of congestion balancing and is ahighly simplified version of the while loop in Line 8 of Algorithm 3. Recall that diam G ( K ) (cid:44) max x,y ∈ K dist G ( x, y ). Algorithm 12: Algorithm to construct function κ in Lemma I.2.9 Input: An undirected graph G = ( V, E ) and a set K ⊆ V Output: A capacity function κ : V → R ≥ such that ( K, κ ) forms a capacitated vertexexpander in G and P v ∈ V κ ( v ) = ˜ O ( | K | diam G ( k )). 
Initialize κ ( v ) ← v ∈ V while There exists a sparse cut ( L, S, R ) with respect to K, κ do foreach v ∈ S do κ ( v ) ← κ ( v ) return κ Correctness Analysis: We now argue that the function κ returned by the algorithm satisfiesthe output guarantees. When the algorithm terminates, ( K, κ ) trivially forms a vertex expanderin G , since the while loop only terminates when no sparse cuts remains. The bulk of the proof isshowing that P v ∈ V κ ( v ) = ˜ O ( | K | diam G ( k )).We define a potential function similar (but simpler) to the one in Definition II.3.15. Definition A.1.3. We define the potential function Π( G, K, κ ) as follows. Let P be a collection ofall embeddings P where P embeds some graph W ∗ into G such that1. W ∗ is an unweighted star with V ( W ∗ ) = K Define the cost of each vertex v ∈ V to be log( κ ( v )) . For any path P in G let c ( P ) = P v ∈ P c ( v ) .The cost of an embedding P is c ( P ) = P P ∈P c ( P ) . We define Π( G, K, κ ) = min P∈ P c ( P ) , and wecall the corresponding P the minimum cost embedding into G . κ ( v ) ≤ n ∀ v ∈ V , since once κ ( v ) ≥ n/ L, S, R ) for which v ∈ S is by definition not a sparse cut. We thus have c ( v ) ≤ log( n ) ∀ v ∈ V .This in turn implies that at all timesΠ( G, K, κ ) ≤ | K | · diam G ( K ) · log( n ) . To see this, consider the star formed by picking an arbitrary vertex v ∈ K , and then letting theembedding P contain the shortest path in G from v to x for every x ∈ K . (Note that this pathmay include vertices in G \ K .) Each v − x path contains at most diam G ( K ) vertices by definitionof diameter. We have already shown that each vertex has cost c ( v ) ≤ log( n ). Finally, there are | K | − < | K | choices for x . Thus, the cost of this embedding is ≤ | K | · diam G ( K ) · log( n ).It is also trivial to check that at the beginning of the algorithm Π( G, K, κ ) = 0 (because κ ( v ) = 1for all vertices) and that since κ only increases, Π( G, K, κ ) is monotonically increasing.Now, consider any iteration of the while loop in the algorithm that returns a sparse vertex cut( L, S, R ). Let κ be the capacity function before the cut is found, and let κ be the capacity functionafter κ ( v ) is doubled for all v ∈ S . Using an argument identical to that of Lemma II.3.19, it is easyto check that Π( G, K, κ ) ≥ Π( G, K, κ ) + | L ∩ K | / . (A.1)The basic argument here is that at least | L ∩ K | / L to S ∪ R and thus go through S . But for each vertex in v ∈ S we have κ ( v ) = 2 κ ( v ), so c ( v ) = c ( v ) + 1.This argument can be formalized using the arguments of II.3.19.Let us again consider a sparse cut ( L, S, R ) returned by the while loop. Since the cut was sparse,we have that P v ∈ S κ ( v ) < | L ∩ K | . This implies that P v ∈ V κ ( v ) = P v ∈ V κ ( v ) + P v ∈ S κ ( v ) ≤ P v ∈ V κ ( v ) + | L ∩ K | . Combining this with Equation A.1 we see that whenever P v ∈ V κ ( v ) increasesby some ∆, Π( G, K, κ ) increases by at least ∆ / 3. But we know that Π( G, K, κ ) increases mono-tonically from 0 to Π( G, K, κ ) ≤ | K | · diam G ( K ) · log( n ). These two facts combined imply that atall times P v ∈ V κ ( v ) ≤ | K | · diam G ( K ) · log( n ), as desired. Discussion of Running Time: Since Lemma I.2.9 is only concerned with the existence of afunction κ , we did not concern ourselves with the running time of the algorithm. In particular, wedid not specify how to find the sparse cut S in the while loop. 
Below, we briefly discuss how suchan algorithm could be implemented.It is not hard to check that the algorithm goes through if we allow some slack in our requirementof the spare cut returned in the while loop, and that with this slack the cut can be computedin polynomial time. One could perhaps even compute the cut in almost-linear time using moresophisticated techniques. But the total time to compute κ will still not be linear because therecould be many iterations of the while loop: each iteration might find a sparse cut ( L, S, R ) with | L ∩ K | = n o (1) , in which case the number of iterations can be as large as b O ( | K | ), so the totalrunning time would be at least b O ( m | K | ), which could be as large as b O ( mn ).The above obstacle explains why our final algorithm settles on a function κ with the slightlyweaker guarantees of Lemma I.2.10. This relaxed lemma only guarantees capacitated expansion for balanced cuts ( L, S, R ), so the while loop always returns a sarse balanced cut ( L, S, R ), or returns κ if no such cut exists. This allows us to ensure that | L ∩ K | = Ω( | K | /n o (1) ) in each iteration, sothe number of iterations is only n o (1) . Lower Bound: We now prove the lower bound of the lemma. Consider the following graph G ,with K = V ( G ). Let G A , G B be vertex expanders, with n/ a be a vertex in107 B and b a vertex in G B . The graph G contains both G A and G B , as well as a path P from a to b with n/ | K | = | V ( G ) | = n and diam G ( K ) = diam( G ) = Θ( n ).It is not hard to check that any function κ for which ( K, κ ) is a capacitated expanders must have κ ( v ) ≥ n − o (1) / v ∈ P , so P v ∈ V ( G ) κ ( v ) = b Ω( n ) = b Ω( | K | diam G ( K )), as stated in thelemma. A.2 Appendix of Part II A.2.1 CertifyCore: Finding A Large Low-diameter Subset In this section, we prove Lemma II.3.4 which is restated below. Lemma II.3.4. There is an algorithm CertifyCore ( G, K, d, (cid:15) ) with the following input: an n -vertex graph G = ( V, E, w ) , a set K ⊆ V , an integer d > , and a parameter (cid:15) > . In time O (deg G (ball G ( K, d lg n )) log n ) , the algorithm either • (Scattered): certifies that for each v ∈ K , we have | ball G ( v, d ) ∩ K | ≤ (1 − (cid:15)/ | K | , or • (Core): returns a subset K ⊆ K , with | K | ≥ (1 − (cid:15) ) | K | and diam G ( K ) ≤ d lg n . We give pseudo-code for the procedure CertifyOrReturnCore ( G, K, d, (cid:15) ) in Algorithm 13. Algorithm 13: CertifyOrReturnCore ( G, K, d, (cid:15) ) K ← K G ← G [ball G ( K, d lg n )] while | K | > (1 − (cid:15)/ | K | do Let v be an arbitrary vertex from K i ← while deg G (ball G ( v, i + 1) d )) > G (ball G ( v, i · d )) do i ← i + 1 if | ball G ( v, i + 1) · d ) ∩ K | > (1 − (cid:15)/ | K | then K ← ball G ( v, i + 1) · d ) ∩ K return K else K ← K \ ball G ( v, i · d ) G ← G \ ball G ( v, i · d ) K ← ∅ return K Here, we initially set the set K to be the full set K and set the graph G to the graph G . Then,while there are vertices in K , we choose an arbitrary vertex v from K (in Line 4). We then searchthe smallest non-negative integer i , such that deg G (ball G ( v, ( i + 1) d )) ≤ G (ball G ( v, i · d )) byrepeatedly increasing i if for the current value of i if the property is violated (by visiting anotheriteration of the while-loop starting in Line 6). Finally, when the property is satisfied, we checkwhether the number of vertices in K , in the ball ball G ( v, i · d ) larger than (1 − (cid:15)/ | K | . 
If so, wehave found a subset of K of large size and small diameter and return ball G ( v, i · d ) ∩ K to endin the first scenario of our lemma. Otherwise, we remove the vertices in K that are in the ballball G ( v, i · d ) from K and the edges incident to the ball from G .Let us now analyze the procedure more carefully by proving a series of simple claims.108 laim A.2.1. Consider an execution of the while-loop starting in Line 3 where i final is the value i takes after the algorithm leaves the while-loop starting in Line 6. Then, if we enter the else-casein Line 11, we have for every vertex w ∈ ball G ( v, (2 i final + 1) d ) that | ball G ( w, d ) ∩ K | ≤ (1 − (cid:15)/ | K | . Proof. We have by the triangle inequality that ball G ( w, d ) ⊆ ball G ( v, i final + 1) d ). But sinceby the if-condition we have that ball G ( v, i final + 1) d ) contains at most a (1 − (cid:15)/ K the claim follows. Claim A.2.2. If the if-case in Line 8 is not entered then any vertex w ∈ K has at most (1 − (cid:15)/ | K | vertices in ball G ( w, d ) ∩ K .Proof. Let us first focus on vertices w ∈ K that have some vertex w from ball G ( w, d ) that is removedfrom G at some point of the algorithm. That is, the algorithm removes ball G ( v, i final · d ) forsome v ∈ K and some number i final and w ∈ ball G ( v, i final · d ) ∩ ball G ( w, d ).For each such vertex w , we consider the while-loop iteration starting in Line 3 at the time whenthe algorithm first removes a vertex w from ball G ( w, d ) from the graph G . We observe that up toLine 13, we have never removed a vertex w from ball G ( w, d ) from G and therefore up to this point,we have that ball G ( w, d ) = ball G ( w, d ) (technically we also have to argue that G is initialized toan induced graph of G but it is clear that none of the vertices not in G are in ball G ( w, d ) either).In fact, since vertices that are removed from K are in the balls that are removed from G , wehave in fact that up to this point ball G ( w, d ) ∩ K = ball G ( w, d ) ∩ K . But since the else-case inLine 11 is entered in the iteration where the first such vertex w exists, we have by Claim A.2.1that | ball G ( w, d ) ∩ K | = | ball G ( w, d ) ∩ K | ≤ (1 − (cid:15)/ | K | ≤ (1 − (cid:15)/ | K | , as desired. Note thatwe can invoke Claim A.2.1 because w ∈ ball G ( v, i final · d ) and so w ∈ ball G ( v, (2 i final + 1) · d )as needed in Claim A.2.1.Otherwise, we have by the same argument that for any vertex w in K that had no vertex w from ball G ( v, d ) removed from G that at termination of the while-loop starting in Line 3, we haveball G ( w, d ) ∩ K = ball G ( w, d ) ∩ K . But by the while-loop condition, we have that | K | ≤ (1 − (cid:15)/ | K | at that point which establishes our claim.This claim establishes the Core Property in Lemma II.3.4. It remains to establish the firstProperty in of the Lemma where we start with proving that the diameter of the core that isreturned in the if-statement in Line 8 is small. Claim A.2.3. The integer variable i is always chosen to be at smaller n .Proof. For the sake of contradiction, let us assume that there is a time when the variable i takes avalue larger-equal than 2 lg n .We first observe that in each iteration of the while-loop starting in Line 3, the variable i isinitialized to 0. Further, whenever i is increased by one, we have that deg G (ball G ( v, ( i + 1) d )) ≤ G (ball G ( v, i · d )). Thus, we have that• deg G (ball G ( v, ≥ 1, and• for every i < n . 
we have deg_Ĝ(ball_Ĝ(v, 2(i + 1) · d)) > 2 · deg_Ĝ(ball_Ĝ(v, 2i · d)), since the variable was increased beyond this value of i.

Therefore, by induction we can straightforwardly establish that deg_Ĝ(ball_Ĝ(v, 2(2 lg n) · d)) > 2^{2 lg n} ≥ n². But this leads to a contradiction, since deg_Ĝ(ball_Ĝ(v, 2(2 lg n) · d)) is at most twice the number of edges of Ĝ, which is at most n².

We are now well-equipped to prove Lemma II.3.4, which is restated below for convenience.

Lemma II.3.4. There is an algorithm CertifyCore(G, K, d, ε) with the following input: an n-vertex graph G = (V, E, w), a set K ⊆ V, an integer d > 0, and a parameter ε > 0. In time O(deg_G(ball_G(K, 8d lg n)) · log n), the algorithm either
• (Scattered): certifies that for each v ∈ K, we have |ball_G(v, d) ∩ K| ≤ (1 − ε/2)|K|, or
• (Core): returns a subset K′ ⊆ K with |K′| ≥ (1 − ε)|K| and diam_G(K′) ≤ 8d lg n.

Proof. If Algorithm 13 returns in Line 10, then the final set K′ has size at least (1 − ε)|K|: the while-loop condition ensures that |K̂| > (1 − ε/2)|K| at the start of the iteration, and by the condition of the if-statement in Line 8, more than a (1 − ε/2)-fraction of K̂ ends up in K′. Thus, there are more than (1 − ε/2)(1 − ε/2)|K| > (1 − ε)|K| vertices in the final set K′. Further, since we only keep vertices that are contained in a single ball in Ĝ of radius 2(i + 1) · d, where i < 2 lg n by Claim A.2.3, we certainly have that the diameter of the returned set K′ in G ⊇ Ĝ is at most 2 · 2(i + 1) · d ≤ 8d lg n (where we use the radius to the center of the ball and the triangle inequality). Thus, in this case, K′ satisfies the Core property.

Otherwise, we have by Claim A.2.2 that every vertex in K has only few vertices of K in its ball of radius d, thus satisfying the Scattered property.

Finally, let us bound the running time. Here we observe that each iteration of the while-loop starting in Line 3 can first run Dijkstra's algorithm from the chosen vertex v on Ĝ to depth 2(i + 1) · d, which computes the smallest value of i together with all the information needed for the rest of the iteration. It is not hard to see that the running time of the entire loop iteration is therefore dominated by O(deg_Ĝ(ball_Ĝ(v, 2(i + 1) · d)) · log n). However, if we do not enter the if-case, we also remove Ω(deg_Ĝ(ball_Ĝ(v, 2(i + 1) · d))) edges from Ĝ in the else-case, since the ball ball_Ĝ(v, 2i · d) has at least half the volume of ball_Ĝ(v, 2(i + 1) · d) by the choice of i. Thus, the total running time of all such iterations is upper bounded by O(deg_G(ball_G(K, 8d lg n)) · log n). Since the algorithm returns upon entering the if-case, it spends only an additional O(deg_G(ball_G(K, 8d lg n)) · log n) time in the single iteration not accounted for so far. This establishes the total running time and thereby the lemma.

A.2.2 EmbedWitness: Embedding Expanders into Hypergraphs

In this section, we show the procedure used by Algorithm 3 for either finding a sparse cut or embedding an expander into a hypergraph. The algorithm is a standard combination of flow algorithms and the cut-matching game. The only non-standard element is that we need an algorithm for finding sparse cuts in hypergraphs, which we call EmbedMatching, and which was already developed in [BGS20].

We now restate the Lemma EmbedWitness that we aim to prove. See Theorem A.2.5 below for the definition of the parameter φ_cmg.

Lemma II.3.5.
There is an algorithm EmbedWitness ( H, K, κ ) that is given a hypergraph graph H = ( V, E ) , a terminal set K ⊆ V , and /z -integral vertex capacities κ : V → z Z ≥ such that κ ( v ) ≥ for all terminals v ∈ K and κ ( v ) ≤ κ ( V ) / for all vertices v ∈ V . (The integralityparameter z will appear in the guarantees of the algorithm.) The algorithm returns either • (Cut): a vertex cut ( L, S, R ) in H such that (cid:15) wit | K | ≤ | L ∩ K | ≤ | R ∩ K | and κ ( S ) ≤ | L ∩ K | ,where (cid:15) wit = φ cmg / log ( n ) is a parameter we will refer to in other parts of the paper; OR • (Witness): an embedding P that embeds a weighted multi-graph W into H with the followingguarantees: W is a weighted Ω( φ cmg ) -expander. The vertex set V ( W ) is such that V ( W ) ⊆ K and | V ( W ) | ≥ | K | − o ( | K | ) . Each edge weight is a multiple of /z , where recall that z isthe smallest positive integer such that κ : V → z Z ≥ . The total edge weight in W is O ( | K | log | K | ) . Also, there are only o ( | K | ) vertices in W with weighted degree ≤ / . – The length of P and vertex congestion of P w.r.t. κ are at most O ( κ ( V ) log( κ ( V )) / ( | K | (cid:15) wit )) and O (log | K | ) , respectively. More precisely, each path in P has length at most O ( κ ( V ) log( κ ( V )) / ( | K | (cid:15) wit )) . For each vertex v ∈ V , P P ∈P v val( P ) = O ( κ ( v ) log | K | ) where P v is the set of paths in P containing v . Moreover, each path in P is a simplepath.The running time of the algorithm is ˜ O ( | H | κ ( V ) | K | φ cmg + zκ ( V ) /φ cmg ) , where | H | = P e ∈ E | e | and z isthe smallest positive integer such that κ : V → z Z ≥ . We now recap three existing lemmas that we use to prove Lemma II.3.5 First Ingredient: Embedding Matchings into Hypergraphs Our algorithm EmbedWitness uses as a subroutine an existing algorithm from [BGS20] that isgiven a hypergraph H and either finds a sparse cut in H or embeds a perfect matching into H withlow congestion. Lemma A.2.4 ([BGS20]) . There is an algorithm EmbedMatching ( H, A, B, κ, (cid:15) ) that is given ahypergraph graph H = ( V, E ) , two disjoint sets of terminals A, B ⊆ V where | A | ≤ | B | , a vertexcapacity function κ : V → z Z ≥ such that κ ( v ) ≥ for all terminals v ∈ A ∪ B and κ ( v ) ≤ κ ( V ) / for all vertices v ∈ V , and a balancing parameter (cid:15) > . (The integrality parameter z will appearin the guarantees of the algorithm.) Then the algorithm returns either • (Sparse Cut): a vertex cut ( L, S, R ) in H such that min {| L ∩ A | , | R ∩ B |} ≥ (cid:15) | A | and κ ( S ) ≤ {| L ∩ A | , | R ∩ B |} ; OR • (Matching): an embedding P that embeds a z -integral matching M from A to B of totalvalue at least (1 − (cid:15) ) | A | into H where the congestion of P w.r.t. κ is at most and the lengthof P is at most len( P ) ≤ O ( κ ( V ) log( κ ( V )) / ( | A | (cid:15) )) . More precisely, each path in P has lengthat most O ( κ ( V ) log( κ ( V )) / ( | A | (cid:15) )) and for each vertex v ∈ V , P P ∈P v val( P ) ≤ κ ( v ) , where P v is the set of paths in P containing v . Moreover, each path in P is a simple path.The running time of the algorithm is ˜ O ( | H | κ ( V ) (cid:15) | A | + zκ ( V ) /(cid:15) ) , where | H | = P e ∈ E | e | , and z is thesmallest parameter such that κ is z -integral, i.e. such that κ : V → z Z ≥ Second Ingredient: Cut-matching GameDeterministic Cut-matching Game. The cut-matching game is a game that is played betweentwo players, called the cut player and the matching player . 
The game starts with a graph W whosevertex set V has cardinality n , and E ( W ) = ∅ . The game is played in rounds; in each round i, thecut player chooses a partition ( A i , B i ) of V with | A i | ≤ | B i | , | A i | ≥ | V | / − | B i | ≥ | V | / − /z -integral matching M i that matches every vertexof A i to some vertex of B i . (That is, the total weight of edges in M incident to each vertex in A i isexactly 1 and the total weight of edges in M incident to vertex in B i is at most 1). The edges of M i are then added to W , completing the current round. (Note that W is thus a weighted multigraph;the edges of each M i are weighted, and if M i and M j both contain an edge ( x, y ) then for simplicitywe just think of W as containing two copies of ( x, y ).) Intuitively, the game terminates once graph111 becomes a φ -expander, for some given parameter φ . It is convenient to think of the cut player’sgoal as minimizing the number of rounds, and of the matching player’s goal as making the numberof rounds as large as possible. We will use the following theorem from [CS20] which says that thereis a fast deterministic algorithm for the cut player that ends this game within O (log n ) rounds. Theorem A.2.5. [Deterministic Algorithm for Cut Player (Theorem B.5 of [CS20] or Theorem7.1 of [BGS20])]Let φ cmg = 1 / Θ(log / n ) . There is a deterministic algorithm, that, for every round i ≥ , given the graph W that serves as input to the i -th round of the cut-matching game, produces,in time O ( zn/φ cmg ) , a partition ( A i , B i ) of V with | A i | ≤ | B i | , | A i | ≥ | V | / − , | B i | ≥ | V | / − such that, no matter how the matching player plays, after R = O (log n ) rounds, the resulting graphW is a φ cmg -expander, V ( W ) = V , and every vertex in W has weighted degree at least . Third Ingredient: Expander Pruning Finally, we restate the lemma for expander pruning Lemma II.3.6 ([SW19]) . There is an algorithm Prune ( W, φ ) that, given an unweighted decremen-tal multi-graph W = ( V, E ) that is initially a φ -expander with m edges, maintains a decrementalset X ⊆ V using ˜ O ( m/φ ) total update time such that W [ X ] is a φ/ -expander at any point of time,and vol W ( V \ X ) ≤ i/φ after i updates. Proof of Lemma II.3.5 Armed with the three ingredients above, we can now present the algorithm for EmbedWitness from Lemma II.3.5 Recall that z is the integrality parameter of input function κ , i.e. the smallestpositive integer such that κ : V → z Z ≥ .The algorithm EmbedWitness ( H, K, κ ) starts by initiating the cut-matching game (TheoremA.2.5) on vertex set K . Let R = O (log( | K | )) be the maximum number of rounds in the cut-matching game. The cut player from theorem A.2.5 provides the terminal sets A i , B i at everyround i . To simulate the matching player the algorithm EmbedWitness will, in each round,either find a sparse cut and terminate or return a matching M i from A i to B i . In particular, inround i of the cut-matching game, the algorithm runs EmbedMatching ( H, A i , B i , κ, (cid:15) wit ), where (cid:15) wit = φ cmg / log ( n ) is the parameter from EmbedWitness .If EmbedMatching ( H, A i , B i , κ, (cid:15) ) returns a cut ( L, S, R ) then EmbedWitness can returnthe same cut ( L, S, R ) and terminate.The other case is that EmbedMatching ( H, A i , B i , κ, (cid:15) ) returns a matching M ∗ i from A i to B i along with a corresponding embedding P i of M ∗ i into the graph H . 
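Purely as an illustration of the round structure described so far, the following schematic Python sketch shows how EmbedWitness drives the cut-matching game. Here `cut_player` and `embed_matching` are assumed black-box interfaces standing in for Theorem A.2.5 and Lemma A.2.4 (their names and return conventions are hypothetical), and the completion of the partial matching M*_i by fake edges, as well as the later pruning step, are discussed in the text that follows.

```python
def embed_witness_rounds(H, K, kappa, eps_wit, R, cut_player, embed_matching):
    """Schematic driver for the rounds of EmbedWitness (illustration only).

    cut_player(K, matchings) -> (A, B) stands in for Theorem A.2.5;
    embed_matching(H, A, B, kappa, eps) -> ('cut', (L, S, R_side)) or
    ('matching', M_star, P) stands in for Lemma A.2.4.  Both are assumed interfaces.
    """
    matchings = []   # in the real algorithm each M_star is first completed by fake edges
    paths = []       # the embeddings P_i of the partial matchings into H
    for _ in range(R):                                    # R = O(log |K|) rounds
        A, B = cut_player(K, matchings)                   # near-balanced partition of K
        outcome = embed_matching(H, A, B, kappa, eps_wit)
        if outcome[0] == 'cut':
            return outcome                                # sparse cut: returned unchanged
        _, M_star, P = outcome                            # matching of value >= (1 - eps_wit)|A|
        matchings.append(M_star)
        paths.append(P)
    # if no cut is ever found, the union of the matchings (after adding fake edges and
    # pruning them away again, as described below) becomes the expander witness W
    return ('witness', matchings, paths)
```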
Note that the algorithmcannot simply use M ∗ i as the matching M i in the ith round of the cut-matching game becausethe cut matching game requires a matching M i of size exactly | A i | , while EmbedMatching onlyguarantees that matching M ∗ i has size (1 − (cid:15) ) | A i | . To overcome this, the algorithm chooses anarbitrary set of “fake” edges F i ∈ A i × B i such that M ∗ i ∪ F i is a matching from A i to B i of sizeexactly | A i | ; the set F i can trivially be computed by repeatedly adding edges of weight 1 /z from an(arbitrary) unsaturated vertex in A i to an (arbitrary) unsaturated vertex in B i . (Adding multiplecopies of the same edge corresponds to increasing the weight of that edge.) The algorithm thenreturns M i = M ∗ i ∪ F i inside the cut-matching game. Note that unlike the edges of M ∗ i , we do notembed the fake edges of F i into G .If in any round i the subroutine EmbedMatching returns a cut then the algorithm terminates.Thus the only case left to consider is when in each round i the algorithm returns M ∗ i and P i . Let M ∗ be the union of all the M ∗ i and let F be the union of all the F i . Let W ∗ = ( V, M ∗ ∪ F ).112heorem A. . W ∗ is a φ cmg = 1 /n o (1) expander. Note, however, that we cannotreturn W ∗ as our witness because there is no path set corresponding to F (we never embedded theedges in F ). We also cannot simply remove F as M ∗ on its own might not be an expander.Instead, we apply expander pruning from Lemma II.3.6. Recall that W ∗ = ( V, M ∗ ∪ F ).We would like to apply pruning directly to W ∗ , but Lemma II.3.6 only applies to unweightedmulti-graphs. Since EmbedMatching guarantees that all edge-weights in M ∗ are 1 /z integral,we know that all edge weights in W are also multiples of 1 /z . We can thus convert W ∗ to anequivalent unweighted multigraph W ∗ u in the natural way: every edge e ∈ W ∗ is replaced by w ( e ) · z copies of an unweighted edge. Note that W ∗ has total weight O ( | K | log( | K | )), because it contains R = O (log( | K | )) matchings, each of weight O ( | K | ); thus W u contains O ( z | K | log( | K | )) edges. Wenow run Prune ( W ∗ u , φ cmg ), where we feed in all the edges in W ∗ u corresponding to F as adversarialdeletions. Let X ⊂ K be the set returned by pruning, set W u = W ∗ u [ X ] and W = W ∗ [ X ].We now define the embedding P of W into H . We will have that P ⊆ S P i . Consider any edge( u, v ) ∈ W . By construction of W , we know that ( u, v ) comes from some M ∗ i ; ( u, v ) cannot comefrom any of the F i , because all of the edges in F were pruned away. Thus, we simply add to P thepath from P i used to embed edge ( u, v ) ∈ M ∗ i .Let us now prove that W satisfies the desired properties of EmbedWitness . We know fromthe cut-matching game (Theorem A.2.5) that W ∗ has expansion φ cmg , so the same holds for W ∗ u ,since the two graphs clearly have identical expansion. By the guarantees of pruning, W u and W thus have expansion φ cmg / φ cmg ), as desired. It is clear by construction that V ( W ∗ ) ⊂ K ,so V ( W ) ⊂ K .Let us now argue that | V ( W ) | ≥ | K | − o ( | K | ). We know that V ( W ∗ u ) = V ( W ∗ ) = K . Re-call that Our algorithm feeds all edges in W ∗ u that correspond to F as adversarial deletions to Prune ( W ∗ u , φ cmg ). 
It is not hard to check that the number of such deleted edges is at most3 zR(cid:15) wit | K | = O ( zφ cmg | K | / log( n )), because each F i contains a total weight of at most 3 (cid:15) wit | A | ≤ (cid:15) wit | K | , there are R different values of i , and by construction the multiplicity of each edge in W ∗ u is equal to z multiplied by its weight in F . Thus, recalling that X ⊆ K is the set returned by Prune ( W ∗ u , φ cmg ), we have by Lemma II.3.6 that vol W ∗ u ( K \ X ) = O ( z | K | / log( n )). This impliesthat vol W ∗ ( K \ X ) = O ( | K | / log( n )); since we know from the cut-matching game (Theorem A.2.5)that every vertex in W has weighted degree at least 1, we have that | K \ X | = O ( | K | / log( n )), so | V ( W ) | = | X | ≥ | K | − o ( | K | ), as desired.We now argue about the weights in W . The fact that total edge weight in W is at most O ( | K | log( | K | )) follows from the fact that each matching M i has weight at most | K | and there are R = O (log( | K | )) rounds of the cut matching game. Finally, we need to show that there are only o ( | K | ) vertices in W with weighted degree ≤ / 10. This follows straightforwardly from the factsthat all vertices have weighted degree ≥ W ∗ (Theorem A.2.5) and that W = W ∗ [ X ], where,as argued in the paragraph above, vol W ∗ ( K \ X ) = O ( | K | / log( n )).We now argue about the embedding P . The congestion follows from the fact that each P i embeds M ∗ i with congestion 1 with respect to κ , so since there are at most R = O (log( | K | )) roundsin the cut-matching game, the total congestion in S i P i is at most log( | K | ) with respect to κ , sothe same holds for P because P ⊆ S P i . The length and simplicity of paths in P returned by EmbedWitness follow directly from the same guarantees on P i returned by EmbedMatching .We finally analyze the running time. The algorithm runs in R = O (log( K )) = O (log( n )) rounds.In each rounds, it runs EmbedMatching with (cid:15) = (cid:15) wit = φ cmg / log ( n ); plugging in the guaranteesof EmbedMatching we see that this runtime fits into the desired runtime of EmbedWitness .Each round also runs a single iteration of the cut-matching game, which requires O ( z | K | /φ cmg ) time(Theorem A.2.5); this satisfies the desired runtime of the lemma because by the input guaranteesof Lemma II.3.5 we have κ ( v ) ≥ ∀ v ∈ V so z | K | /φ cmg ≤ zκ ( V ) /φ cmg . It is clear that the113ime to construct each F i is at most O ( z | K | ). Finally, the algorithm performs a single execution of Prune ( W ∗ u , φ cmg ); by Lemma II.3.6 this requires time ˜ O ( z | V | /φ cmg ) = ˜ O ( zκ ( V ) /φ cmg ), as desired. A.2.3 Proof of Claim II.3.9 Claim II.3.9 (Side-Conditions) . Whenever the algorithm invokes EmbedWitness ( · ) , we have1. κ ( v ) ≥ for all terminals v ∈ K ,2. κ ( v ) ≤ κ ( V ) / .Proof. Property 1: follows immediately from the initialization of κ in Line 3 and the fact that κ ismonotonically increasing over time.Property 2: EmbedWitness ( · ) is only invoked in Line 5. We prove by induction on thetime that EmbedWitness ( · ) is executed. Initially, we have that | b V | ≥ 2, and by the valueschosen for initialization in Line 3, it is immediate that the condition is true before the first time EmbedWitness ( · ) is invoked. For the inductive step, observe that in between two invocationsof EmbedWitness ( · ), the property can only be affected if the former invocation produced at cut( L, S, R ), prompting the algorithm to enter the while-loop in Line 5. 
The capacity of vertices in b V \ ( S ∪ { w } ) remains unchanged during this step. Since κ is monotonically increasing, this impliesby the induction hypothesis that only one of the vertices S ∪ { w } might violate the property. But w ∈ S is chosen in Line 7 to have maximal capacity among vertices in S , and w = w has capacityat least as large as w . It remains to show that w is not violating the property. But this followssince either the capacity of w is unchanged and we can therefore use the induction hypothesis, orit is equal to the capacity of w and therefore not more than half of the total capacity. A.3 Appendix of Part III A.3.1 Simplifying Reduction for SSSP Data Structures In this section, we prove both Proposition II.1.2 and Proposition III.1.1. However, since Propo-sition III.1.1 is a more involved version of Proposition II.1.2, we only prove the former one. Itis straight-forward from inspecting the proof that it extends seamlessly to Proposition II.1.2. Westart by restating the theorem. Proposition III.1.1. Suppose that there is a data structure SSSP π ( H, s, (cid:15), β, q ) that only worksif H satisfies the following properties: • H always stays connected. • Each update to H is an edge deletion (not an increase in edge weight). • H has maximum degree . • H has edge weights in [1 , n H ] and edges steadiness [0 , σ max + 1] .Suppose SSSP π ( H, s, (cid:15), β, q ) has T SSSP π ( m H , n H , (cid:15) ) total update time where m H and n H are num-bers of initial edges and vertices of H . Then, we can implement SSSP π ( G, s, O ( (cid:15) ) , O ( β log( W n )) , O ( q )) where G is an arbitrary decremental graph with m initial edges that have weights in [1 , W ] andsteadiness in [0 , σ max ] using total update time of ˜ O (cid:0) m/(cid:15) + T SSSP π ( O ( m log W ) , m, (cid:15) ) (cid:1) · log( W ) . For our proof, we first state the following result which is derived by a straight-forward extensionof Theorem 2.3.1. in [PG20]. 114 heorem A.3.1 (see [PG20]) . For any /n < (cid:15) < / , given a data structure that maintains SSSP π ( H, s, (cid:15), β, q ) on any graph H with edge weights in [1 , n H ] in time T SSSP ( m H , n H , (cid:15) ) (wherewe assume that distance estimates are maintained explicitly). Then, there exists a data structure,that maintains SSSP π ( G, s, (cid:15), β, q ) on a graph G with weights in [1 , W ] for any W in time ˜ O ( m/(cid:15) + T SSSP π ( m, n, (cid:15) )) · log( W ) . We note that all graphs H on which the SSSP data structure is run upon are subgraphs of G at anystage. We can then apply the following series of transformations of G to derive Proposition III.1.1. Ensuring Connectivity. Given the decremental graph G , we use a Connectivity data structure(see [HDLTT01, WN13]) which allows us to remove any edge deletion that disconnects the graphfrom the update sequence. We let G be the resulting decremental graph. We can then run theSSSP data structure only on G instead of G . To obtain a distance estimate from the source tosome vertex v in V , we can first query the Connectivity Data Structure on G if v and the sourceare in the same connected component. If not, we return ∞ . Otherwise, we forward the query tothe SSSP data structure and return the distance estimate. For a formal argument that this givescorrect distance estimates, we refer the reader to [GWN20]. Edge Deletions, no Weight Increases. 
For the second property, we preprocess G so that foreach edge ( u, v ) of weight w G ( u, v ), we split ( u, v ) into d log W n e multi-edges of weight w G ( u, v ) , (1+ (cid:15) ) · w G ( u, v ) , (1 + (cid:15) ) · w G ( u, v ) , . . . respectively. Then, an edge weight increase of ( u, v ) to w G ( u, v )can be emulated by deleting all versions of ( u, v ) that have weight smaller w G ( u, v ) from thegraph. It is not hard to see that the resulting decremental graph preserves all distance to a(1 + (cid:15) )-approximation, only undergoes edge deletions, not edge weight increases and has at most O ( m log W ) edges. We denote by G the resulting graph. Ensuring Small Degree. Given the decremental graph G , we can for each vertex v with degreedeg G ( v ) > 3, add deg G ( v ) vertices to G and connect them among each other and with v by apath where we assign each edge the weight 1 /n . Then, we can map each edge that was originallyin G and incident to v to one vertex on the path. It is not hard to verify that after thesetransformations the resulting graph G has maximum degree 3 and each distance is increased byat most an (cid:15) fraction (this follows since the original paths might now also have to visit the newlycreated line paths but these paths consist of at most m ≤ n edges, thus the total contribution foreach vertex on the path is at most 1 /n < (cid:15) but each original edge on the path has weight at least 1by assumption). Note that the number of vertices in G is at most m + n ≤ m and the number ofedges is at most 2 m . Also note that we can multiply all edge weight above by n to satisfy againthat all edge weights are positive integers. This increase the weight ratio to W n . We denote by G the resulting graph. Ensuring Small Weight Ratio. Finally, we can apply Theorem A.3.1 on G to obtain a datastructure SSSP π ( G , s, (cid:15), β, q ). Observe that each distance estimate maintained by this datastructure from s to a vertex in V , approximates the distance in G by a factor of (1 + (cid:15) )(1 + 6 (cid:15) ) =(1 + O ( (cid:15) )). 115 ueries on G . Finally, we discuss how to conduct path-queries. We point out that given thedata structure SSSP π ( G , s, (cid:15), β, q ), when we conduct a path-queries, for a path π ( s, t ) from s tosome vertex t , it returns edges in G instead of G . However, it is rather straight-forward by goingbackwards through the transformations from G to G , to see that each such path can be mappedback unambiguously to a s - t path in G of weight at most equal to the weight in G .We consider this path in G that the path is mapped to and discuss how to implement thesubpath-query for an index j ∈ [0 , σ max ] given, in time O ( σ ≤ j ( π ( s, t )) · q ). To this end, we do thefollowing: for each edge e = ( u, v ) in G , we only keep the heaviest copy of e in G . Note that anypath including such an edge copy has weight ≥ nW , thus we can ignore all such paths in our queryand are therefore ensured that no edge that was deleted from G but not G appears on any path.For G , we give each copy of an edge e = ( u, v ), the steadiness σ ( e ). Note that if we maintaina data structure that maintains β -edge-simple paths, then the edge e and its copy are present atmost β · log W n times. For the transformation to G , we simply give each edge that was not inthe graph formerly (i.e. is used to split a vertex into multiple vertices of low degree), the higheststeadiness class σ max + 1. Such edges, do not appear in any path query since the highest steadinessin G was σ max . 
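To illustrate the edge-splitting step from the beginning of this subsection (emulating weight increases by deletions of geometrically spaced copies), here is a minimal Python sketch. The function names and the exact cap on the copy weights are illustrative assumptions; the (1 + ε) spacing matches the (1 + ε)-approximation argued above.

```python
import math

def split_edges(edges, eps, W, n):
    """Sketch of the splitting step: each edge (u, v) of weight w is replaced by about
    log_{1+eps}(W * n) parallel copies of weights w, (1+eps)w, (1+eps)^2 w, ...
    (the cap W * n on the copy weights is an illustrative assumption)."""
    copies = {}
    for (u, v), w in edges.items():
        k = max(1, math.ceil(math.log(W * n) / math.log(1 + eps)))
        copies[(u, v)] = [w * (1 + eps) ** j for j in range(k)]
    return copies

def increase_weight(copies, u, v, new_w):
    """Emulate raising the weight of (u, v) to new_w by deleting every lighter copy;
    the lightest surviving copy over-estimates the new weight by at most a (1+eps) factor."""
    copies[(u, v)] = [x for x in copies[(u, v)] if x >= new_w]
    return min(copies[(u, v)]) if copies[(u, v)] else None
```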
It is straight-forward to establish that this ensures the properties stated in theProposition. A.4 Appendix of Part IV A.4.1 Proof of Proposition IV.1.2 Proposition IV.1.2. Given G = ( V, E, c, u ) with as defined above with capacities and costs takenover E ∪ V , C and /n < (cid:15) < and m ≥ . Then, there is a G = ( V , E , c , u ) with s and t and C = 32 m such that:1. ( x, y ) ∈ E iff ( y, x ) ∈ E . Further, for each ( x, y ) ∈ E , c ( x, y ) = 0 and u ( x, y ) = ∞ , and2. c ( s ) = c ( t ) = 0 , and3. V is of size n G ≤ m + n + 2 , E of size m G ≤ m + 4 , and4. for each vertex x ∈ V ( G ) , c ( x ) ∈ [1 , m ] ∪ { } and u ( x ) ∈ [1 , m ] , and5. there is a map M G → G that maps any (1 − (cid:15) ) -optimal s - t flow f in G to a (1 − (cid:15) ) -optimal s - t flow f in G . The flow map can be applied in O ( m ) time and G can be computed in O ( m log n ) time. In order to prove the proposition, we start by computing a crude approximation to OP T G,C . Claim A.4.1. In time O ( m log n ) , we can compute ˜ U , such that OP T G,C / m ≤ ˜ U ≤ OP T G,C .Proof. In order to find such ˜ U , we use Algorithm 14.Let us now carry out the analysis of correctness for the algorithm:• OPT G , C / ≤ ˜U : We aim to show that for some iteration i , we have λ i ≥ OP T G,C / m .We start by observing that we have for any feasible s - t flow f in G of value OP T G,C , thatthere is some s - t path P in G , such that each edge carries at least OP T G,C /m flow.Let i be the smallest index as defined in the algorithm, such that P is contained in G i , i.e.let e i be the heaviest edge on P in terms of c approx . Next, observe that since T i is a maximumspanning forest with regard to u approx , the path T i ( s, t ) has min-capacity larger-equal to P , i.e.larger-equal to OP T G,C /m . In particular, this means min e ∈ T i ( s,t ) u approx ( e ) ≥ OP T G,C / m .116 lgorithm 14: CrudeApproxOpt ( G, C ) foreach e = ( x, y ) ∈ E do u approx ( e ) ← min { u ( e ) , u ( x ) , u ( y ) } . c approx ( e ) ← max { c ( e ) , c ( x ) , c ( y ) } . Let e , e , . . . , e m be an ordering of the edges such that c approx ( e i ) ≤ c approx ( e i +1 ) for all i . for i ∈ { , . . . m } do E i ← { e , e , . . . , e i } . Compute a maximum spanning forest T i in G i = ( V, E i ) with edges weighted by u approx . λ i ← min { min e ∈ T i ( s,t ) u approx ( e ) , C/ (2 m · c approx ( e i )) } . return ˜ U = max i λ i Further, routing a single unit of flow along P is at cost at least c approx ( e i ). Also, we now fromabove that c approx ( e i ) · OP T G,C /m ≤ C . Thus, C/ (2 m · c approx ( e i )) ≥ OP T G,C / m . Thisestablishes the case.• ˜U ≤ OPT G , C : Observe for each iteration i , the amount λ i can be routed in G since the path T i ( s, t ) has min-capacity at least λ i and each of the at most m edges and n ≤ m vertices on T i ( s, t ) contributes at most λ i · c approx ( e i ) ≤ ( C/ (2 m · c approx ( e i ))) · c approx ( e i ) ≤ C/ m cost,as desired.Finally, let us discuss the running time of Algorithm 14. We observe that the ordering of e , e , . . . , e m can be done in O ( m log n ) time using classic sorting algorithms. For the for-loop, we observe thatwe can use a dynamic tree data structure in combination with Prim’s classic maximum spanningforest algorithm (see [Tar83, ST83, CLRS09]). This allows us to implement each loop iteration inonly O (log n ) time, since we can use the dynamic tree also to check for the min-capacity on T i ( s, t )in iteration i in O (log n ) time. 
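For concreteness, the following simplified Python sketch implements Algorithm 14 by recomputing the maximum spanning forest from scratch for each prefix of the sorted edge list. It therefore runs in roughly O(m² log m) time rather than the O(m log n) obtained above with dynamic trees, but it computes the same quantity Ũ. Edge and vertex capacities and costs are passed in single dictionaries keyed by edge tuples and by vertices.

```python
def maximum_spanning_forest(edges, weight):
    """Kruskal on -weight with union-find; returns the forest's edge list."""
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    forest = []
    for e in sorted(edges, key=lambda e: -weight[e]):
        a, b = find(e[0]), find(e[1])
        if a != b:
            parent[a] = b
            forest.append(e)
    return forest

def bottleneck_on_forest_path(forest, weight, s, t):
    """Minimum weight on the unique s-t path in the forest (None if s, t disconnected)."""
    adj = {}
    for (x, y) in forest:
        adj.setdefault(x, []).append((y, (x, y)))
        adj.setdefault(y, []).append((x, (x, y)))
    stack, seen = [(s, float('inf'))], {s}
    while stack:
        v, b = stack.pop()
        if v == t:
            return b
        for (w, e) in adj.get(v, []):
            if w not in seen:
                seen.add(w)
                stack.append((w, min(b, weight[e])))
    return None

def crude_approx_opt(E, u, c, s, t, C):
    """Sketch of CrudeApproxOpt (Algorithm 14): a polynomial-factor estimate of OPT_{G,C}."""
    m = len(E)
    u_approx = {e: min(u[e], u[e[0]], u[e[1]]) for e in E}
    c_approx = {e: max(c[e], c[e[0]], c[e[1]]) for e in E}
    order = sorted(E, key=lambda e: c_approx[e])
    best = 0.0
    for i in range(1, m + 1):
        forest = maximum_spanning_forest(order[:i], u_approx)
        bottleneck = bottleneck_on_forest_path(forest, u_approx, s, t)
        if bottleneck is None:
            continue                      # s and t not yet connected in this prefix
        cap = C / (2 * m * c_approx[order[i - 1]]) if c_approx[order[i - 1]] > 0 else float('inf')
        best = max(best, min(bottleneck, cap))
    return best
```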
This completes the analysis.We assume henceforth that we have ˜ U with guarantees described in Claim A.4.1. We can nowdescribe how to obtain G from G as stated in proposition IV.1.2. Throughout this section, we usethe parameters τ u = ˜ U / m and τ c = C · m/ ˜ U . Using these two parameters, we define two refinedversions of V and E that restrict them to include only items of reasonable cost and capacity: V reasonable = { v ∈ V | u ( x ) ≥ τ u and c ( x ) ≤ τ c } E reasonable = { e ∈ E | u ( e ) ≥ τ u and c ( e ) ≤ τ c } ∩ ( V reasonable × V reasonable ) . Given these preliminaries, we can now define G . Vertex Set V . We define V , the vertex set of G , to consist of the vertices in V reasonable , twospecial vertices s , t and an additional vertex v x,y for each pair of anti-parallel edges ( x, y ) , ( y, x ) ∈ E reasonable (here v { x,y } = v { y,x } ). Edge Set E . We define the edge set E to be such that for each edge ( x, y ) ∈ E reasonable , thatthere are two edges ( x, v { x,y } ) , ( v { x,y } , y ). Finally, we insert edges ( s, s ) , ( s , s ) , ( t, t ) , ( t , t ) into E . Cost and Capacity Functions. We list the edge and vertex capacities and cost in detail in thelist below. Here, we define γ u = τ u and γ c = C/ (4 ˜ U m ).Finally, we also define C = C/ ( γ c · γ u ) = 32 m . We now prove Proposition IV.1.2, Propertyby Property: 117tem Cost Capacity e ∈ E c ( e ) = 0 u ( e ) = ∞ x ∈ { t , s } c ( x ) = 0 u ( x ) = ˜ U · m /γ u x ∈ V reasonable c ( x ) = max { c ( x ) /γ c , } u ( x ) = min { u ( x ) , ˜ U · m } /γ u ( x, y ) , ( y, x ) ∈ E reasonable c ( v { x,y } ) = max { c ( x, y ) /γ c , } u ( v { x,y } ) = min { u ( x, y ) , ˜ U · m } /γ u 1. Observe first that for any ( x, y ) ∈ E with ( x, y ) ∈ E reasonable , we also have ( y, x ) ∈ E reasonable by definition. Further, since we insert for each such anti-parallel edges ( x, y )( y, x ), the edges( x, v { x,y } ) , ( v { x,y } , y ) , ( y, v { x,y } ) , ( v { x,y } , x ), and since the only other edges inserted are theedges ( s, s ) , ( s , s ) , ( t, t ) , ( t , t ), we have that all edges in E are anti-parallel.2. By definition of the cost and weight functions in s , t .3. Each vertex in V is uniquely associated to either a vertex from V , or an edge from E , or is s or t . Thus, V is of size at most m + n + 2. Since we split each edge in E into two andadd these edges to E , and then only add an additional 4 edges, we have that E is of size atmost 2 m + 4.4. Since by definition of V reasonable and E reasonable all elements of these sets are mapped by u toa real of size at least τ u = γ u , we have that all capacities in G are at least 1. Further, allcapacities are capped at ˜ U · m /γ u = 16 m .For the costs, we observe that all costs of elements in V reasonable and E reasonable are at most τ c = C · m/ ˜ U in G . By setting γ c = C/ (4 ˜ U m ), we further have that the largest cost in G is C · m/ ˜ UC/ (4 ˜ Um ) = 32 m . The smallest cost is at least 1 since we set each c ( x ) for x ∈ V to beat least 1 by definition.5. First, consider any feasible s - t flow f in G of value F . Here, we can assume w.l.o.g.that only a single anti-parallel edge carries any flow. We can then construct a flow f in G by assigning for each ( x, y ) , ( y, x ) ∈ E reasonable the flow f ( x, y ) = f ( x, v { x,y } ) · γ u and f ( y, x ) = f ( v { x,y } , y ) · γ u . It is not hard to see that f is a feasible flow in G and of value F · γ u . 
That is, we can map each flow in G to a flow f in G of the same flow value (up toscaling by γ u ).It thus only remains to show that there is a feasible s - t flow f of flow value at least (1 − (cid:15) ) · OP T G,C /γ u . To this end, let f be a feasible s - t flow in G of value OP T G,C . Further let P bea flow path decomposition of f , where each P ∈ P sends v ( P ) flow from s to t .Let P be the set of paths P ∈ P such that P is fully contained in G [ E reasonable ]. Then,construct a flow f in G by routing for each path P ∈ P , v ( P ) · γ u · (1 − (cid:15)/ 4) units of flowalong the corresponding path in G (i.e. map each edge ( x, y ) in P to ( x, v { x,y } ) , ( v { x,y } , y ) toobtain a path in G ).We claim that f is a feasible s - t flow in G of value at least (1 − (cid:15) ) OP T G,C /γ u (and thus canbe easily extended to a feasible s - t -flow of the same value). To see this, let us first observethat capacity constraints in G are equal to the ones in G up to scaling by γ u and capping atthe optimum flow value of f . Thus, capacity constraints are satisfied in G .For cost-feasibility, we observe that the only way that costs are increased (after scaling by γ c ) is if a cost c ( x ) /γ c was so small that c ( x ) is rounded up to 1. Since we scale the flow notonly by γ u but also by (1 − (cid:15)/ f in G of at most C/ ( γ u · γ c ) · (1 − (cid:15)/ 4) = (1 − (cid:15)/ C . Butrounding up small costs, results in additional cost of f of at most U m/γ u · ≤ m . But118e have that C = 32 m , thus this is at most a (cid:15)/ C for every (cid:15) ≥ /n . Thisestablishes cost-feasibility of f in G .It remains to show that the flow value of f is large. Now, if we would have that every pathin P would be also in P , then the flow f would be exactly of value OP T G,C · γ u · (1 − (cid:15)/ P that is not in P must have carried a small flow anyway sinceeither• the capacity of some vertex or edge x on the path P was smaller τ u . But note that thisimplies that such P carried at most τ u units of flow by capacity-feasibility. But thereare at most m such paths, thus the total amount of flow in G along such paths is upperbounded by m · τ u = ˜ U / m .• the cost of some vertex or edge x on path P was larger than τ c . But then, we have that thetotal amount of flow in G on all such paths can be at most C/τ c ≤ ˜ U / m ≤ OP T G,C / m .Combined, all paths in P that do not participate in P carried at most a ˜ U / m ≤ OP T G,C / m units of flow in f which is just a ( (cid:15)/ f is of value at least (1 − (cid:15)/ − (cid:15)/ · OP T G,C · γ ≥ (1 − (cid:15) ) · OP T G,C · γ .Thus, any (1 − (cid:15) )-optimal flow f has flow value at least (1 − (cid:15) ) · OP T G,C and we have asimple transformation of f to a flow f in G of value (1 − (cid:15) ) · OP T G,C .The running time of applying flow map and for computing ˜ U are rather straight-forward, thelater is implied by Claim A.4.1. A.4.2 Proof of Claim IV.4.5 Let us restate and prove Claim IV.4.5. Claim IV.4.5. For every vertex v ∈ V , any ≤ j ≤ j max , X x ∈N G ( v ) u j ( x ) ≤ m (2 − (cid:15) ) j · U /(cid:15) + 10 u ( v ) . In particular, we have P x ∈N G ( v ) u j max ( x ) ≤ · u ( v ) .Proof. We prove by induction on j . For j = 0, observe that there are at most m edges incident toa vertex v in V , and for each edge that is incident to v in G there is exactly one vertex in G that isin v ’s neighborhood. 
Since we set u ( x ) to at most 2 U for each vertex in G , the base case follows.For the induction step j j + 1, we observe that by the induction hypothesis we have that P x ∈N G ( v ) u j ( x ) ≤ m (2 − (cid:15) ) j · U /(cid:15) + 10 · u ( v ). Next, we observe that by assumption on A , there isa feasible flow f j such that | in f j ( z ) − in g j ( z ) | ≤ (cid:15) · u ( y ) and we have out f j ( z ) = in f j ( z ) ≤ u j ( z ) foreach z ∈ V (and in particular for z ∈ N ( v ) ∪ { v } ).Observing that each vertex v e ∈ N G ( v ) corresponds to an edge e = ( x, y ) in E , we have that v e has one in-edges ( x, v e ) and one out-edge ( v e , y ) in G . Using the above facts, it is thus not hardto derive that X v e ∈N G ( v ) ,e =( x,y ) g j ( x, v e ) + g j ( v e , y ) ≤ · m (2 − (cid:15) ) j · U + (cid:15) · · u ( v ) + 2 · u ( v ) ≤ · m (2 − (cid:15) ) j · U + (2 + 20 (cid:15) ) · u ( v ) . Next, we observe that the total capacity of edges ( x, v ) or ( v, x ) in E that carry flow greater-equalto half the capacity of either x or v can have at most total capacity 2 times the right-hand-side ofthe above equation by a simple pigeonhole-principle style argument.119ince the rest of the capacities are halved, we thus have that X x ∈N G ( v ) u j +1 ( x ) ≤ (cid:18) · m (2 − (cid:15) ) j · U + (2 + 20 (cid:15) ) · u ( v ) (cid:19) + m (2 − (cid:15) ) j · U /(cid:15) + 10 · u ( v )2 ≤ (1 + 8 (cid:15) ) · m (2 − (cid:15) ) j · · U /(cid:15) + (9 + 40 (cid:15) ) · u ( v ) ≤ m (2 − (cid:15) ) j +1 · U /(cid:15) + 10 · u ( v )where we use (cid:15) ≤ / 40 and 1 + x ≤ e x ≤ x + x for x ≤ A.4.3 Proof of Claim IV.4.6 Let us restate and prove Claim IV.4.6. Claim IV.4.6. Define G j = ( V , E , c , u j ) to be the graph that A is invoked upon during the j th iteration of the for-loop starting in Line 5. Then, we have that for every j ≥ , OP T G j +1 ,C ≥ (cid:0) − (cid:15) (cid:1) OP T G j ,C . In particular, we have OP T G jmax ,C ≥ (1 − (cid:15) ) j max OP T G ,C ≥ (1 − (cid:15) ) OP T G ,C .Proof. Observe that in the j th iteration of the for-loop, we obtain a (1 − (cid:15) )-pseudo-optimal flow g j with regard to the current instance G j by Theorem IV.3.1. By Definition IV.2.1, this implies thatthere is a feasible flow f j in G j of value at least (1 − (cid:15) ) OP T G j ,C with | in g j ( v ) − in f j ( v ) | < (cid:15) · u j ( v ).Now, consider the flow f j = (1 − (cid:15) ) f j , we claim that f j is feasible in G j +1 which implies ourclaim, since it is straight-forward to see that OP T G j +1 ,C ≥ v ( f j ) = (1 − (cid:15) ) v ( f j ) ≥ (1 − (cid:15) )(1 − (cid:15) ) OP T G j ,C ≥ (1 − (cid:15) ) OP T G j ,C where v ( · ) gives the value of the flow, and where we used the feasibility of f j in G j +1 for the firstinequality and 1 + x ≤ e x ≤ x + x for x ≤ 1, and (cid:15) < / f j is feasible, observe first that we have for any vertex v , with in f j ( v ) ≤ u j ( v ) / v since u j +1 ( v ) ≥ u j ( v ) / ≥ in f j ( v ) .On the other hand, if in f j ( v ) ≥ u j ( v ) / 2, we have that the flowin f j ( v ) ≥ (1 + 2 (cid:15) ) u j ( v ) / u j ( v ) / (cid:15) · u j ( v )where we again use 1 + x ≤ e x ≤ x + x for x ≤ 1, and (cid:15) < / 2. 
But since in g j ( v ) differs fromin f j ( v ) by at most (cid:15) · u j ( v ), we have that in g j ( v ) ≥ u j ( v ) / u j +1 ( v ) = u j ( v ).Thus, since in f j ( v ) < in f j ( v ) ≤ u j ( v ), we also have that the capacity constraint is satisfied for theseedges.For the final claim, we observe that OP T G jmax ,C ≥ (cid:0) − (cid:15) (cid:1) j max OP T G ,C ≥ (1 − (cid:15) ) OP T G ,C since (1 − (cid:15) ) j max ≥ e − (cid:15) · j max = e − (cid:15) ≥ − (cid:15) by the definition of (cid:15) and 1 + x ≤ e x ≤ x + x for x ≤ 1, and (cid:15) < / 64. 120 .4.4 Proof of Theorem IV.6.1 Theorem IV.6.1. There exists an implementation of the data structure given in Definition IV.1.3where for any τ = o (log / n ) , (cid:15) > / polylog( n ) and some β = b O (1) , the data structure can beimplemented with total running time T SSSP π ( m, n, W, τ, (cid:15), ∆ , ∆ ) = b O ( m log W + ∆ + ∆ ) .Proof. Let us take the original graph G with vertex weights w . We create two instances of the datastructure in Theorem III.0.2:• We first define w to be the vertex weights over V such that for all v ∈ V \ { s, t } , w ( v ) = w ( v )and w ( s ) = 0 and w ( t ) = 2 · w ( t ). We then define an edge weight function w ( x, y ) = w ( x )+ w ( y )2 for ( x, y ) ∈ E , that takes the average weight over the endpoints of each edge. Welet e d ( t ) denote the estimate maintained for the distance from s to t in the graph weightedby w . Observe that any 1-simple s to t path in w has equal weight as in w (recall thatthe first vertex on the path does not incur any weight contributing in our definition). Thus,dist w ( s, t ) ≤ e d ( t ) ≤ (1 + (cid:15) )dist w ( s, t ), i.e. the distance estimate is with regard to vertexweights w .• Next, let us define a weight function w over the vertices, defined by w ( v ) = w ( v ) for v ∈ V \ { s, t } , w ( s ) = (cid:15) e d ( t ) / w ( t ) = 2 · w ( t ) (observe that w only differs from w in s ). Finally, we define an edge weight function w ( x, y ) = w ( x )+ w ( y )2 for ( x, y ) ∈ E .We then run a data structure E as described in Theorem III.0.2 on w and set the approx-imation parameter to (cid:15) = (cid:15)/ 16. Observe that the shortest s to t path in w has weight atmost dist w ( s, t ) + w ( s ) / < (1 + (cid:15)/ w ( s, t ), and that each s to t path in w is of evensmaller weight in w .Thus, the vertex s can only occur at most once on any (1 + (cid:15) )-approximate shortest pathfrom s to t by the size of w ( s ) (this is important since w ( s ) might be very large). Therefore,any such path is (1 + (cid:15) )(1 + (cid:15)/ ≤ (1 + (cid:15) )-approximate with respect to w .We conclude that the s - t paths maintained by E are (1 + (cid:15) )-approximate and using thefeature of path queries straight-forwardly, we can implement a data structure as described inDefinition IV.1.3.The update time then follows simply by using the bounds from Theorem III.0.2.121 art B: Bibliography [ABD + 06] James Aspnes, Costas Busch, Shlomi Dolev, Panagiota Fatourou, Chryssis Georgiou,Alexander A Shvartsman, Paul G Spirakis, and Roger Wattenhofer. Eight open prob-lems in distributed computing. Bulletin of the EATCS , 90:109–126, 2006. 2[ABP17] Amir Abboud, Greg Bodwin, and Seth Pettie. A hierarchy of lower bounds for sub-linear additive spanners. In Proceedings of the Twenty-Eighth Annual ACM-SIAMSymposium on Discrete Algorithms, SODA 2017, Barcelona, Spain, Hotel Porta Fira,January 16-19 , pages 568–576, 2017. 104[AC13] Ittai Abraham and Shiri Chechik. 
Dynamic decremental approximate distance oracleswith (1 + (cid:15), 2) stretch. arXiv preprint arXiv:1307.1516 , 2013. 104[ACK17] Ittai Abraham, Shiri Chechik, and Sebastian Krinninger. Fully dynamic all-pairsshortest paths with worst-case update-time revisited. In Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms , pages 440–452. SIAM,2017. 104[ACT14] Ittai Abraham, Shiri Chechik, and Kunal Talwar. Fully dynamic all-pairs shortestpaths: Breaking the o (n) barrier. In Approximation, Randomization, and Combina-torial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014) . SchlossDagstuhl-Leibniz-Zentrum fuer Informatik, 2014. 104[AKPS19] Deeksha Adil, Rasmus Kyng, Richard Peng, and Sushant Sachdeva. Iterative re-finement for lp-norm regression. In Proceedings of the Thirtieth Annual ACM-SIAMSymposium on Discrete Algorithms , pages 1405–1424. SIAM, 2019. 105[AMI01] E AMIR. Efficient approximation for triangulation of minimum treewidth. Proc. 17thUAI’01, San Francisco, CA, USA , pages 7–15, 2001. 3[Ami10] Eyal Amir. Approximation algorithms for treewidth. Algorithmica , 56(4):448–479,2010. 3[AMV20] Kyriakos Axiotis, Aleksander Mądry, and Adrian Vladu. Circulation control for fasterminimum cost flow in unit-capacity graphs. FOCS’2020 , 2020. 2[ASZ20] Alexandr Andoni, Clifford Stein, and Peilin Zhong. Parallel approximate undirectedshortest paths via low hop emulators. In Proceedings of the 52nd Annual ACMSIGACT Symposium on Theory of Computing , pages 322–335, 2020. 2[AW14] Amir Abboud and Virginia Vassilevska Williams. Popular conjectures imply stronglower bounds for dynamic problems. In , pages 434–443. IEEE, 2014. 1, 103[BBG + 20] Aaron Bernstein, Jan van den Brand, Maximilian Probst Gutenberg, DanuponNanongkai, Thatchaphol Saranurak, Aaron Sidford, and He Sun. Fully-dynamic graphsparsifiers against an adaptive adversary. arXiv preprint arXiv:2004.08432 , 2020. 2,4, 17 122BBV04] Stephen Boyd, Stephen P Boyd, and Lieven Vandenberghe. Convex optimization .Cambridge university press, 2004. 84[BC16] Aaron Bernstein and Shiri Chechik. Deterministic decremental single source shortestpaths: beyond the o (mn) bound. In Proceedings of the forty-eighth annual ACMsymposium on Theory of Computing , pages 389–397. ACM, 2016. 2, 41, 43[BC17] Aaron Bernstein and Shiri Chechik. Deterministic partially dynamic single sourceshortest paths for sparse graphs. In Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms , pages 453–469. SIAM, 2017. 9, 10, 43[BC18] Aaron Bernstein and Shiri Chechik. Incremental topological sort and cycle detectionin expected total time. In Proceedings of the Twenty-Ninth Annual ACM-SIAM Sym-posium on Discrete Algorithms, SODA 2018, New Orleans, LA, USA, January 7-10,2018 , pages 21–34, 2018. 103[BDD + 16] Hans L Bodlaender, Pål GrÇ¿nås Drange, Markus S Dregi, Fedor V Fomin, DanielLokshtanov, and Michał Pilipczuk. A cˆkn 5-approximation algorithm for treewidth. SIAM Journal on Computing , 45(2):317–378, 2016. 3[Ber09] Aaron Bernstein. Fully dynamic (2+ ε ) approximate all-pairs shortest paths with fastquery and close to linear update time. In , pages 693–702. IEEE, 2009. 4, 6, 104, 105[Ber16] Aaron Bernstein. Maintaining shortest paths under deletions in weighted directedgraphs. SIAM Journal on Computing , 45(2):548–574, 2016. 9, 104[Ber17] Aaron Bernstein. Deterministic partially dynamic single source shortest paths inweighted graphs. In LIPIcs-Leibniz International Proceedings in Informatics , vol-ume 80. 
Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik, 2017. 2, 43[BFGT15] Michael A. Bender, Jeremy T. Fineman, Seth Gilbert, and Robert E. Tarjan. Anew approach to incremental cycle detection and related problems. ACM Trans.Algorithms , 12(2), December 2015. 103[BGHK95] Hans L Bodlaender, John R Gilbert, Hjálmtyr Hafsteinsson, and Ton Kloks. Approx-imating treewidth, pathwidth, frontsize, and shortest elimination tree. J. Algorithms ,18(2):238–255, 1995. 3, 105[BGS20] Aaron Bernstein, Maximilian Probst Gutenberg, and Thatchaphol Saranurak. Deter-ministic decremental reachability, scc, and shortest paths via directed expanders andcongestion balancing. arXiv preprint arXiv:2009.02584 , 2020. To appear at FOCS’20.103, 110, 111, 112[BGWN20] Aaron Bernstein, Maximilian Probst Gutenberg, and Christian Wulff-Nilsen. Near-optimal decremental sssp in dense weighted digraphs. Accepted to FOCS’2020 , 2020.9, 103[BHG + 21] Thiago Bergamaschi, Monika Henzinger, Maximilian Probst Gutenberg, Virginia Vas-silevska Williams, and Nicole Wein. New techniques and fine-grained hardness fordynamic near-additive spanners. Accepted to SODA’2021 , 2021. 103[BHS07] Surender Baswana, Ramesh Hariharan, and Sandeep Sen. Improved decremental al-gorithms for maintaining transitive closure and all-pairs shortest paths. Journal ofAlgorithms , 62(2):74–92, 2007. 9, 104[BK20] Sayan Bhattacharya and Janardhan Kulkarni. An improved algorithm for incremen-tal cycle detection and topological ordering in sparse graphs. In Proceedings of the ourteenth Annual ACM-SIAM Symposium on Discrete Algorithms , pages 2509–2521.SIAM, 2020. 103[Bod96] Hans L Bodlaender. A linear-time algorithm for finding tree-decompositions of smalltreewidth. SIAM Journal on computing , 25(6):1305–1317, 1996. 3[BPGS20] Aaron Bernstein, Maximilian Probst Gutenberg, and Thatchaphol Saranurak. Deter-ministic decremental reachability, scc, and shortest paths via directed expanders andcongestion balancing. Accepted to FOCS’2020 , 2020. 4, 6, 9, 14, 15[BPWN19] Aaron Bernstein, Maximilian Probst, and Christian Wulff-Nilsen. Decrementalstrongly-connected components and single-source reachability in near-linear time. In Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing ,pages 365–376, 2019. 9, 103[BR11] Aaron Bernstein and Liam Roditty. Improved dynamic algorithms for maintainingapproximate shortest paths under deletions. In Proceedings of the twenty-second an-nual ACM-SIAM symposium on Discrete Algorithms , pages 1355–1365. Society forIndustrial and Applied Mathematics, 2011. 1, 104[Che18] Shiri Chechik. Near-optimal approximate decremental all pairs shortest paths. In ,pages 170–181. IEEE, 2018. 4, 8, 9, 104[CHI + 16] Shiri Chechik, Thomas Dueholm Hansen, Giuseppe F Italiano, Jakub Łącki, and NikosParotsidis. Decremental single-source reachability and strongly connected componentsin o (m sqrt n) total update time. In , pages 315–324. IEEE, 2016. 9, 103[CHPQ20] Chandra Chekuri, Sariel Har-Peled, and Kent Quanrud. Fast lp-based approximationsfor geometric packing and covering problems. In Proceedings of the Fourteenth AnnualACM-SIAM Symposium on Discrete Algorithms , pages 1019–1038. SIAM, 2020. 5, 19[CK19] Julia Chuzhoy and Sanjeev Khanna. A new algorithm for decremental single-sourceshortest paths with applications to vertex-capacitated flow and cut problems. In Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing ,STOC 2019, pages 389–400, New York, NY, USA, 2019. ACM. 
2, 3, 4, 9, 16, 17, 86[CKM + 11] Paul Christiano, Jonathan A Kelner, Aleksander Madry, Daniel A Spielman, andShang-Hua Teng. Electrical flows, laplacian systems, and faster approximation ofmaximum flow in undirected graphs. In Proceedings of the forty-third annual ACMsymposium on Theory of computing , pages 273–282, 2011. 2[CLRS09] Thomas H Cormen, Charles E Leiserson, Ronald L Rivest, and Clifford Stein. Intro-duction to algorithms . MIT press, 2009. 117[CMSV17] Michael B. Cohen, Aleksander Madry, Piotr Sankowski, and Adrian Vladu. Negative-weight shortest paths and unit capacity minimum cost flow in e O ( m / log W ) time(extended abstract). In SODA , pages 752–771. SIAM, 2017. 2[Coh00] Edith Cohen. Polylog-time and near-linear work approximation scheme for undirectedshortest paths. Journal of the ACM (JACM) , 47(1):132–166, 2000. 104[CQ18] Chandra Chekuri and Kent Quanrud. Randomized mwu for positive lps. In Proceed-ings of the Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms ,pages 358–377. SIAM, 2018. 5, 19, 89124CQT20] Chandra Chekuri, Kent Quanrud, and Manuel R Torres. Fast approximation al-gorithms for bounded degree and crossing spanning tree problems. arXiv preprintarXiv:2011.03194 , 2020. 5, 19[CS20] Julia Chuzhoy and Thatchaphol Saranurak. Deterministic algorithms for decrementalshortest paths via layered core decomposition. Accepted to SODA’2021 , 2020. 2, 3, 4,9, 17, 65, 84, 85, 86, 101, 104, 105, 112[D + 59] Edsger W Dijkstra et al. A note on two problems in connexion with graphs. Nu-merische mathematik , 1(1):269–271, 1959. 1[Dan51] George B Dantzig. Application of the simplex method to a transportation problem. Activity analysis and production and allocation , 1951. 2[DI01] Camil Demetrescu and Giuseppe F Italiano. Fully dynamic all pairs shortest pathswith real edge weights. In Proceedings 42nd IEEE Symposium on Foundations ofComputer Science , pages 260–267. IEEE, 2001. 104[DI04] Camil Demetrescu and Giuseppe F Italiano. A new approach to dynamic all pairsshortest paths. Journal of the ACM (JACM) , 51(6):968–992, 2004. 104[Din70] EA Dinic. An algorithm for the solution of the max-flow problem with the polynomialestimation. Doklady Akademii Nauk SSSR , 194(4):1277–1280, 1970. 2[DLY20] Sally Dong, Yin Tat Lee, and Guanghao Ye. A nearly-linear time algorithm for linearprograms with small treewidth: A multiscale representation of robust central path. arXiv preprint arXiv:2011.05365 , 2020. 4, 105[DS08] Samuel I Daitch and Daniel A Spielman. Faster approximate lossy generalized flowvia interior point algorithms. In Proceedings of the fortieth annual ACM symposiumon Theory of computing , pages 451–460, 2008. 2[EFGW20] Jacob Evald, Viktor Fredslund-Hansen, Maximilian Probst Gutenberg, and ChristianWulff-Nilsen. Decremental APSP in directed graphs versus an adaptive adversary. CoRR , abs/2010.00937, 2020. 4, 104[EN19] Michael Elkin and Ofer Neiman. Hopsets with constant hopbound, and applicationsto approximate shortest paths. SIAM Journal on Computing , 48(4):1436–1480, 2019.104[ES81] Shimon Even and Yossi Shiloach. An on-line edge-deletion problem. Journal of theACM (JACM) , 28(1):1–4, 1981. 1, 25, 43, 59, 103[FHL08] Uriel Feige, MohammadTaghi Hajiaghayi, and James R Lee. Improved approximationalgorithms for minimum weight vertex separators. SIAM Journal on Computing ,38(2):629–657, 2008. 3[FJF56] Lester Randolph Ford Jr and Delbert Ray Fulkerson. Solving the transportationproblem. Management Science , 3(1):24–32, 1956. 2[Fle00] Lisa K Fleischer. 
Approximating fractional multicommodity flow independent of thenumber of commodities. SIAM Journal on Discrete Mathematics , 13(4):505–520, 2000.2, 3, 86, 87[FLS + 18] Fedor V Fomin, Daniel Lokshtanov, Saket Saurabh, Michał Pilipczuk, and MarcinWrochna. Fully polynomial-time parameterized computations for graphs and matricesof low treewidth. ACM Transactions on Algorithms (TALG) , 14(3):1–45, 2018. 3[GHZ20] Mohsen Ghaffari, Bernhard Haeupler, and Goran Zuzic. Hop-constrained obliviousrouting. CoRR , abs/2011.10446, 2020. 2125GK07] Naveen Garg and Jochen Koenemann. Faster and simpler algorithms for multicom-modity flow and other fractional packing problems. SIAM Journal on Computing ,37(2):630–652, 2007. 2, 3, 16, 82, 84, 86[GR98] Andrew V Goldberg and Satish Rao. Beyond the flow decomposition barrier. Journalof the ACM (JACM) , 45(5):783–797, 1998. 2, 17[GT88] Andrew V. Goldberg and Robert Endre Tarjan. A new approach to the maximum-flowproblem. J. ACM , 35(4):921–940, 1988. 2[GW20] Maximilian Probst Gutenberg and Christian Wulff-Nilsen. Decremental SSSP inweighted digraphs: Faster and against an adaptive adversary. In Shuchi Chawla, edi-tor, Proceedings of the 2020 ACM-SIAM Symposium on Discrete Algorithms, SODA2020, Salt Lake City, UT, USA, January 5-8, 2020 , pages 2542–2561. SIAM, 2020. 9,103[GWN20] Maximilian Probst Gutenberg and Christian Wulff-Nilsen. Deterministic algorithmsfor decremental approximate shortest paths: Faster and simpler. In SODA , pages2522–2541, 2020. 2, 4, 6, 7, 8, 9, 10, 41, 43, 104, 115[GWW20] Maximilian Probst Gutenberg, Virginia Vassilevska Williams, and Nicole Wein. Newalgorithms and hardness for incremental single-source shortest paths in directedgraphs. In Symposium on Theory of Computing , 2020. 1, 103[HDLTT01] Jacob Holm, Kristian De Lichtenberg, Mikkel Thorup, and Mikkel Thorup. Poly-logarithmic deterministic fully-dynamic algorithms for connectivity, minimum span-ning tree, 2-edge, and biconnectivity. Journal of the ACM (JACM) , 48(4):723–760,2001. 115[HK95] Monika Rauch Henzinger and Valerie King. Fully dynamic biconnectivity and tran-sitive closure. In Foundations of Computer Science, 1995. Proceedings., 36th AnnualSymposium on , pages 664–672. IEEE, 1995. 104[HKM + 12] Bernhard Haeupler, Telikepalli Kavitha, Rogers Mathew, Siddhartha Sen, andRobert E. Tarjan. Incremental cycle detection, topological ordering, and strong com-ponent maintenance. ACM Trans. Algorithms , 8(1):3:1–3:33, January 2012. 103[HKN14a] Monika Henzinger, Sebastian Krinninger, and Danupon Nanongkai. Decrementalsingle-source shortest paths on undirected graphs in near-linear total update time.In Foundations of Computer Science (FOCS), 2014 IEEE 55th Annual Symposiumon , pages 146–155. IEEE, 2014. 1, 2, 6, 7, 8, 9, 15, 41, 43, 104[HKN14b] Monika Henzinger, Sebastian Krinninger, and Danupon Nanongkai. Sublinear-timedecremental algorithms for single-source reachability and shortest paths on directedgraphs. In Proceedings of the forty-sixth annual ACM symposium on Theory of com-puting , pages 674–683. ACM, 2014. 103[HKN15] Monika Henzinger, Sebastian Krinninger, and Danupon Nanongkai. Improved algo-rithms for decremental single-source reachability on directed graphs. In InternationalColloquium on Automata, Languages, and Programming , pages 725–736. Springer,2015. 103[HKN16] Monika Henzinger, Sebastian Krinninger, and Danupon Nanongkai. Dynamic ap-proximate all-pairs shortest paths: Breaking the o(mn) barrier and derandomization. SIAM Journal on Computing , 45(3):947–1006, 2016. 
4, 6, 104[HKNS15] Monika Henzinger, Sebastian Krinninger, Danupon Nanongkai, and ThatchapholSaranurak. Unifying and strengthening hardness for dynamic problems via the online126atrix-vector multiplication conjecture. In Proceedings of the forty-seventh annualACM symposium on Theory of computing , pages 21–30. ACM, 2015. 1, 43[HKRL07] Mohammad Taghi Hajiaghayi, Robert D. Kleinberg, Harald Räcke, and Tom Leighton.Oblivious routing on node-capacitated and directed graphs. ACM Trans. Algorithms ,3(4):51, 2007. 2[HP19] Shang-En Huang and Seth Pettie. Thorup–zwick emulators are universally optimalhopsets. Information Processing Letters , 142:9–13, 2019. 104[IKLS17] Giuseppe F. Italiano, Adam Karczmarz, Jakub Lacki, and Piotr Sankowski. Decremen-tal single-source reachability in planar digraphs. In Hamed Hatami, Pierre McKenzie,and Valerie King, editors, Proceedings of the 49th Annual ACM SIGACT Symposiumon Theory of Computing, STOC 2017, Montreal, QC, Canada, June 19-23, 2017 ,pages 1108–1121. ACM, 2017. 103[Kin99] Valerie King. Fully dynamic algorithms for maintaining all-pairs shortest paths andtransitive closure in digraphs. In , pages 81–91, 1999.104[KL19] Adam Karczmarz and Jakub Lacki. Reliable hubs for partially-dynamic all-pairs short-est paths in directed graphs. In , volume 144, page 65. Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik, 2019.104[KŁ20] Adam Karczmarz and Jakub Łącki. Simple label-correcting algorithms for partiallydynamic approximate shortest paths in directed graphs. In Symposium on Simplicityin Algorithms , pages 106–120. SIAM, 2020. 104[KLOS14] Jonathan A Kelner, Yin Tat Lee, Lorenzo Orecchia, and Aaron Sidford. An almost-linear-time algorithm for approximate max flow in undirected graphs, and its mul-ticommodity generalizations. In Proceedings of the twenty-fifth annual ACM-SIAMsymposium on Discrete algorithms , pages 217–226. SIAM, 2014. 2, 83[KPSW19] Rasmus Kyng, Richard Peng, Sushant Sachdeva, and Di Wang. Flows in almost lineartime via adaptive preconditioning. In Proceedings of the 51st Annual ACM SIGACTSymposium on Theory of Computing, STOC 2019, Phoenix, AZ, USA, June 23-26,2019. , pages 902–913, 2019. 2, 105[KRV09] Rohit Khandekar, Satish Rao, and Umesh V. Vazirani. Graph partitioning usingsingle commodity flows. J. ACM , 56(4):19:1–19:15, 2009. 3, 105[KY14] Christos Koufogiannakis and Neal E Young. A nearly linear-time ptas for explicitfractional packing and covering linear programs. Algorithmica , 70(4):648–674, 2014.89[Łąc13] Jakub Łącki. Improved deterministic algorithms for decremental reachability andstrongly connected components. ACM Transactions on Algorithms (TALG) , 9(3):27,2013. 103[Li20] Jason Li. Faster parallel algorithm for approximate shortest path. In Proceedings ofthe 52nd Annual ACM SIGACT Symposium on Theory of Computing , pages 308–321,2020. 2[LN20] Jakub Lacki and Yasamin Nazari. Near-optimal decremental approximate multi-sourceshortest paths. CoRR , abs/2009.08416, 2020. 1127LRS13] Yin Tat Lee, Satish Rao, and Nikhil Srivastava. A new approach to computing maxi-mum flows using electrical flows. In Dan Boneh, Tim Roughgarden, and Joan Feigen-baum, editors, Symposium on Theory of Computing Conference, STOC’13, Palo Alto,CA, USA, June 1-4, 2013 , pages 755–764. ACM, 2013. 2[LS14] Yin Tat Lee and Aaron Sidford. Path finding methods for linear programming: Solvinglinear programs in o (vrank) iterations and faster algorithms for maximum flow. In , pages 424–433. IEEE, 2014. 2[LS20] Yang P Liu and Aaron Sidford. 
[Mad10] Aleksander Madry. Faster approximation schemes for fractional multicommodity flow problems via dynamic graph algorithms. In Proceedings of the forty-second ACM symposium on Theory of computing, pages 121–130, 2010. 2, 5, 16, 86
[Mad13] Aleksander Madry. Navigating central path with electrical flows: From flows to matchings, and back. In Foundations of Computer Science (FOCS), 2013 IEEE 54th Annual Symposium on, pages 253–262. IEEE, 2013. Available at http://arxiv.org/abs/1307.2205. 2
[Mąd18] Aleksander Mądry. Gradients and flows: Continuous optimization approaches to the maximum flow problem. 2018. 3
[Pen16] Richard Peng. Approximate undirected maximum flows in O(m polylog(n)) time. In Proceedings of the twenty-seventh annual ACM-SIAM symposium on Discrete algorithms, pages 1862–1867. SIAM, 2016. 2, 83, 100
[PG20] Maximilian Probst Gutenberg. Near-Optimal Algorithms for Reachability, Strongly-Connected Components and Shortest Paths in Partially Dynamic Digraphs. PhD thesis, University of Copenhagen, 2020. 114, 115
[PGWN20] Maximilian Probst Gutenberg and Christian Wulff-Nilsen. Fully-dynamic all-pairs shortest paths: Improved worst-case time and space bounds. In Proceedings of the Thirty-First Annual ACM-SIAM Symposium on Discrete Algorithms. SIAM, 2020. 104
[RS95] Neil Robertson and Paul D. Seymour. Graph minors. XIII. The disjoint paths problem. Journal of Combinatorial Theory, Series B, 63(1):65–110, 1995. 3
[RST14] Harald Räcke, Chintan Shah, and Hanjo Täubig. Computing cut-based hierarchical decompositions in almost linear time. In Proceedings of the Twenty-Fifth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2014, Portland, Oregon, USA, January 5-7, 2014, pages 227–238, 2014. 2
[RZ04] Liam Roditty and Uri Zwick. On dynamic shortest paths problems. In European Symposium on Algorithms, pages 580–591. Springer, 2004. 1, 104
[RZ08] Liam Roditty and Uri Zwick. Improved dynamic reachability algorithms for directed graphs. SIAM Journal on Computing, 37(5):1455–1471, 2008. 9, 103
[RZ12] Liam Roditty and Uri Zwick. Dynamic approximate all-pairs shortest paths in undirected graphs. SIAM Journal on Computing, 41(3):670–683, 2012. 9, 104
[RZ16] Liam Roditty and Uri Zwick. A fully dynamic reachability algorithm for directed graphs with an almost linear update time. SIAM Journal on Computing, 45(3):712–733, 2016. 104
[San05] Piotr Sankowski. Subquadratic algorithm for dynamic shortest distances. In International Computing and Combinatorics Conference, pages 461–470. Springer, 2005. 103
[She13] Jonah Sherman. Nearly maximum flows in nearly linear time. In FOCS, pages 263–269. IEEE, 2013. 2, 83, 100
[She17a] Jonah Sherman. Area-convexity, ℓ∞ regularization, and undirected multicommodity flow. In Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2017, Montreal, QC, Canada, June 19-23, 2017, pages 452–460, 2017. 2, 19
[She17b] Jonah Sherman. Generalized preconditioning and undirected minimum-cost flow. In Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 772–780. SIAM, 2017. 2
[Shi54] Alfonso Shimbel. Structure in communication nets. In Proceedings of the symposium on information networks, pages 119–203. Polytechnic Institute of Brooklyn, 1954. 1
[ST83] Daniel Dominic Sleator and Robert Endre Tarjan. A data structure for dynamic trees. J. Comput. Syst. Sci., 26(3):362–391, 1983. 59, 117
[SW19] Thatchaphol Saranurak and Di Wang. Expander decomposition and pruning: Faster, stronger, and simpler. In SODA, pages 2616–2635. SIAM, 2019. 10, 30, 112
[Tar83] Robert Endre Tarjan. Data structures and network algorithms. SIAM, 1983. 117
[Tho99] Mikkel Thorup. Undirected single-source shortest paths with positive integer weights in linear time. Journal of the ACM (JACM), 46(3):362–394, 1999. 1
[Tho05] Mikkel Thorup. Worst-case update times for fully-dynamic all-pairs shortest paths. In Proceedings of the thirty-seventh annual ACM symposium on Theory of computing, pages 112–119. ACM, 2005. 104
[TZ06] Mikkel Thorup and Uri Zwick. Spanners and emulators with sublinear distance errors. In Proceedings of the seventeenth annual ACM-SIAM symposium on Discrete algorithm, pages 802–809. Society for Industrial and Applied Mathematics, 2006. 104
[vdBLL+