Faster Decremental Approximate Shortest Paths via Hopsets with Low Hopbound
Jakub Łącki
Google
[email protected]

Yasamin Nazari
Johns Hopkins University∗
[email protected]

Abstract
Given a weighted undirected graph G = (V, E, w), a hopset H of hopbound β and stretch (1 + ǫ) is a set of edges such that for any pair of nodes u, v ∈ V, there is a path in G ∪ H of at most β hops whose length is within a (1 + ǫ) factor of the distance between u and v in G. We provide a decremental algorithm for maintaining hopsets with a polylogarithmic hopbound, with a total update time that matches the best known static algorithm up to polylogarithmic factors. Previously, the best known decremental hopset algorithm had a hopbound of 2^{Õ(log^{3/4} n)} [HKN, FOCS’14].

Our decremental hopset algorithm allows us to obtain the following improved decremental algorithms for maintaining shortest paths.

• (1 + ǫ)-approximate single-source shortest paths in amortized update time 2^{Õ(√log n)}. This improves super-polynomially over the best known amortized update time of 2^{Õ(log^{3/4} n)} by [HKN, FOCS’14].

• (1 + ǫ)-approximate shortest paths from a set of s sources in Õ(s) amortized update time, assuming that s = n^{Ω(1)} and |E| = n^{1+Ω(1)}. In this regime, we give the first decremental algorithm whose running time matches, up to polylogarithmic factors, the best known static algorithm.

• (2k − 1 + ǫ)-approximate all-pairs shortest paths (for any constant k ≥ 2) in Õ(n^{1/k}) amortized update time and O(k) query time. This improves over the best-known amortized update time of Õ(n^{1/k}) · (1/ǫ)^{O(√log n)} [Chechik, FOCS’18]. Moreover, we reduce the query time from O(log log(nW)) to a constant O(k), and hence eliminate the dependence on n and on the aspect ratio W.

∗This work was conducted in part while the author was an intern at Google.
Throughout this paper we use the notation Õ(f(n)) to hide factors of O(polylog(f(n))).

1 Introduction
A graph algorithm is called dynamic if it supports answering queries about a graph which is undergoing modifications, or, as we say in the following, updates. Each update is an edge deletion, insertion, or a weight change. In this paper, we are interested in designing decremental algorithms for distance problems in weighted graphs. In the decremental setting, the updates are only edge deletions or weight increases. This is as opposed to the incremental setting, in which edges can only be inserted, and the fully dynamic setting, in which we have both insertions and deletions.

We study the fundamental problem of maintaining shortest paths from a fixed set S of sources. We consider different variants of the problem, which differ in the size of S: the single-source shortest paths (SSSP) problem (|S| = 1), the all-pairs shortest paths (APSP) problem (|S| = n, where n is the number of vertices of the input graph), as well as the multi-source shortest paths (MSSP) problem (S is of arbitrary size), which is a generalization of the previous two. Specifically, given a weighted graph G = (V, E, w), we want to support the following operations: Delete((u, v)), where (u, v) ∈ E, which removes the edge (u, v); Distance(s, u), which returns an (approximate) distance between a source s ∈ S and any u ∈ V; and Increase((u, v), δ), which increases the weight of the edge (u, v) by δ > 0.

The best known algorithm for maintaining exact distances under deletions takes O(mn) total update time [SE81, Kin99], even if we limit ourselves to unweighted and undirected graphs. In fact, this bound matches a widely believed conditional lower bound [HKNS15]. Hence, a large body of work [Ber09, BR11, Che18, HKN14a, RZ12, HKN14b, HKN17] focused on maintaining approximate distances.
Allowing approximate distances enabled significant speedups in the running time. Following this line of work, we provide efficient decremental algorithms for maintaining (1 + ǫ)-approximate SSSP and MSSP and (2k − 1 + ǫ)-approximate APSP in weighted undirected graphs. All of our results are based on a new algorithm for decrementally maintaining a hopset with a small hopbound. Given a weighted undirected graph G = (V, E, w), a hopset H of hopbound β and stretch (1 + ǫ) (or a (β, ǫ)-hopset) is a set of edges such that for any pair of nodes u, v ∈ V, there is a path in G ∪ H of at most β hops whose length is within a (1 + ǫ) factor of the distance between u and v in G (see Definition 1 for a formal statement).

Hopsets, originally defined by [Coh00], are widely used in distance-related problems in various settings, such as parallel shortest path computation [Coh00, MPVX15, EGN19, EN19b], distributed shortest path computation [EN19a, Nan14], routing tables [EN18] and distance sketches [EN18, DN19]. In addition to their direct applications, hopsets have recently gained more attention (e.g. [BLP20, EN19a, ABP18, HP19]) as a fundamental object closely related to several other fundamental objects, such as additive (or near-additive) spanners and emulators [EN20]. While hopsets are extensively studied in other models of computation (e.g. distributed and parallel settings), their applicability in dynamic settings is still not well understood. The few exceptions include the use of hopsets in the state-of-the-art decremental SSSP algorithm for undirected graphs by Henzinger, Krinninger and Nanongkai [HKN14a], and the implicit hopsets considered in [Ber09, Che18]. We use some of the hopset techniques from parallel and distributed settings, and show that they can be useful for obtaining efficient dynamic algorithms.

One limitation of hopsets is that their utility is limited for sparse graphs.
Existential lower bounds imply that when ǫ < 1, a hopset with polylogarithmic hopbound must have size at least n^{1+Ω(1)} [ABP18]. This is why for very sparse graphs we cannot hope to obtain Õ(m)-time algorithms based on hopsets. However, when the graph is slightly denser (|E| = n^{1+Ω(1)}), when ǫ is larger [BLP20, EGN19], or when we use a slightly super-logarithmic hopbound, this limitation no longer applies. In this paper we take advantage of this fact and use hopsets to give improved algorithms for multiple decremental shortest paths problems in undirected graphs.

1.1 Our Results

Decremental Hopsets.
The main technical component of our result is a new decremental hopset algorithm. Formally, we show the following.
Theorem 1.
Given an undirected graph G = (V, E) with polynomial weights, subject to edge deletions, we can maintain a (β, ǫ)-hopset of size Õ(n^{1+1/(2^{k+1}−1)}) in total expected update time Õ((β/ǫ) · (m + n^{1+1/(2^{k+1}−1)}) · n^ρ), where β = O((log n/ǫ) · (k + 1/ρ))^{k+1/ρ+1}, 0 < ǫ < 1 and 1/(2^{k+1} − 1) ≤ ρ < 1/2.

Note that the above algorithm, as well as all our results that use the above construction, are randomized and work against an oblivious adversary.

Our decremental hopset construction covers a wide range of time/hopbound tradeoffs. Importantly, by setting ρ to be a constant, we get the first decrementally maintained hopset with polylogarithmic hopbound, with a total update time of Õ(mn^ρ), which matches (up to polylogarithmic factors) the running time of the best known static algorithm [EN19b, EN19a] for computing a hopset with polylogarithmic hopbound and (1 + ǫ) stretch.

In the decremental setting, to the best of our knowledge, the state-of-the-art hopset construction [HKN14a] has a hopbound of 2^{Õ(log^{3/4} n)}, and can be maintained in 2^{Õ(log^{3/4} n)} amortized update time. By setting ρ = (log log n)/√(log n), we can maintain a hopset with hopbound 2^{Õ(√log n)} in 2^{Õ(√log n)} amortized time. Thus, compared to [HKN14a], we improve both the hopbound and the amortized update time.

The starting point of our decremental hopset algorithm is a static hopset construction by [EN19b]. However, since computing this hopset in the static setting requires computing potentially long shortest paths, it is not clear how to maintain it efficiently in the decremental setting. To deal with that, we construct a new hopset that combines some of the properties of [EN19b] with various dynamic tools. Specifically, to compute our hopset, it suffices to run a number of single-source shortest paths computations up to a small depth in a sequence of graphs that we build iteratively. We provide an overview of this construction in Section 2.

SSSP.
By using our decremental hopset construction, we obtain an algorithm for decremental single-source shortest paths.
Theorem 2.
Given an undirected and weighted graph G = (V, E), there is a data structure for maintaining (1 + ǫ)-approximate distances from a source s ∈ V under edge deletions, where 0 < ǫ < 1 is a constant and |E| = n · 2^{Ω̃(√log n)}. The total expected update time of the data structure is Õ(m · 2^{Õ(√log n)}), and the query time is O(1).

The amortized update time of our algorithm over all m deletions is 2^{Õ(√log n)}. This improves upon the state-of-the-art algorithm of [HKN14a], whose amortized update time is 2^{Õ(log^{3/4} n)}. While the improvement is only by an n^{o(1)} factor, it is super-polynomial, since for any constant c > 0, 2^{Õ(√log n)} = O((2^{Õ(log^{3/4} n)})^c).

MSSP.
Our next result is a near-optimal algorithm for multi-source shortest paths.
Theorem 3 (MSSP). There is a data structure which, given a weighted undirected graph G = (V, E), explicitly maintains (1 + ǫ)-approximate distances from a set of s sources in G under edge deletions. Assuming that |E| = n^{1+Ω(1)} and s = n^{Ω(1)}, the total expected update time is Õ(sm). The data structure is randomized and works against an oblivious adversary.

If the weights are not polynomial, the log n factors are replaced with log W, and a factor of log W is added to the update time.

Our total update time matches, up to polylogarithmic factors, the best known static algorithm for computing (1 + ǫ)-approximate distances from s sources for a wide range of graph densities. While for very dense graphs using algorithms based on fast matrix multiplication is faster, the running time of our decremental algorithm matches the best known results in the static setting (up to polylogarithmic factors) whenever ms = n^δ for a suitable constant δ > 1.

In the dynamic setting, our algorithm improves upon the solution obtained by running the algorithm of Henzinger, Krinninger and Nanongkai [HKN14a] independently from each source, which gives a total update time of O(sm · 2^{Õ(log^{3/4} n)}). The advantage of our algorithm is that it decrementally maintains a hopset of polylogarithmic hopbound in mn^{o(1)} time, which then allows it to maintain approximate SSSP from each source in Õ(m) time. In contrast, the algorithm of [HKN14a] maintains a hopset of hopbound 2^{Õ(log^{3/4} n)}, which, if one simply applies existing techniques, results in a total update time of m · 2^{Õ(log^{3/4} n)} per source. In the general case, i.e., for sparse graphs, the update bound of our algorithm is sm · 2^{Õ(√log n)}, which is still better than the bound obtained by [HKN14a].

APSP.
Finally, we show that by maintaining both a hopset and a Thorup–Zwick distance oracle, we can get the following tradeoff for approximating the distance between any pair of nodes.
Theorem 4 (Approximate APSP). For any constant integer k ≥ 2, there is a data structure that can answer (2k − 1 + ǫ)-approximate distance queries in a weighted undirected graph G = (V, E, w) subject to edge deletions. The total expected update time over any sequence of edge deletions is Õ(mn^{1/k}) and the expected size of the data structure is Õ(m + n^{1+1/k}). Each query for the distance between two vertices is answered in O(k) worst-case time.

The currently best known bound for decremental APSP with a similar stretch is due to Chechik [Che18]. The total update time in [Che18] is Õ(mn^{1/k}) · (1/ǫ)^{O(√log n)}, and the query time is O(log log(nW)), where W is the largest weight. Our update time improves over this bound by eliminating the (1/ǫ)^{O(√log n)} factor. Note that the improvement in the running time holds for constant k. When k = ω(1), the running time of our algorithm roughly matches the one obtained in [Che18]. Our results match the best known static algorithm with the same tradeoff (up to a (1 + ǫ) factor in the stretch and polylogarithmic factors in time) by Thorup–Zwick [TZ05]. In addition, the query time of our algorithm is independent of n and of the aspect ratio (the ratio between the largest and smallest edge weight).

Prior to [Che18], Roditty and Zwick [RZ04] gave an algorithm for maintaining Thorup–Zwick distance oracles in total time Õ(mn), with stretch (2k − 1 + ǫ) and O(k) query time, for unweighted graphs. Later on, Bernstein and Roditty [BR11] gave a decremental algorithm for maintaining Thorup–Zwick distance oracles in Õ(n^{2+1/k+o(1)}) time using emulators, also only for unweighted graphs.

Most of the previous works on dynamic distance computation are based on algorithms constructing a sparse emulator (e.g. [Ber09, BR11, Che18]).
For a graph G = (V, E), an emulator H′ = (V, E′) is a graph such that for any pair of nodes x, y ∈ V, there is a path in H′ that approximates the distance between x and y in G (possibly with both multiplicative and additive factors). More importantly, the efficient dynamic algorithms for maintaining emulators and hopsets have some significant differences. At a high level, an emulator approximates distances without using the original graph edges, and hence we can restrict the computation to a sparser graph, whereas for using and maintaining hopsets we need to use the edges of the original graph as well. On the other hand, hopsets allow for stretch as small as (1 + ǫ), which sparse emulators cannot provide.

The k here should not be confused with the parameter k in the hopset size.

Given a weighted undirected graph G = (V, E, w) and a pair u, v ∈ V, we denote the (weighted) shortest path distance by d_G(u, v). We denote by d^{(h)}_G(u, v) the length of the shortest path between u and v among the paths that use at most h hops, and call this the h-hop limited distance between u and v.

Definition 1.
Let G = (V, E, w) be a weighted undirected graph. Fix d, ǫ > 0 and an integer β ≥ 1. A (d, β, ǫ)-hopset is a graph H = (V, E(H), w_H), such that:
• For each (u, v) ∈ E(H), w_H(u, v) ≥ d_G(u, v).
• For each u, v ∈ V such that d_G(u, v) ≤ d, we have d^{(β)}_{G∪H}(u, v) ≤ (1 + ǫ) d_G(u, v).
We say that β is the hopbound of the hopset and ǫ is the stretch of the hopset. We also use (β, ǫ)-hopset to denote an (∞, β, ǫ)-hopset. Finally, for any finite d, we say that a (d, β, ǫ)-hopset is a d-restricted hopset.

In analyzing dynamic algorithms we sometimes also use a time subscript t to denote a distance (or a weight) after the first t updates. In particular, we use d_{t,G}(u, v) to denote the distance between u and v after t updates, and similarly use d^{(h)}_{t,G}(u, v) to denote the h-hop limited distance between u and v at time t.

The starting point of our algorithm is a known static hopset construction [EN19b, HP19]. We first review this construction. As we shall see, maintaining this data structure dynamically directly would require an update time of up to O(mn). We therefore give another hopset construction that captures some of the properties of the hopsets of [EN19b, HP19], but can be maintained efficiently in a decremental setting. We then explain how, by hierarchically maintaining a sequence of data structures, we can obtain a near-optimal time and stretch tradeoff dynamically.

In this section we outline the (static) hopset construction of Elkin and Neiman [EN19b] (which is similar to [HP19]). We will later explain how we can make modifications that allow us to maintain a similar hopset dynamically. Given a weighted graph G = (V, E, w), an integer 1 ≤ k ≤ log log n and ρ > 0, we show the construction of a (β, ǫ)-hopset of size O(n^{1+1/(2^{k+1}−1)}) and hopbound β = O((k + 1/ρ + 1)/ǫ)^{k+1/ρ+1}.

We define sets V = A_0 ⊇ A_1 ⊇ ... ⊇ A_{k+1/ρ+1} = ∅. Let ν = 1/(2^{k+1} − 1).
Each set A_{i+1} is obtained by sampling each element of A_i with probability q_i = max(n^{−2^i·ν}, n^{−ρ}), where ρ is a parameter that determines a tradeoff between the hopbound and the running time.

Fix 0 ≤ i < k + 1/ρ + 1. Then, for every vertex u ∈ A_i \ A_{i+1}, let p(u) ∈ A_{i+1} be the node of A_{i+1} which is closest to u. We define the bunch of u to be the set B(u) := {v ∈ A_i : d(u, v) < d(u, A_{i+1})}.

In [EN19b] two algorithms with different sampling probabilities are given, where one removes a factor of k in the size. This factor does not impact our overall running time, so we will use the simpler version.

We also use the related notion of the cluster C(v) of a node v ∈ A_i \ A_{i+1}, defined as C(v) = {u ∈ V : d(u, v) < d(u, A_{i+1})}. Clusters have the following useful property.

Let u ∈ C(v), and let z ∈ V be on a shortest path between v and u. Then z ∈ C(v).

Proof. Let v ∈ A_i. If z ∉ C(v), then by definition d(z, A_{i+1}) ≤ d(v, z). On the other hand, since z is on the shortest path between u and v: d(u, A_{i+1}) ≤ d(u, z) + d(z, A_{i+1}) ≤ d(u, z) + d(z, v) = d(u, v), which contradicts the fact that u ∈ C(v).

As we will see, this property is important for bounding the running time. The hopset is then obtained by adding an edge (u, v) for each u ∈ A_i \ A_{i+1} and v ∈ B(u) ∪ {p(u)}, and setting the weight of this edge to d(u, v). These distances can be computed by maintaining the clusters. As we will see, when maintaining the clusters (rather than the bunches) we scan more edges than what is stored in the hopset. Hence, the update time of our dynamic algorithms is determined by the number of clusters a node belongs to, rather than by the size of the hopset. This is because, unlike for an emulator, for maintaining distances using a hopset we also need to consider the edges in G, and the small hopbound, rather than sparsity, is the key to efficiency.

Theorem 5 ([EN19b]).
There is an algorithm that, given a weighted and undirected graph G = (V, E), and parameters 1 ≤ k ≤ log log n − 1 and 1/(2^{k+1} − 1) ≤ ρ < 1/2, computes a (β, ǫ)-hopset of size O(n^{1+1/(2^{k+1}−1)}), where β = O((k + 1/ρ)/ǫ)^{k+1/ρ+1}. It runs in O((n^ρ/ρ) · (m + n log n)) expected time.

We do not directly use this static construction, but since we use its properties, we sketch a proof of the hopset properties in Appendix A. One important component of this algorithm is the modified Dijkstra’s algorithm that we will also utilize in our dynamic algorithms. This algorithm was proposed by Thorup and Zwick [TZ05], and it allows us to construct the bunches and clusters for level i in O((m + n log n)/q_i) (expected) time, where q_i is the subsampling probability used for constructing the set A_{i+1}. Elkin and Neiman [EN19b] proposed a construction where q_i ≥ n^{−ρ}, which allowed them to bound the running time of computing bunches and clusters. Similar ideas combined with a dynamic algorithm by [RZ04] will let us bound the update time for maintaining the clusters.

The hopset of [EN19b] has some structural similarities with the emulators of [TZ06]. One main difference, as we discussed, is that the sampling probabilities are adjusted (bounded from below by n^{−ρ}) to allow for an efficient construction of these hopsets in various models, at the cost of slightly weaker size/hopbound tradeoffs. We also need these adjustments for our efficient decremental algorithms.

Before we give our full hopset construction, we show how we can construct a d-restricted hopset, i.e. a hopset that guarantees hop-bounded paths only between nodes within distance d. We then use this algorithm to construct a sequence of d-restricted hopsets for exponentially increasing values of d, at each step using the hopsets constructed so far. In order to maintain a d-restricted hopset dynamically, we start with a decremental algorithm of Roditty and Zwick [RZ04]. Their techniques allow us to maintain the clusters and bunches as defined in Section 2.1.
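To make the cluster computation concrete, here is a small, purely illustrative Python sketch of how the clusters of one level can be computed with the truncated (Thorup–Zwick style) Dijkstra described above. The graph representation and function names are our own, and the sketch covers only the static computation for a single level, with no sampling and no dynamic updates:

```python
import heapq

def dists_to_set(adj, sources):
    """Multi-source Dijkstra: for every node, the distance to the nearest source."""
    dist = {v: float('inf') for v in adj}
    pq = []
    for s in sources:
        dist[s] = 0
        heapq.heappush(pq, (0, s))
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

def clusters_for_level(adj, A_i, A_next):
    """Compute C(v) = {u : d(u, v) < d(u, A_next)} for every v in A_i \\ A_next,
    by a Dijkstra from v that is truncated at the threshold d(u, A_next).
    Correctness of the truncation relies on the subpath property proved above:
    every node on a shortest path from v to a cluster member is itself in C(v)."""
    d_next = dists_to_set(adj, A_next) if A_next else {v: float('inf') for v in adj}
    clusters = {}
    for v in A_i - A_next:
        dist = {v: 0}
        pq = [(0, v)]
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist.get(u, float('inf')):
                continue
            for x, w in adj[u]:
                nd = d + w
                # relax only while strictly closer to v than to the next level A_{i+1}
                if nd < d_next[x] and nd < dist.get(x, float('inf')):
                    dist[x] = nd
                    heapq.heappush(pq, (nd, x))
        clusters[v] = dist  # exact distances d(u, v) for all u in C(v)
    return clusters
```

Since u ∈ C(v) exactly when v ∈ B(u), the bunches, and hence the hopset edges (u, v) of weight d(u, v), can then be read off by inverting the returned clusters.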
However, we need to modify their algorithm in several ways. The first modification is adjusting the sampling probabilities to match the probabilities we gave in Section 2.1. Note that while the clusters we consider are slightly different from those used in [RZ04] (even if we ignore the difference in sampling probabilities), each cluster we use is a subset of a corresponding cluster defined in [RZ04].

By extending their algorithm and analysis to our setting, we can maintain a d-restricted hopset decrementally in Õ(dmn^ρ) total time, where 0 < ρ < 1/2 is a parameter that balances the tradeoff between the hopbound and the update time, as discussed in Section 2.1. This means we can efficiently maintain a d-restricted hopset when d is small. However, for large d, such a running time is prohibitive. The main new technical component of our construction is a hierarchical algorithm that iteratively constructs restricted hopsets on a sequence of scaled graphs. Next, we explain this hierarchical construction.

Path doubling. Our algorithm maintains a sequence of graphs H_0, . . . , H_{log W} with the following property. For each 0 ≤ j ≤ log W, ∪_{r=0}^{j} H_r is a (2^j, β, (1 + ǫ)^j)-hopset of G. Note that for 0 ≤ j ≤ log β we can set H_j = ∅, since G covers these scales (w.l.o.g. the weights in G are at least 1, so if d_G(u, v) ≤ β, there is a shortest path between u and v of at most β hops). To maintain the graphs H_j, we prove the following lemma.

Lemma 6. Consider a graph G = (V, E, w) subject to edge deletions. Assume that we have maintained H̄_j := H_0 ∪ ... ∪ H_j, which is a (2^j, β, (1 + ǫ)^j)-hopset of G.
Then, there is a data structure that, given the sequence of changes to G and H̄_j, maintains a graph H_{j+1}, such that H̄_j ∪ H_{j+1} is a (2^{j+1}, β, (1 + ǫ)^{j+1})-hopset of G.

The data structure can be maintained in Õ((m + ∆) · n^ρ · β/ǫ) total time, where m is the initial size of G, ∆ is the total number of edges inserted into H̄_j over all updates, β = (1/(ǫ · ρ))^{O(1/ρ)}, and 0 < ǫ < 1, 0 < ρ < 1/2 are parameters.

Note that the lemma does not hold for an arbitrary restricted hopset; we need to use special properties of our construction to prove it.

In our construction we use G ∪ (∪_{r=0}^{j−1} H_r) to construct H_j. Note that by our assumption it suffices if H_j is a hopset for paths of length in the range [2^{j−1}, 2^j), since shorter paths are already taken care of by H_0, . . . , H_{j−1}. The important observation is that each path π of length in [2^{j−1}, 2^j) in G can be approximated (within a (1 + ǫ)^{j−1} factor) by a path of 2β + 1 hops in G ∪ (∪_{r=0}^{j−1} H_r). This follows from the fact that any such π can be obtained by concatenating paths π_1, π_2 and π_3, where π_1 and π_3 have length at most 2^{j−1} (so we can apply the property of a 2^{j−1}-restricted hopset) and π_2 consists of a single edge. Hence, the subproblem that we need to solve for each distance scale [2^{j−1}, 2^j) is computing a hopset for distances in [2^{j−1}, 2^j), knowing that the length of each such shortest path in G can be approximated by a path in G ∪ (∪_{r=0}^{j−1} H_r) consisting of at most 2β + 1 hops.

The path doubling idea has been used in hopset constructions in distributed/parallel models (e.g. [Coh00, EN19a, EN19b]), but to the best of our knowledge this is the first use of this approach in a dynamic setting. While implementing this idea in parallel/distributed settings is relatively straightforward, it is not immediately clear how to utilize it in dynamic settings. To do this we need to maintain the hopsets on a sequence of scaled graphs. We first review a scaling idea and then define this sequence.

Scaling.
We review a scaling technique widely used in dynamic settings (e.g. [Ber09, BR11, HKN14a]), which we apply repeatedly and iteratively during the process of adding hopset edges. This idea can be summarized in the following scaling scheme due to Klein and Subramanian [KS97], which, roughly speaking, says that finding shortest paths of length in [2^{j−1}, 2^j) and at most ℓ hops can be (approximately) reduced to finding paths of length at most O(ℓ/ǫ) in a graph with small integral weights. This is done by a rounding procedure that adds an additive error of at most ǫR/ℓ to each edge e. Then, for a path of ℓ hops, the overall stretch will be (1 + ǫ).

Lemma 7 ([KS97]). Let G = (V, E, w) be a weighted undirected graph. Let R ≥ 1 and ℓ ≥ 1 be integers and ǫ > 0. We define the scaled graph to be the graph Scale(G, R, ǫ, ℓ) := (V, E, ŵ), such that ŵ(e) = ⌈w(e)/η(R, ℓ)⌉, where η(R, ℓ) = ǫR/ℓ.

Then, for each edge e ∈ E we have η(R, ℓ) · ŵ(e) ≤ w(e) + ǫR/ℓ, and for any path π in G such that π has at most ℓ hops and weight R ≤ w(π) ≤ 2R, we have
• ŵ(π) ≤ ⌈2ℓ/ǫ⌉ + ℓ,
• w(π) ≤ η(R, ℓ) · ŵ(π) ≤ (1 + ǫ) w(π).

By using the above scaling in the construction of the data structure of Lemma 6, we can effectively reduce the problem that the data structure is solving to the problem of maintaining an O(β/ǫ)-restricted hopset in a graph with small integral weights in a dynamic setting. Note that we set β = polylog(n). Since there are edge insertions into the hopset, and hence into each scaled graph, there are further challenges in how these ideas can be combined in decremental settings. We will explain later that for handling edge insertions we need to use another data structure called the monotone ES-tree ([HKN16]). We need to show that by combining the estimates from these different data structures we still get a hopset with the desired properties.

Handling insertions.
While the algorithm of Roditty and Zwick [RZ04] only works in the decremental setting, in our case we need to extend it to handle edge insertions. This is because we run it on the graph G ∪ (∪_{r=0}^{j−1} H_r) (after applying the scaling of Lemma 7). While edges of G can only be deleted, new edges may be added to some of the hopsets H_r.

We deal with this issue as follows. The algorithm of Roditty and Zwick [RZ04] decrementally maintains a collection of single-source shortest path trees (up to a bounded depth) using the Even–Shiloach algorithm (ES-tree) [SE81]. We modify the algorithm by effectively replacing each ES-tree by a monotone ES-tree, proposed by [HKN14a, HKN16]. The monotone ES-tree, in addition to supporting edge deletions, also supports the edge insertion operation in a limited way. Namely, whenever an edge (u, v) is inserted and the insertion of the edge would cause a distance decrease in the tree, we do not update the currently maintained distance estimates. This change keeps the running time roughly the same as in the decremental setting.

The main challenge here lies in analyzing the hopset stretch. While [HKN14a] analyzed the stretch incurred by running monotone ES-trees on a hopset, the proof relied on the properties of the specific hopset used in their algorithm. Since the hopset we use is quite different, we need a different analysis, which combines the static hopset analysis with the ideas used in [HKN14a], and also takes into account the stretch incurred due to the fact that the restricted hopsets are maintained on the scaled graphs. In our final algorithm, we need to run this restricted hopset algorithm on the sequence of scaled graphs, so that we can utilize the smaller scale hopsets in a hierarchical way to get our improved update time.

Putting it together. We now go back to the setting of Lemma 6. Given a 2^j-restricted hopset H̄_j = H_0 ∪ ...
∪ H_j for distances up to 2^j, we can now construct a graph G_j by applying the scaling of Lemma 7 to G ∪ H̄_j and setting R = 2^j, ℓ = 2β + 1. Then we can efficiently maintain an ℓ-restricted hopset on G_j, and by Lemma 6 we can use this to update H_{j+1}. Importantly, ℓ is independent of R, and thus we can eliminate the factor of R and get Õ(βmn^ρ) total update time.

Our final algorithm is a hierarchical construction that maintains the restricted hopsets on the scaled graphs and the original graph simultaneously. Since we are maintaining hopsets on scaled graphs, we will lose small factors in the stretch, but we can show that this has little impact on our overall hopbound/update time tradeoff. For obtaining our near-optimal time and hopbound tradeoff, we need to carefully combine the ideas described and show that the monotone ES-tree ideas can be applied to these specific insertions.

We rely on a threefold inductive construction and analysis that combines the pieces we have described:
1. An induction on i, the iterations of the base hopset construction, which controls the hopset tradeoffs.
2. An induction on j, which allows us to cover all ranges of distances [2^j, 2^{j+1}] by maintaining distances in the appropriate scaled graphs.
3. An induction on time t, which allows us to handle insertions by using the estimates from previous updates in order to keep the distance estimates monotone.

The overall stretch argument needs to deal with several error factors in addition to the base hopset stretch. First, there is the error incurred by using hopsets for smaller scales, which we deal with by maintaining our hopsets with ǫ′ = ǫ/log n. This introduces polylogarithmic factors in the hopbound. The second type of error comes from the fact that the restricted hopsets are maintained for scaled graphs, which implies that the clusters are only approximately maintained on the original graph. This can also be resolved by further adjusting ǫ′.
Finally, since we use an idea similar to the monotone ES-tree of [HKN14a, HKN16], we may set the level of the nodes in each tree to be larger than what it would be in a static hopset. But we argue that the specific types of insertions in our algorithm still preserve the stretch. At a high level, this is because in case of a decrease we use an estimate from time t − 1, which we can show inductively has the desired stretch.

We note that while the use of the monotone ES-tree and the structure of the clusters in our construction are similar to [HKN14a], our algorithm has several important differences. Other than using a different and more general base (static) hopset, we use a different approach to maintain the hopset efficiently, namely path doubling and maintaining restricted hopsets on the scaled graphs. Among other things, in [HKN14a] a different notion of approximate balls is used, which is more lossy with respect to the hopbound/stretch tradeoffs. By maintaining restricted hopsets on scaled graphs, we are also effectively preserving approximate balls in the original graph, and as explained above the error accumulation combines nicely with the path-doubling idea. Finally, [HKN14a] uses an edge sampling idea to bound the update time, which we can avoid by utilizing the sampling probability adjustments of [EN19b] and the ideas in [RZ04].

Our algorithms for maintaining approximate distances under edge deletions are as follows. First, we maintain a (β, ǫ)-hopset. Then, we use the hopset and Lemma 7 to reduce the problem to the problem of approximately maintaining short distances from a single source. For our applications to MSSP and APSP the best update time is obtained by setting the hopbound to be polylogarithmic, whereas for SSSP the best choice is β = 2^{Õ(√log n)}. Using this idea for SSSP and MSSP mainly involves using the monotone ES-tree ideas described earlier.
Maintaining the APSP distance oracle is slightly more involved, but it uses the same techniques as our restricted hopset algorithm. This algorithm is based on maintaining the Thorup–Zwick distance oracle [TZ05] more efficiently. At a high level, we maintain both a (β, ǫ)-hopset and a Thorup–Zwick distance oracle simultaneously, and balance out the time required for these two algorithms. The hopset is used to improve the time required for maintaining the distance oracle from Õ(mn) (as shown in [RZ04]) to Õ(βmn^{1/k}), but with a slightly weaker stretch of (2k − 1 + ǫ). Querying distances is then the same as in the static algorithm of [TZ05], and takes O(k) time.

This is also one reason why we get an improvement in the amortized single-source shortest paths update time.

In this section we provide our decremental hopset algorithms. Our goal is to implement the hopset algorithm described in Section 2.1 dynamically. In Section 3.2, we explain how we can adapt ideas by Roditty and Zwick [RZ04] to obtain an algorithm for computing a d-restricted hopset. The total running time of this algorithm is Õ(dmn^ρ) (where ρ < 1/2 is a constant), which is undesirable for large values of d. We will then improve the running time to Õ(mn^ρ) using scaling and path-doubling ideas. Recall that our algorithm maintains a sequence of graphs H_0, . . . , H_{log W}, where for each 0 ≤ j ≤ log W, H_0 ∪ . . . ∪ H_j is a 2^j-restricted hopset of G. Instead of computing each H_j separately, we use G ∪ (∪_{r=0}^{j−1} H_r) to construct H_j. We observe that, at the cost of some small approximation errors, any path of length in [2^{j−1}, 2^j) in G can be approximated by a path of at most 2β + 1 hops in G ∪ (∪_{r=0}^{j−1} H_r). To use this idea we will prove the following main lemma as a building block for our final hopset.

Lemma 6. Consider a graph G = (V, E, w) subject to edge deletions. Assume that we have maintained H̄_j := H_0 ∪ ... ∪ H_j, which is a (2^j, β, (1 + ǫ)^j)-hopset of G.
Then, there is a data structure that, given the sequence of changes to G and ¯H_j, maintains a graph H_{j+1} such that ¯H_j ∪ H_{j+1} is a (2^{j+1}, β, (1 + ǫ)^{j+1})-hopset of G. The data structure can be maintained in Õ((m + ∆)n^ρ · β/ǫ) total time, where m is the initial size of G, ∆ is the total number of edges inserted into ¯H_j over all updates, β = (1/(ǫ · ρ))^{O(1/ρ)}, and 0 < ǫ < 1, 0 < ρ < 1 are parameters.

There are two main challenges that we need to address for proving this lemma. First, we would like to make the running time independent of the scale bound 2^j, which is what we would get by directly using the algorithm of [RZ04]. To that end, we are going to run our algorithm on a rescaled graph, which allows us to only maintain distances up to depth O(β/ǫ). This relies on having the 2^j-restricted hopset ¯H_j, which allows us to maintain the hopset ¯H_{j+1}. Second, while G is undergoing deletions, H_j may be undergoing edge insertions. In Section 3.1 we explain how such insertions can be handled using the monotone ES tree algorithm (based on [HKN14a]). In Section 3.3 we use the properties of this algorithm to prove Lemma 6.

We first start by showing how we can maintain distances from a single source. Then we extend this algorithm to maintain the clusters, and hence the bunches B(u) for all u ∈ V, which gives us a restricted hopset for each distance scale.

In this section, we explain the monotone ES tree idea and how it can be used for maintaining single-source shortest paths up to a given depth D. In Section 3.2 we explain how this idea can be used in maintaining a restricted hopset. Using the monotone ES tree ideas may impact the stretch, and they clearly do not apply to all types of insertions, but only to insertions with certain structural properties. In Section 3.3, we will prove that the stretch guarantee holds specifically for the insertions performed by our restricted hopset algorithm.
We show how to handle edge insertions by using a variant of the monotone ES-tree algorithm [HKN14a] (further used in the hopset construction of [HKN16]). This algorithm is given as Algorithm 1. The idea in a monotone ES tree is that if an insertion of an edge (u, v) would cause the level of a node v to decrease, we do not decrease the level. In this case we say the edge (u, v) and the node v are stretched. More formally, a node v is stretched when L(v) > min_{(x,v)∈E} L(x) + w(x, v).

We observe multiple properties of the monotone ES tree algorithm, as observed by [HKN14a, HKN16], that will be helpful in analyzing the stretch later:
• The level of a node never decreases.
• Only an inserted edge can be stretched.
• While an edge is stretched, its level remains the same. In other words, a stretched edge is not going to get stretched again unless it is deleted (or undergoes a weight increase).

Also observe that we never underestimate the distances. This is clearly true for any edge weights obtained by the rounding in Lemma 7. It is also easy to see that this is true for the stretched edges, for the following reason: for any node v, the algorithm maintains the invariant that L(s, v) ≥ min_{(x,v)∈E} L(s, x) + w(x, v). In other words, L(s, v) is either an estimate based on rounding that is at least d_G(s, v), or it is larger than such an estimate.

Algorithm 1: Maintaining a monotone ES tree up to depth D on G. Note that edge deletion can be achieved by setting the edge weight to ∞.
Function Init(G, s, D):
  E := E(G) ∪ {e_v = (s, v) : v ∈ V(G) \ {s}, w(e_v) = D + 1}  /* This ensures that distances are maintained up to level D */
  for v ∈ V do L(s, v) := 0
  for v ∈ V do Update(T(s), v)

Function InsertEdge(T(s), (a, b), c):  /* Insert an edge in the tree rooted at s */
  E := E ∪ {(a, b)}
  w(a, b) := c
  Update(T(s), b)

Function Update(T(s), v):
  upd := min_{(x,v)∈E} L(s, x) + w(x, v)
  if v = s or L(s, v) ≥ upd then return  /* Node v is stretched. */
  L(s, v) := upd
  for (v, y) ∈ E(G) do Update(T(s), y)

Lemma 8. Algorithm 1 processes any sequence of updates in O((m + ∆)D) overall update time on a graph with m edges, where ∆ is the number of edge insertions.

Proof sketch. The running time analysis of the algorithm follows from an argument similar to the analysis of the classic ES tree algorithm [SE81, Kin99]. The total time for updating distances up to depth D is O((m + ∆)D): roughly speaking, the edges incident to each node v are scanned each time the level of v changes, and since levels can only increase, the level of a node within depth D of the source can change at most D times. Furthermore, ∆ is the number of added edges, which also need to be scanned on each update. Summing over the edges incident to all nodes, the claim follows.
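To make the level dynamics concrete, here is a minimal Python sketch of a depth-bounded monotone ES tree in the spirit of Algorithm 1 (the class name and representation are ours, not from the paper): levels only increase under deletions, and an insertion that would lower a level is ignored, leaving the inserted edge stretched.

```python
import heapq

class MonotoneESTree:
    """Sketch of a depth-bounded monotone ES tree (cf. Algorithm 1).

    Levels L(s, v) never decrease. A deletion may raise levels, capped at
    D + 1 (playing the role of the virtual weight-(D+1) edges); an
    insertion never lowers a level, so the inserted edge may be stretched."""

    def __init__(self, n, edges, s, D):
        self.n, self.s, self.D = n, s, D
        self.adj = {v: {} for v in range(n)}
        for (u, v), w in edges.items():
            self.adj[u][v] = w
            self.adj[v][u] = w
        self.L = self._initial_levels()

    def _initial_levels(self):
        # plain Dijkstra, with every level capped at D + 1
        dist = {v: self.D + 1 for v in range(self.n)}
        dist[self.s] = 0
        pq = [(0, self.s)]
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist[u]:
                continue
            for v, w in self.adj[u].items():
                if d + w < dist[v]:
                    dist[v] = d + w
                    heapq.heappush(pq, (d + w, v))
        return dist

    def delete_edge(self, u, v):
        self.adj[u].pop(v, None)
        self.adj[v].pop(u, None)
        self._update(u)
        self._update(v)

    def insert_edge(self, u, v, w):
        self.adj[u][v] = w
        self.adj[v][u] = w
        # monotone rule: do NOT lower L even if the new edge is shorter

    def _update(self, v):
        if v == self.s:
            return
        upd = min([self.L[x] + w for x, w in self.adj[v].items()]
                  + [self.D + 1])
        if upd > self.L[v]:          # levels only ever increase
            self.L[v] = upd
            for y in list(self.adj[v]):
                self._update(y)
```

On a unit-weight path 0–1–2 with s = 0 and D = 5, deleting (0, 1) drives both levels to D + 1 = 6; re-inserting (0, 1) leaves L(1) = 6, i.e., the edge stays stretched (an overestimate, never an underestimate) until a further update.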
Third, we use the monotone ES tree ideas described in Section 3.1 to handleinsertions.We argue that the algorithm of [RZ04] can be extended to construct a ( d, β, ǫ ) -hopset. Wewill then run this algorithm on a sequence of scaled graphs, and show how this can be done efficientlyusing the previously added hopset edges.At a high-level, the idea is to run Even-Shiloach [SE81] trees for each node u ∈ A i \ A i +1 tocompute B ( u ) . In order to bound the running time, we use the same modification as in the modifiedDijkstra algortithm [RZ04], which allows us to bound the number of shorest path explorations thatvisit each node. However there are few challenges in using these ideas. First, while in the staticalgorithm each node overlaps with at most ˜ O ( n ρ ) clusters, in dynamic settings this holds at anypoint in time but does not immediately hold for a sequence of updates, since nodes keep on changingclusters. Also, it is not enough to maintain Even-Shiloach trees from all nodes w ∈ A i \ A i +1 onclusters C ( w ) , since after a deletion a node may join a new cluster and hence a node may leave orjoin a tree. Roditty and Zwick [RZ04] provided an algorithm that handles this issue of nodes leavingor joining. By extending their techniques we can maintain the clusters and hence bunches and thehopset edges. Later when we use this algorithm for obtaining an efficient algorithm for maintaininga sequence of restricted hopsets, there will be other edges insertion (from smaller scales) that alsoneed to be handled. For that type of insertion we need to use a different approach that is similar tothe monotone ES tree ideas in [HKN14a]. Next, we sketch the algorithm and analysis of [RZ04] andexplain how it can be extended for maintaining a restricted hopset. More details of the algorithmcan be found in Appendix B.We sample sets V = A ⊇ A ⊇ ... ⊇ A k +1 /ρ +1 = ∅ initially. The sets remain unchanged duringthe updates. 
Next, we need to maintain the values d(v, A_i), 0 ≤ i ≤ k + 1/ρ + 1, for all nodes v ∈ V. This can be performed by computing a shortest path tree (using Algorithm 1) rooted at a dummy node s_i connected to all nodes in A_i. We denote the estimate obtained by maintaining this distance by L(v, A_i). Let ˆd = (1 + ǫ)d. We can use the Even-Shiloach [SE81] algorithm up to depth ˆd to maintain all these distances in O(ˆd·m) time. The pivots p(v), ∀v ∈ V, can also be maintained in this process.

Maintaining the clusters. Recall that for z ∈ A_i \ A_{i+1} we have v ∈ C(z) if and only if d(z, v) < d(v, A_{i+1}). After each deletion, for each node v and each cluster center z we first check whether the distance estimate L(z, v) has increased. If L(z, v) ≥ L(v, A_{i+1}), then v is removed from C(z). The more subtle part is adding nodes to new clusters. For each 0 ≤ i < k + 1/ρ + 1, we define a set X_i consisting of all vertices whose distance to A_i has increased as a result of a deletion, but where this distance is still at most ˆd. The sets X_i can be computed while maintaining L(v, A_i); we need only maintain a single tree rooted at the dummy node s_i.

Note that a node v joins C(w) only after an increase in L(v, A_{i+1}). Using this observation, after each deletion, for every v ∈ X_{i+1}, z ∈ B_i(u) \ B_i(v), and each edge (u, v) ∈ E, we check whether L(z, u) + w(u, v) < L(v, A_{i+1}). If so, then v joins C(z), and v is pushed to a priority queue Q(z). This priority queue Q(z) stores the distances in the tree T(z) rooted at z. These nodes join the clusters C(z), but there may be other nodes that also need to join C(z) as a result of this change.
Hence, after this initial phase, for each z ∈ A_i \ A_{i+1} where Q(z) ≠ ∅, we run the modified Dijkstra's algorithm. Recall that in the modified Dijkstra's algorithm, when we explore the neighbors of a node x, we only relax an edge (x, y) if L(z, x) + w(x, y) < L(y, A_{i+1}). Then [RZ04] shows that this process correctly maintains the clusters. We then repeat this process for all the k + 1/ρ + 1 iterations. A summary of this algorithm for adding edges to the clusters is presented in Algorithm 2. The input to this algorithm is the graph G, the distance d, and a set of edges E− that were deleted (or updated by a weight increase).

The update time analysis is more complicated, and is a consequence of the following lemma, which is a straightforward extension of a proof in [RZ04]. We include the full proof in Appendix B.

Lemma 9. For every v ∈ V and 0 ≤ i ≤ k + 1/ρ + 1, the expected total number of times the edges incident on v are scanned over all trees for each w ∈ A_i (i.e., trees on C(w)) is O(ˆd/q_i), where q_i is the sub-sampling probability.

By combining the analysis of the modified Dijkstra algorithm of [TZ05], Theorem 5, and Lemma 9, we can show that a d-restricted hopset with the following guarantees can be constructed:

Theorem 10. Fix ǫ > 0, k ≥ 1, and a parameter ρ. Given a graph G = (V, E, w) with integer and polynomial weights, subject to edge deletions, we can maintain a (d, β, ǫ)-hopset with β = O(((k + 1/ρ)/ǫ)^{k+1/ρ+1}) in O(d(m + n^{1+1/k})n^ρ) total time. The algorithm works correctly with high probability.

Improved running time. The algorithm in Theorem 10 is too slow. Therefore, in the rest of this section we describe how we can get an improved running time using a hierarchical construction of restricted hopsets on a sequence of scaled graphs.
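The pruning rule of the modified Dijkstra above can be sketched as follows; `dist_to_next` stands for the maintained estimates L(·, A_{i+1}) (function and variable names are ours):

```python
import heapq, math

def grow_cluster(adj, z, dist_to_next):
    """Dijkstra from a center z with the Thorup-Zwick pruning rule: an
    edge (u, v) is relaxed only if the tentative distance from z beats
    d(v, A_{i+1}); nodes never reached this way stay outside C(z)."""
    d = {z: 0}
    pq = [(0, z)]
    while pq:
        du, u = heapq.heappop(pq)
        if du > d.get(u, math.inf):
            continue  # stale queue entry
        for v, w in adj[u].items():
            nd = du + w
            if nd < dist_to_next[v] and nd < d.get(v, math.inf):
                d[v] = nd
                heapq.heappush(pq, (nd, v))
    return d  # v ∈ C(z) iff v appears as a key
```

This pruning is what lets the analysis charge only O(ˆd/q_i) expected edge scans per node: an exploration stops as soon as the cluster condition fails.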
As explained, one idea is that we can add hopset edges for smaller scales and use the added edges in computing distances for larger scales. We first state this insight more formally for a static hopset in the following lemma. However, for utilizing this idea dynamically we need to combine it with other structural properties of our hopsets.

Algorithm 2: Monotone d-restricted hopset. Adaptation of [RZ04].

Function UpdateClusters(G, E−, E+, d, ǫ):
  Add edges (x, y) ∈ E+ to any tree T(z) s.t. (x, y) ∈ T(z)
  for i = 0 to k + 1/ρ + 1 do
    C = ∅
    Remove edges E− from the ES tree maintaining distances L(·, A_{i+1})
    Remove hopset edges (z, v), and remove v from T(z), where L(z, v) ≥ L(v, A_{i+1})
    X_{i+1} := set of nodes whose distances to A_{i+1} have increased due to the removal of E−, yet remained at most d(1 + ǫ)
    for ∀v ∈ X_{i+1} do
      for (u, v) ∈ E do
        for ∀z ∈ B_i(u) \ B_i(v) do
          if L(z, u) + w(u, v) < L(v, A_{i+1}) then
            C = C ∪ {z}
            Relax(Q(z), u, v)  /* Update the estimate from z to v */
    for ∀z ∈ C do Dijkstra(z)
  return (E−, E+)

Function Dijkstra(z):
  while Q(z) ≠ ∅ do
    u = ExtractMin(Q(z))
    B(u) = B(u) ∪ {z}
    for ∀(u, v) ∈ E : z ∉ B(v) do
      if L(z, u) + w(u, v) < L(v, A_{i+1}) then
        Relax(Q(z), u, v)  /* Update the estimate from z to v */

Function Relax(Q(z), u, v):  /* Distances L(z, v) for each tree T(z) are maintained in Q(z) */
  d′ := L(z, u) + w(u, v)
  if d′ ≤ d then
    if v ∈ Q(z) then decrease-key(Q(z), v, d′)
    else if L(z, v) > d′ then Insert(Q(z), v, d′)
    Add node v to T(z)
    InsertEdge(T(z), (z, v), d′)  /* As defined in Algorithm 1 */
    E+ = E+ ∪ {(z, v)}
    Add (z, v) to E− if L(z, v) has increased.

Lemma 11.
Given a graph G = (V, E) and 0 < ǫ < 1, the set of (β, ǫ)-hopsets H_r, 0 ≤ r < j, one for each distance scale (2^r, 2^{r+1}], provides a (1 + ǫ)-approximate distance for any pair x, y ∈ V with d(x, y) ≤ 2^{j+1}, using paths of at most 2β + 1 hops.

Proof. We show this by induction on j. Let π be the shortest path between x and y in G. Then π can be divided into two segments, where for each segment there is a (1 + ǫ)-stretch path using edges in G ∪ ⋃_{r=0}^{j−1} H_r. Let [x, z] and [z′, y] be the segments on π, each of which has length at most 2^j. In other words, z is the furthest point from x on π that has distance at most 2^j from x, and z′ is the next point on π. Then we have:

d^{(2β+1)}_{G ∪ ⋃_{r=0}^{j−1} H_r}(x, y) ≤ d^{(β)}_{G ∪ ⋃_{r=0}^{j−1} H_r}(x, z) + w(z, z′) + d^{(β)}_{G ∪ ⋃_{r=0}^{j−1} H_r}(z′, y)
≤ (1 + ǫ) d_G(x, z) + w(z, z′) + (1 + ǫ) d_G(z′, y)
≤ (1 + ǫ) d_G(x, y)

This implies that it is enough to compute (2β + 1)-hop limited distances in restricted hopsets for each scale. For using this idea in dynamic settings we have to deal with some technicalities. We should show that we can combine the rounding with the modification needed for handling insertions. We define a scaled graph using Lemma 7 as follows: G_j := Scale(G ∪ ⋃_{r=0}^{j} H_r, 2^j, ǫ_0, 2β + 1). Here we set R = 2^j, ℓ = 2β + 1, and ǫ_0 is a parameter that we tune later. We first describe the operations performed on this scaled graph.
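The hop-limited quantity d^{(h)} used above can be computed with h rounds of Bellman-Ford; the toy example below (our own, not from the paper) shows how shortcut edges from a smaller scale cut the number of hops needed at the next scale:

```python
import math

def hop_limited_dist(n, edges, s, t, h):
    """d^{(h)}(s, t): length of a shortest s-t path using at most h edges,
    via h rounds of Bellman-Ford on an undirected edge list (u, v, w)."""
    d = [math.inf] * n
    d[s] = 0
    for _ in range(h):
        nd = d[:]  # relax against the previous round only
        for u, v, w in edges:
            nd[v] = min(nd[v], d[u] + w)
            nd[u] = min(nd[u], d[v] + w)
        d = nd
    return d[t]
```

On a 4-edge unit path 0–1–2–3–4, two hops cannot reach node 4, but adding the shortcuts (0, 2) and (2, 4) of weight 2 (exact "hopset" edges for the smaller scale) makes d^{(2)}(0, 4) = 4, mirroring how each H_r reduces the hops needed for the next distance scale.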
We then explain how we can put things together over all scales to get the desired guarantees. The key insight in scaling G ∪ ⋃_{r=0}^{j} H_r is that we can obtain H_{j+1} by computing an O(ℓ)-restricted hopset of G_j (using the algorithm of Lemma 6) and scaling back the weights of the hopset edges.

In addition to the graph G undergoing deletions, our decremental algorithm maintains, for each 0 ≤ j ≤ log W:
• The set ¯H_j = ⋃_{r=0}^{j} H_r, the union of all hopset edges for distance scales up to (2^j, 2^{j+1}].
• The scaled graphs G_0, ..., G_j.
• The data structure obtained by constructing an O(β/ǫ)-restricted hopset on G_j by running Algorithm 2 with the appropriate parameter ǫ_0 < ǫ. We denote this data structure by D_j.

The data structure D_j is maintained by running Algorithm 2 on G_j, maintaining the clusters and hence the bunches B(v) for all v ∈ V. Given D_j, we can maintain H_{j+1}, where the edge weights in clusters are assigned by computing approximate distances based on Algorithm 1 on each cluster, as follows: in a tree rooted at a cluster center z, we set the weight w_{j+1} of an edge (z, v) to min_{r=0}^{j} η(2^r, ǫ_0) L_r(z, v), where L_r(z, v) is the level of v in G_r after running the monotone ES tree up to depth D = ⌈(β+1)/ǫ_0⌉. We then maintain a restricted hopset on the scaled graph G_j, and by unscaling its weights we get H_{j+1}.

Once each data structure D_j is initialized with a graph, it can execute a single operation Update(E−, E+), which updates the maintained graph by removing the edges of E− and adding the edges of E+ by running Algorithm 2. The set E− is the set of edges corresponding to nodes leaving clusters. The operation returns a pair of edge sets (E−, E+) that should be removed from or added to D_j.
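Lemma 7 itself is not restated in this excerpt; the sketch below assumes a rounding of the standard form used for hop-limited scaling (an assumption on our part): with η = ǫ·R/ℓ, every weight is rounded up to a multiple of η, so a path of at most ℓ hops and length about R accumulates at most ǫ·R additive error while weights become small integers.

```python
import math

def scale_weight(w, R, eps, ell):
    """Round w up to an integer multiple of eta = eps * R / ell
    (hypothetical form of the Scale operation behind Lemma 7)."""
    eta = eps * R / ell
    return math.ceil(w / eta)          # small integer weight in G_j

def unscale(w_scaled, R, eps, ell):
    """Map a scaled weight back to the original metric."""
    return w_scaled * (eps * R / ell)
```

Each edge gains at most η after unscaling, so an ℓ-hop path of length at most R is stretched by at most a (1 + ǫ) factor, and is never underestimated.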
Additionally, by multiplying these distances by η(2^j, ǫ_0) for the appropriate ǫ_0, we can recover a pair (H−, H+) of edge sets, where H− is the set of edges that are removed from the hopset and H+ is the set of edges added to the hopset as a result of the update. Note that a change in the weight of a hopset edge is equivalent to removing the edge and re-adding it with the new weight.

In Algorithm 3 we update the data structures described as follows: we run Algorithm 2 for distances bounded by d = ⌈(β+1)/ǫ_0⌉, for j = 0, ..., log W in increasing order of j, to compute the hopset edges H_j. After processing all the changes in the scaled graph G_j, we add the inserted edges to G_{j+1}. Then we process the changes in G_{j+1} by running the algorithm of Section 3.2, and repeat until all distance scales are covered. As explained, when distances increase, a node may join a new cluster, which leads to a set of insertions in H and, in turn, insertions in a sequence of graphs G_j. We use an argument similar to Lemma 9 on each scaled graph to get the overall update time. In a way, we can view the added edges passed to each scale as a batch of distance increases between the corresponding endpoints. This means we are not exactly in the setting of [RZ04], where only one deletion occurs at a time, but the exact same analysis as in Lemma 9 still holds.

Algorithm 3: Updating the hopset after deleting an edge e.
Input: 0 < ǫ, 0 < ǫ_0 < 1; set d = ⌈(β+1)/ǫ_0⌉.
  (E−, E+) := ({e}, ∅)
  for j = 0, . . . , ⌊log W⌋ do
    (E−, E+) := UpdateClusters(G_j, E−, E+, d, ǫ_0)  /* Run Algorithm 2 on G_j */
    Update H_{j+1} by unscaling the weights of E+ and removing E− (Lemma 7)  /* add edges for the next scale */
    Update G_{j+1} based on Lemma 7 to reflect the changes to H_{j+1}

We summarize the algorithm obtained by maintaining this data structure over all scales in Algorithm 3.
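The control flow of Algorithm 3 can be sketched with a stub per scale (the class and method names are ours; a real D_j would run Algorithm 2 on G_j and report the hopset changes to forward):

```python
class ScaleStub:
    """Hypothetical stand-in for the per-scale data structure D_j."""
    def __init__(self, j, log):
        self.j, self.log = j, log

    def update(self, e_minus, e_plus):
        self.log.append(self.j)
        # a real D_j would run Algorithm 2 on G_j here and return the
        # hopset edges removed/added, to be pushed into G_{j+1}
        return e_minus, e_plus

def propagate_deletion(scales, e):
    """Skeleton of Algorithm 3: one deletion in G cascades through the
    scales j = 0, ..., log W in increasing order; insertions into H_{j+1}
    become updates to the next scaled graph G_{j+1}."""
    e_minus, e_plus = {e}, set()
    for D in scales:
        e_minus, e_plus = D.update(e_minus, e_plus)
    return e_minus, e_plus
```

The essential point is the fixed low-to-high order of the loop: a scale j may only consume changes produced by scales below it, which is what the hierarchical stretch argument of Section 3.3 relies on.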
Note that we need to update both the restricted hopsets D_j on the scaled graphs and the hopsets H_j obtained by scaling back the distances using Lemma 7.

Running time (proof of Lemma 6). We can now put together all the steps discussed to maintain the data structure of Lemma 6. In particular, for obtaining a 2^{j+1}-restricted hopset, we maintain the data structure of Lemma 6 on G_j for each cluster rooted at a node z ∈ A_i \ A_{i+1}, setting ℓ = 2β + 1. We then use Lemma 8 and Theorem 10 to compute d-restricted hopsets for d = O(β/ǫ). When weights are polynomial, we get a running time of Õ((β/ǫ)(m + ∆)n^ρ), where ∆ is the overall number of hopset edges added over all updates.

In this section, we prove the stretch incurred for a single scale by combining properties of the monotone ES-tree algorithm of Section 3.1 with the static hopset argument and the rounding framework. We will then show that by setting the appropriate parameters we can prove the stretch bound in Lemma 6. Next, we use the path-doubling observation of Lemma 11 and the properties of the monotone ES tree described in Section 3.1 to prove the stretch incurred in each scale. We denote the stretch of ¯H_j by (1 + ǫ_j). We extend the static hopset argument to the dynamic setting in the following lemma. For getting the final stretch and hopbound, we can set the parameters ǫ_0 = ǫ′ (the error incurred by rounding) and δ = ǫ′/(k + 1/ρ + 1) in the following lemma, and get 1 + ǫ_j = (1 + 3ǫ′)^j.

Theorem 12. Given a graph G = (V, E), assume that we have maintained a (2^j, β, (1 + ǫ_j))-restricted hopset ¯H_j, and let H_{j+1} be the hopset obtained by running Algorithm 3 for any given 0 < ǫ_0 < 1 on G ∪ ¯H_j. Fix 0 < δ ≤ 1/(8(k + 1/ρ + 1)), and consider a pair x, y ∈ V where d_{t,G}(x, y) ∈ [2^j, 2^{j+1}]. Then for 0 ≤ i ≤ k + 1/ρ + 1, one of the following conditions holds:
1. d^{((3/δ)^i)}_{G ∪ ¯H_{j+1}}(x, y) ≤ (1 + 8δi)(1 + ǫ_j)(1 + ǫ_0) d_{t,G}(x, y), or,
2. There exists z ∈ A_{i+1} such that d^{((3/δ)^i)}_{G ∪ ¯H_{j+1}}(x, z) ≤ 2(1 + ǫ_j)(1 + ǫ_0) d_{t,G}(x, y).

Moreover, by running Algorithm 1 on G_{j+1} up to depth ⌈(β+1)/ǫ_0⌉, and applying Lemma 7, we can maintain (1 + ǫ_{j+1})-approximate single-source distances up to distance 2^{j+2} from a fixed source s in G, where 1 + ǫ_{j+1} = (1 + ǫ_j)(1 + ǫ_0)^2(1 + ǫ) and β = (3/δ)^{k+1/ρ+1}.

Proof. We use a double induction on i and time t. First, we prove a claim showing that, using the properties of the monotone ES tree and the scaling, when we add an edge to ¯H_{j+1} it has the desired stretch. This claim captures one of the main differences from the static argument (see Appendix A). Let L_{t,j}(u, v) denote the level of node v in the tree rooted at u after running Algorithm 2 up to depth D = ⌈(β+1)/ǫ_0⌉ on the graph G_j. We assume, by the conditions of the lemma, that we are given ¯H_j and have maintained the clusters in G_j. To complete our argument, we will later show how, given the hopsets of scale [2^j, 2^{j+1}], we can compute SSSP distances for the next scale.

Claim 2. Let v ∈ B(u) (or u ∈ C(v)) be such that d_t(u, v) ≤ 2^{j+1}, and suppose the edge (u, v) is added to H_{j+1} after running Algorithm 2 on G_0, ..., G_j with D = ⌈(β+1)/ǫ_0⌉. Let w_{j+1}(u, v) := min_{r=0}^{j} η(2^r, ǫ_0) L_r(u, v). Then we have d_{t,G}(u, v) ≤ w_{j+1}(u, v) ≤ (1 + ǫ_j)(1 + ǫ_0) d_{t,G}(u, v).

Proof. We use an induction on time for this claim. At time t = 0, the claim follows by the static argument of Lemma 11. After running Algorithm 2, the weight assigned to (v, u) is min_{r=0}^{j} η(2^r, ǫ_0) L_{t,r}(v, u). The first case is that node u is stretched in the monotone ES tree rooted at v on G_j. Then we have L_{t,j}(v, u) = L_{t−1,j}(v, u). In this case, the claim follows by induction on time, since we have maintained η(2^j, ǫ_0) L_{t−1,j}(v, u) ≤ (1 + ǫ_0)(1 + ǫ_j) d_{t−1,G}(v, u) ≤ (1 + ǫ_0)(1 + ǫ_j) d_{t,G}(v, u). The second case is when node u is not stretched on G_j.
Let d(v, u) ∈ [2^r, 2^{r+1}], where r ≤ j. By the path doubling of Lemma 11 we know that there exists a path with 2β + 1 hops between v and u in G ∪ ¯H_r with stretch (1 + ǫ_r). We know from Lemma 7 that this path corresponds to a path of depth at most ⌈(β+1)/ǫ_0⌉ in G_r, and after scaling back it has length at most (1 + ǫ_j)(1 + ǫ_0) d_G(v, u) in G. In other words, we have:

w_{j+1}(v, u) = min_{r=0}^{j} η(2^r, ǫ_0) L_{t,r}(v, u) ≤ (1 + ǫ_0) d^{(2β+1)}_{t, G ∪ ¯H_r}(v, u) ≤ (1 + ǫ_r)(1 + ǫ_0) d_{t,G}(v, u) ≤ (1 + ǫ_j)(1 + ǫ_0) d_{t,G}(v, u)

Also, note that we never underestimate any distances. The rounding in Lemma 7 does not underestimate the distances, and if an edge is stretched, that means we are assigning a weight larger than the one obtained by the rounding.

This claim implies that the weights of hopset edges assigned by the algorithm correspond to approximate distances between their endpoints. Let d_{t,j}(x, y) := min_{r=0}^{j} η(2^r, ǫ_0) L_{t,r}(x, y), which is the estimate we obtain for the distance between x and y after scaling back the distances on the graphs G_r; in other words, this is the hop-bounded distance after running the monotone ES tree on each G_r and scaling up the weights.

For any time t and the base case i = 0, we have three cases. If y ∈ B(x), then the edge (x, y) is in the hopset H_{j+1}, and by Claim 2 the weight assigned to this edge is at most (1 + ǫ_j)(1 + ǫ_0) d_{t,G}(x, y). In this case the first condition of the theorem holds. Otherwise, if x ∈ A_1, then z = x trivially satisfies the second condition. Otherwise we have x ∈ A_0 \ A_1, and by setting z = p(x) we know that there is an edge (x, z) ∈ ¯H_{j+1} such that d_{t,j}(x, z) ≤ (1 + ǫ_0) d_{G ∪ ¯H_j}(x, y) (by the definition of p(x) and using the same argument as above). Hence the second condition holds.

By the inductive hypothesis, assume the claim holds for i. Consider the shortest path π(x, y) between x and y.
We divide this path into 1/δ segments of length at most δ·d_{t,G}(x, y), and denote the a-th segment by [u_a, v_a], where u_a is the node closest to x (the first node of distance at least a·δ·d_{t,G}(x, y)) and v_a is the node furthest from x on this segment (of distance at most (a + 1)·δ·d_{t,G}(x, y)).

We can then use the induction hypothesis on each segment. First consider the case where the first condition holds for i on all the segments. Then there is a path of (3/δ)^i · (1/δ) ≤ (3/δ)^{i+1} hops consisting of the hop-bounded paths on the segments. We can show that this path satisfies the first condition for i + 1. In other words,

d^{((3/δ)^{i+1})}_{t, G ∪ ¯H_{j+1}}(x, y) ≤ Σ_{a=1}^{1/δ} [ d^{((3/δ)^i)}_{t, G ∪ ¯H_{j+1}}(u_a, v_a) + d^{(1)}_{t,G}(v_a, u_{a+1}) ] ≤ (1 + 8δi)(1 + ǫ_j)(1 + ǫ_0) d_{t,G}(x, y)

Next, assume that there are at least two segments for which the first condition does not hold for i. (If there is only one such segment, a similar but simpler argument can be used.) Let [u_l, v_l] be the first such segment (i.e., the segment closest to x, where u_l is the first and v_l is the last node on the segment), and let [u_r, v_r] be the last such segment.

First, by the inductive hypothesis, and since we are in the case that the second condition holds for the segments [u_l, v_l] and [u_r, v_r], we have:
• d^{((3/δ)^i)}_{G ∪ ¯H_{j+1}}(u_l, z_l) ≤ 2(1 + ǫ_0)(1 + ǫ_j) d_{t,G}(u_l, v_l), and,
• d^{((3/δ)^i)}_{t, G ∪ ¯H_{j+1}}(v_r, z_r) ≤ 2(1 + ǫ_0)(1 + ǫ_j) d_{t,G}(u_r, v_r)

Again, we consider two cases. First, in case z_r ∈ B(z_l) (or z_l ∈ C(z_r)), we have added a single hopset edge (z_r, z_l) ∈ ¯H_{j+1}.
Note that d_{t,G}(z_r, z_l) ≤ 2^{j+1}, since d_{t,G}(z_r, z_l) ≤ d_{t,G}(x, y) ≤ 2^{j+1}. Hence, by Claim 2, the weight we assign to (z_r, z_l) is at most (1 + ǫ_0)(1 + ǫ_j) d_{t,G}(z_r, z_l). On the other hand, by the triangle inequality and the above inequalities (based on the induction hypothesis), we have:

d^{(1)}_{¯H_{j+1}}(z_l, z_r) ≤ (1 + ǫ_0)(1 + ǫ_j) d_G(z_l, z_r)   (1)
≤ (1 + ǫ_0)(1 + ǫ_j)[ d^{((3/δ)^i)}_{G ∪ ¯H_{j+1}}(u_l, z_l) + d_G(u_l, v_r) + d^{((3/δ)^i)}_{G ∪ ¯H_{j+1}}(z_r, v_r) ]   (2)

By applying the inductive hypothesis to the segments before [u_l, v_l] and after [u_r, v_r], we have a path with at most (3/δ)^i hops for each of these segments, satisfying the first condition for the endpoints of the segment. Also, we have a (2(3/δ)^i + 1)-hop path going through u_l, z_l, z_r, v_r that satisfies the first condition for u_l, v_r.

Putting all of these together, we will show that there is a path of hopbound (3/δ)^{i+1} satisfying the first condition. In particular, we have (we drop the subscript t here):

d^{((3/δ)^{i+1})}_{G ∪ ¯H_{j+1}}(x, y) ≤ Σ_{a=1}^{l−1} [ d^{((3/δ)^i)}_{G ∪ ¯H_{j+1}}(u_a, v_a) + d^{(1)}_G(v_a, u_{a+1}) ] + d^{((3/δ)^i)}_{G ∪ ¯H_{j+1}}(u_l, z_l)   (3)
+ d^{(1)}_{¯H_{j+1}}(z_l, z_r) + d^{((3/δ)^i)}_{G ∪ ¯H_{j+1}}(z_r, v_r) + d^{(1)}_G(v_r, u_{r+1})   (4)
+ Σ_{a=r+1}^{1/δ} [ d^{((3/δ)^i)}_{G ∪ ¯H_{j+1}}(u_a, v_a) + d^{(1)}_G(v_a, u_{a+1}) ]   (5)
≤ (1 + 8δi)(1 + ǫ_j)(1 + ǫ_0)[ d_G(x, u_l) + d_G(v_r, y) ] + d_G(u_l, v_r)   (6)
+ (1 + ǫ_0)(1 + ǫ_j)[ 2 d_G(u_l, z_l) + 2 d_G(z_r, v_r) ]   (7)
≤ (1 + ǫ_0)(1 + ǫ_j)[ 8δ d_G(x, y) + (1 + 8δi) d_G(x, y) ]   (8)
≤ (1 + 8δ(i + 1))(1 + ǫ_0)(1 + ǫ_j) d_G(x, y)   (9)

In the first inequality we used the induction on i for each segment, and the triangle inequality. In the second inequality we are using the fact that the nodes u_a, v_a are all on the shortest path between x and y in G, and we replace d^{(1)}_{¯H_{j+1}}(z_l, z_r) using inequality (2).
In line (8) we used the fact that the length of each segment is at most δ · d_G(x, y). Hence we have shown that the first condition in the lemma statement holds.

Finally, consider the case where z_r ∉ B(z_l). If z_l ∉ A_{i+2}, we consider z = p(z_l), where p(z_l) ∈ A_{i+2}. We now claim that this choice of z satisfies the second lemma condition. We have added the edge (z_l, z) to the hopset. Since z_r ∉ B(z_l), we have d_{t−1,G}(z_l, p(z_l)) ≤ d_{t−1,G}(z_l, z_r) ≤ d_{t,G}(x, y) ≤ 2^{j+1}. Therefore we can use Claim 2 on the edge (z_l, p(z_l)):

d^{((3/δ)^{i+1})}_{G ∪ ¯H_{j+1}}(x, z) ≤ Σ_{a=1}^{l−1} [ d^{((3/δ)^i)}_{G ∪ ¯H_{j+1}}(u_a, v_a) + d^{(1)}_G(v_a, u_{a+1}) ] + d^{((3/δ)^i)}_{G ∪ ¯H_{j+1}}(u_l, z_l) + d^{(1)}_{¯H_{j+1}}(z_l, z)   (10)
≤ (1 + 8δi)(1 + ǫ_0)(1 + ǫ_j) d_G(x, u_l) + d^{((3/δ)^i)}_{G ∪ ¯H_{j+1}}(u_l, z_l) + (1 + ǫ_0)(1 + ǫ_j) d_{¯H_{j+1}}(z_l, z_r)   (11)
≤ (1 + 8δi)(1 + ǫ_0)(1 + ǫ_j) d_G(x, u_l) + d^{((3/δ)^i)}_{G ∪ ¯H_{j+1}}(u_l, z_l)   (12)
+ (1 + ǫ_0)(1 + ǫ_j)[ 2 d^{((3/δ)^i)}_{G ∪ ¯H_{j+1}}(z_l, u_l) + d_G(u_l, v_r) + d^{((3/δ)^i)}_{G ∪ ¯H_{j+1}}(v_r, z_r) ]   (13)
≤ (1 + 8δi)(1 + ǫ_0)(1 + ǫ_j) d^{((3/δ)^i)}_{G ∪ ¯H_{j+1}}(x, v_r) + 6δ(1 + ǫ_j) d_G(x, y)   (14)
≤ 2(1 + ǫ_0)(1 + ǫ_j) d_G(x, y)   (15)

In the last inequality we used the fact that we set δ ≤ 1/(8(k + 1/ρ + 1)) and thus 8δi < 1. The only remaining case is when z_l ∈ A_{i+2}, in which case a similar reasoning follows by setting z = z_l.

Finally, we prove that after adding the hopset edges H_{j+1} we can maintain approximate single-source shortest-path distances from a given source s. We run the monotone ES tree of Algorithm 1 up to depth ⌈(β+1)/ǫ_0⌉ on all of the scaled graphs G_0, ..., G_{j+1}. We let our distance estimate d_{t,j+1}(s, v) be min_r η(2^r, ǫ_0) L_{t,r}(s, v), where L_{t,r}(s, v) is the level of v in G_r in the ES tree of depth ⌈(β+1)/ǫ_0⌉ rooted at s.
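The final estimate is simply a minimum over the per-scale, depth-bounded levels after unscaling; as a one-line sketch (data layout is ours):

```python
def sssp_estimate(levels, etas, v):
    """d(s, v) estimate = min_r eta_r * L_r(s, v), where levels[r][v] is
    the depth-bounded monotone ES-tree level of v in the scaled graph G_r
    and etas[r] is the corresponding unscaling factor eta(2^r, eps_0)."""
    return min(eta * L[v] for eta, L in zip(etas, levels))
```

Taking the minimum over all scales is safe because no scale ever underestimates a distance; the scale whose range contains d(s, v) supplies the (1 + ǫ_{j+1})-approximate value.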
Note that by running Algorithm 2 we are also maintaining the same distances on each cluster, while also maintaining the nodes that leave and join each cluster. Let v ∈ V be such that d_{t,G}(s, v) ≤ 2^{j+2}, and let L_{t,j+1}(s, v) be the level of v in the monotone ES tree of G_{j+1} maintained up to depth ⌈(β+1)/ǫ_0⌉. For any v ∈ V such that d_G(s, v) ≤ 2^{j+2}, we want to show:

d_{t,j+1}(s, v) := η(2^{j+1}, ǫ_0) L_{t,j+1}(s, v) ≤ (1 + ǫ_{j+1}) d_{t,G}(s, v)

To show this, we follow a structure similar to the one we used for ¯H_{j+1}. We first argue that for any pair of nodes x_0, y_0 where d_{t,G}(x_0, y_0) ≤ 2^{j+1}, in the monotone ES tree on G_{j+1} we can maintain the levels such that for each 0 ≤ i ≤ 1/ρ + k + 1 one of the following conditions holds:
1. There is a path π(x_0, y_0) of depth at most ⌈(β+1)/ǫ_0⌉ in G_{j+1}, where w_{G ∪ ¯H_{j+1}}(π(x_0, y_0)) ≤ η(2^{j+1}, ǫ_0) w_{G_{j+1}}(π(x_0, y_0)) ≤ (1 + 8δi)(1 + ǫ_j)(1 + ǫ_0) d_{t,G}(x_0, y_0), or,
2. There exists z ∈ A_{i+1} such that d_{t,j+1}(x_0, z) ≤ η(2^{j+1}, ǫ_0) L(x_0, z) ≤ 2(1 + ǫ_j)(1 + ǫ_0) d_{t,G}(x_0, y_0).

Then we can use this to show that there exists a path π′ of depth at most ⌈(β+1)/ǫ_0⌉ in G_{j+1} between s and v with stretch (1 + ǫ_{j+1}).

To show this, we consider a case-by-case argument similar to the above. First assume that s ∈ B_i(v) (i.e., v ∈ C(s)) for some iteration 0 ≤ i ≤ 1/ρ + 1 + k, and that we have added a hopset edge with weight w_{j+1} to ¯H_{j+1}. This case is similar to Claim 2, but now we have d(s, v) ≤ 2^{j+2}. In this case the edge (s, v) was added to H_{j+1}. If the edge (s, v) is stretched, then we set L_{t,j+1}(s, v) = L_{t−1,j+1}(s, v), and by induction on time we have η(2^{j+1}, ǫ_0) L_{t,j+1}(s, v) = η(2^{j+1}, ǫ_0) L_{t−1,j+1}(s, v) ≤ (1 + ǫ_{j+1}) d_{t,G}(s, v). If this edge is not stretched, then by Lemma 7, after scaling we get a distance of at most (1 + ǫ_j)(1 + ǫ_0)^2 d_{t,G}(s, v), where the additional factor of (1 + ǫ_0) is due to the scaling of G ∪ ¯H_{j+1}. Now consider the case s ∉ B(v).
After all the iterations 0 ≤ i ≤ 1/ρ + 1 + k, the second theorem condition cannot hold (since A_{1/ρ+1+k} = ∅), so the first condition must hold, which states that there is a path with β = (3/δ)^{k+1/ρ+1} hops and length at most (1 + 8δi)(1 + ǫ_0)(1 + ǫ_j) d_{t,G}(s, v) ≤ 2(1 + ǫ_0)(1 + ǫ_j) d_{t,G}(s, v) in G ∪ ¯H_{j+1} between s and v. Also, by the path doubling of Lemma 11, we argued that this also means there is a path with 2β + 1 hops and (1 + ǫ_j)(1 + ǫ_0)-stretch in G ∪ ¯H_{j+1} between s and v, consisting of two paths that each satisfy the first theorem condition for H_{j+1}. Let this be the path π′; we show that it satisfies the two conditions described. First assume that no edge on this path is stretched. Then the stretch argument for L(s, v) clearly holds based on the earlier arguments and Lemma 7. Now let us argue about the possible insertions on this path. Note that by our construction, and in all cases we considered in our hopset argument, an edge (x′, y′) was inserted into H_{j+1} only when x′ ∈ B_i(y′) for some 0 ≤ i ≤ k + 1/ρ + 1, and the weights were assigned based on Claim 2. Using these weights, we will prove another claim that allows us to reason about the possible insertions on π′:

Claim 3. Let (x′, y′) be an edge added to H_{j+1}, and hence to G_{j+1}, with weight w_{G_{j+1}}(x′, y′), due to the fact that x′ ∈ B(y′) (or y′ ∈ C(x′)). Then one of the following holds for the level of node y′ in the monotone ES tree rooted at s:
• L_{t,j+1}(s, y′) = L_{t−1,j+1}(s, y′), and thus η(2^{j+1}, ǫ_0) L_{t,j+1}(s, y′) = η(2^{j+1}, ǫ_0) L_{t−1,j+1}(s, y′) ≤ (1 + ǫ_{j+1}) d_{t,G}(s, y′); or,
• We have L_{t,j+1}(s, y′) ≤ L_{t,j+1}(s, x′) + w_{G_{j+1}}(x′, y′).

Proof. The first case is when the edge (x′, y′) is stretched in the tree rooted at s on G_{j+1}. Note that this is different from the setting in Claim 2, where we were reasoning about the node y′ being stretched in the tree rooted at x′ on G_j.
In this case we set $L_{t,j+1}(s,y') = L_{t-1,j+1}(s,y')$. Since we have maintained distances up to depth $\lceil (\beta+1)/\epsilon \rceil$ on $G_{j+1}$ with stretch $(1+\epsilon_{j+1})$ at time $t-1$, and since we are in the decremental setting, after scaling back we get the desired stretch. The second case is when the edge $(x',y')$ is not stretched in the tree rooted at $s$. The claim then follows by the definition of an edge that is not stretched.

Here we only relied on the weight $w_{G_{j+1}}$ on $H_{j+1}$ obtained from Claim 2 after scaling to the metric in $G_{j+1}$. Note that if $d_{t,G}(x',y')$ belonged to a smaller scale, we have already added an edge that satisfies a similar condition for the corresponding scale. Going back to the hopset argument, we note that every insertion on the path $\pi'$ satisfies the conditions in Claim 3. This lets us show that after the scaling we obtain the desired stretch, in which we lose another factor of $(1+\epsilon)$. Therefore the path $\pi'$ consists only of such edges (in any of the smaller scales) and edges of $G$. Using this fact, we showed that $\pi'$ has stretch $(1+\epsilon_j)(1+\epsilon)(1+\epsilon)$ in $G \cup \bar{H}_{j+1}$. Hence after scaling and running Algorithm 1 on $G_{j+1}$, we know that $\pi'$ has depth $\lceil (\beta+1)/\epsilon \rceil$, and we have the following estimate for $v$:
$$d_{t,j+1}(s,v) \le \min_{1 \le r \le j+1} \eta(2^r,\epsilon)\, L_{t,r}(s,v) \le (1+\epsilon_j)(1+\epsilon)^2\, d_{t,G}(s,v).$$
To show more formally that the returned estimates satisfy the two conditions described, we can use an induction on $i$ very similar to the earlier argument proving the stretch in $G \cup \bar{H}_{j+1}$, applied to two segments of length at most $2^{j+1}$. The only difference in the calculations comes from the insertions, for which we can apply Claim 2 to show that the estimate only loses another factor of $(1+\epsilon)$. The base case is the case $s \in B(v)$ that we discussed. Then we consider the shortest path between $s$ and $v$, and first divide it into two segments $\pi_1$ and $\pi_2$, each of length at most $2^{j+1}$, as in Lemma 21.
Then we divide each of $\pi_1$ and $\pi_2$ into segments and carry out the case-by-case analysis as before. We divide the path into segments of length $\delta d_{t,G}(s,v)$. Assume that for all the segments the first theorem condition holds, and let $[u',v']$ be one such segment. Then we know by the induction hypothesis that there is a path with $(3/\delta)^i$ hops in $G \cup \bar{H}_j$ with stretch $(1+\epsilon_j)(1+\epsilon)(1+\epsilon)$; this same path is also in $G_{j+1}$. If there are segments for which the first condition does not hold, we find $z'_\ell$ and $z'_r$ such that either $z'_r \in B(z'_\ell)$, or there is another node $z'$ for which the second condition holds and $z' \in B(z'_\ell)$. In any of these cases, we can use the argument in Claim 3 for each of the inserted edges in $\pi'$. At a high level, we are arguing that each of these insertions correctly shortcuts the segments covered by its corresponding cluster in $H_{j+1}$. Finally, using the same analysis as before, we have hopset paths that approximate $\pi_1$ and $\pi_2$ in $G \cup \bar{H}_{j+1}$, each with $\beta$ hops. The concatenation of these paths in $G_{j+1}$ approximates $\pi'$, and after scaling and unscaling we incur an additional factor of $(1+\epsilon)$.

Theorem 12 allows us to hierarchically use the restricted hopsets for smaller scales to compute the distances for larger scales, which are in turn used to update the hopset edges in the larger scales. In the following lemma, we show that by setting $\delta = O(\epsilon/(k+1/\rho+1))$ we get the desired stretch for Lemma 6. Next, we use Lemma 6 for all scales, and by setting the appropriate error parameters we prove our overall stretch and hopbound tradeoffs. We also prove the overall update time, using the running time of the monotone ES tree algorithm to run the restricted hopset algorithm on the hopsets obtained for each scale.

Single scale stretch. We now use the stretch argument of Section 3.3 to derive the hopbound and stretch for each scale by setting the appropriate parameters.
As discussed, there are two error factors incurred in each scale. One is caused by the fact that we are using previously added hopset edges, which we denote by $(1+\epsilon_j)$ for scale $j$, and another is caused by the rounding error, which we denote by $(1+\epsilon)$. To get an overall stretch of $(1+\epsilon)$, we set $\epsilon' = \Theta(\epsilon/\log W)$ and use $\epsilon'$ as the rounding parameter.

Corollary 13. After each update $t$, for all $j$, $0 \le j \le \log W$, and any pair $x,y \in V$ with $2^j \le d_{t,G}(x,y) \le 2^{j+1}$, we have
$$d_{t,G}(x,y) \le d^{(\beta)}_{t,G \cup \bar{H}_j}(x,y) \le (1+3\epsilon')^j \cdot d_{t,G}(x,y).$$
Proof. We use induction on $j$. The base case ($j=0$) is satisfied by the paths in $G$, since we can assume without loss of generality that the edge weights are at least one. First, by the induction hypothesis, we have a $(2^j,\beta,(1+3\epsilon')^j)$-hopset, and hence $1+\epsilon_j = (1+3\epsilon')^j$. We then use Theorem 12 with $\epsilon = \epsilon'$ and $\delta = \epsilon'/(k+1/\rho+1)$. For the final iteration $i = k+1/\rho+1$, since $A_{i+1} = \emptyset$, the second item cannot hold. Hence the first item must hold, and since $8\delta i \le \epsilon'$ we have
$$d^{(\beta)}_{t,G \cup \bar{H}_j}(x,y) \le (1+3\epsilon')^{j-1}(1+\epsilon')(1+\epsilon')\, d_{t,G}(x,y) \le (1+3\epsilon')^j\, d_{t,G}(x,y).$$
Here $d_{t,j}(x,y)$ is the sum of weights in the monotone ES tree, which corresponds to the approximate $\beta$-limited distance of $x$ and $y$ in the scaled graph.

Proof of stretch and hopbound in Lemma 6. By setting $\epsilon' = \Theta(\epsilon/\log W)$ in Corollary 13, we get the desired stretch and hopbound.

Putting it together. We now combine the stretch argument of Corollary 13 with the update time bound of Lemma 6 to get the following hopset guarantees.

Theorem 14. The total update time in each scaled graph $G_j$, $0 \le j \le \log W$, over all deletions is $\tilde{O}((\ell/\epsilon')(n^{1+\nu}+m)n^{\rho})$, and hence the total update time for maintaining a $(\beta,\epsilon)$-hopset with hopbound $\beta = O(\frac{\log n}{\epsilon} \cdot (k+1/\rho))^{k+1/\rho+1}$ is $\tilde{O}(\frac{\beta}{\epsilon} \cdot mn^{\rho})$.

Proof. First we use Corollary 13 to prove the stretch and hopbound, by setting $j = \log W$.
For the final scale we have $d_{t,\log W}(u,v) \le (1+3\epsilon')^{\log W}\, d_G(u,v) \le (1+\epsilon)\, d_G(u,v)$. The hopbound obtained is $O(\frac{1}{\epsilon'} \cdot (k+1/\rho))^{k+1/\rho+1} = O(\frac{\log n}{\epsilon} \cdot (k+1/\rho))^{k+1/\rho+1}$. The running time follows by Lemma 6: with $\Delta = O(n^{\nu})$, we get an overall running time of $\tilde{O}(mn^{\rho} \cdot \frac{\beta}{\epsilon})$. Hence the total update time is $\tilde{O}(mn^{\rho})$, and the hopbound $\beta$ is polylogarithmic when $\rho$ and $k$ are constant.

4 Applications

In this section we explain two applications of our decremental hopsets: improved bounds for $(1+\epsilon)$-approximate SSSP and MSSP, and for $(1+\epsilon)(2k-1)$-approximate APSP. For both of these problems we first construct a hopset, where we choose the appropriate hopbound depending on the number of sources. We then use the scaling scheme of Lemma 7 on the obtained graph. Our algorithm for $(1+\epsilon)(2k-1)$-APSP involves maintaining two data structures simultaneously: a $(\beta,\epsilon)$-hopset and a Thorup–Zwick distance oracle [TZ05]. At a high level, the hopset lets us maintain the distance oracle much faster, at the expense of a $(1+\epsilon)$-factor loss in the stretch. (If the weights are not polynomial, the $\log n$ factor is replaced with $\log W$, and a factor of $\log W$ is added to the update time.)

4.1 $(1+\epsilon)$-approximate SSSP and $(1+\epsilon)$-MSSP

Given a graph $G=(V,E)$ and a set $S$ of $s$ sources, our goal is to maintain the distances from each source in $\tilde{O}(sm + mn^{\rho})$ total update time (where $\rho$ is a constant) and constant query time. Once a $(\beta,\epsilon)$-hopset is constructed, we can run Algorithm 1 on all the scaled graphs $G_0, G_1, \ldots, G_{\log W}$ up to depth $O(\beta)$, scale back the distances, and return the smallest value for each source. Similar approaches for $h$-limited SSSP have also been used in previous work, such as [Ber09, HKN14a, KL20]. In the next theorems we argue that, using the same techniques as for maintaining the hopset (similar to the framework of [HKN16]), namely by combining monotone ES trees and scaling, we obtain our SSSP and MSSP results.
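The query pipeline described above rests on computing hop-limited distances on each scaled graph and taking the minimum after scaling back. The following is a minimal illustrative sketch of the hop-limited primitive (not the paper's Algorithm 1): a $\beta$-round Bellman–Ford on an undirected weighted graph, where the function name and the flat edge-list representation are our own.

```python
def hop_limited_dist(n, edges, src, beta):
    """beta-hop-limited distances from src on an undirected weighted graph:
    d[v] is the length of the shortest path from src to v using at most
    beta edges.  Each synchronous round extends paths by one hop, so after
    beta rounds exactly the beta-hop-limited distances are obtained."""
    INF = float('inf')
    d = [INF] * n
    d[src] = 0
    for _ in range(beta):
        nd = d[:]  # relax only against the previous round's estimates
        for u, v, w in edges:
            nd[v] = min(nd[v], d[u] + w)
            nd[u] = min(nd[u], d[v] + w)
        d = nd
    return d
```

Adding a hopset edge can make a far node reachable within the hop budget: on the path $0$–$1$–$2$–$3$ with unit weights, node $3$ is out of reach in $2$ hops, but becomes reachable at distance $3$ once a shortcut $(0,2)$ of weight $2$ is added.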
In particular, after constructing the hopset, we can use Theorem 12 and Theorem 14 to get:

Theorem 15. Given an undirected and weighted graph $G=(V,E)$, there is a decremental algorithm for maintaining $(1+\epsilon)$-approximate distances from a set $S$ of sources in total update time $\tilde{O}(\beta(|S|(m+n^{1+1/(2^k-1)}) + mn^{\rho}))$, where $\beta = O(\frac{\log W}{\epsilon} \cdot (k+1/\rho))^{k+1/\rho+1}$, and with $O(1)$ query time.

Proof. We maintain a $(\beta,\epsilon)$-hopset $H$ based on Theorem 14. Then we run Algorithm 1 on $G \cup H$ from all the sources, on all scaled graphs. The claim then follows by the argument in Theorem 12. In particular, after adding all the hopset edges at time $t$ for all scales, we run the monotone ES tree algorithm rooted at each source again on the union of all scaled graphs $G_0 \cup \ldots \cup G_{\log W}$ (setting the error parameter to $\epsilon/3$), and let the level $L(s,v)$ of a node be $\min_j \eta(2^j,\epsilon)\, L_j(s,v)$, where $L_j(s,v)$ is the level of $v$ in $G_j$ after running the monotone ES tree up to depth $\beta$. By item 3 of Theorem 14, we get an overall stretch of $(1+\epsilon/3)^2 \le 1+\epsilon$. The time required for maintaining the hopset is $\tilde{O}((m+n^{1+\nu})n^{\rho})$, and by setting $n^{\rho} = s$, the time required for maintaining $h$-SSSP from all sources is $O(sm\beta) = \tilde{O}(sm)$ when $s = n^{\Omega(1)}$.

We next state two specific consequences. The first is that when the number of sources is polynomial and the graph is not too sparse, we get a near-optimal (up to polylogarithmic factors) algorithm for $(1+\epsilon)$-MSSP.

Corollary 16. Given an undirected and weighted graph $G=(V,E)$ with $|E| = n^{1+\Omega(1)}$, there is a decremental algorithm for maintaining $(1+\epsilon)$-approximate distances from $s$ sources, where $s = n^{\Omega(1)}$, in total update time $\tilde{O}(sm)$ and with $O(1)$ query time.

When the number of sources is $s = n^{o(1)}$ (e.g., in the case of SSSP), the best tradeoff can be obtained by setting $\rho = \frac{\log\log n}{\sqrt{\log n}}$. We then have $\beta = 2^{\tilde{O}(\sqrt{\log n})}$ and also $n^{\rho} = 2^{\tilde{O}(\sqrt{\log n})}$.
In this case we get improved bounds over the result of [HKN14a], which has a total update time of $m \cdot 2^{\tilde{O}(\log^{3/4} n)}$.

Corollary 17. Given an undirected and weighted graph $G=(V,E)$, there is a decremental algorithm for maintaining $(1+\epsilon)$-approximate distances from $s$ sources, when $0 < \epsilon < 1$ is a constant and $|E| = n \cdot 2^{\tilde{\Omega}(\sqrt{\log n})}$, with total update time $sm \cdot 2^{\tilde{O}(\sqrt{\log n})}$ and with $O(1)$ query time. Hence, we can maintain $(1+\epsilon)$-approximate SSSP in $2^{\tilde{O}(\sqrt{\log n})}$ amortized time.

4.2 $(1+\epsilon)(2k-1)$-approximate APSP

It is known that in the static setting, for any weighted graph $G=(V,E)$, we can construct a Thorup–Zwick [TZ05] distance oracle of size (w.h.p.) $\tilde{O}(n^{1+1/k})$ such that, after a preprocessing time of $\tilde{O}(mn^{1/k})$, we can query $(2k-1)$-approximate distances for any pair of nodes in $O(k)$ time. In this section we show that in the decremental setting we can maintain these distance oracles in total update time $\tilde{O}(mn^{1/k})$ (for graphs that are not too sparse), and we can query $(1+\epsilon)(2k-1)$-approximate distances in $O(k)$ time. This can be done by maintaining a $(\beta,\epsilon)$-hopset and a distance oracle for $G$ at the same time, where $\beta$ is polylogarithmic in $n$. Intuitively, the hopset allows us to update the distances in the distance oracle faster.

Distance oracle algorithm via a hopset. Assume that we are given a $(\beta,\epsilon)$-hopset for $G$. The algorithm for constructing the Thorup–Zwick distance oracle is as follows. Similar to the algorithm in Section 2.1, we define sets $V = A_0 \supseteq A_1 \supseteq \ldots \supseteq A_k = \emptyset$, but here each set $A_{i+1}$ is obtained by sampling each element of $A_i$ with probability $p_i = n^{-1/k}$. As before, for every node $u \in A_i \setminus A_{i+1}$, let $p_i(u) \in A_{i+1}$ be the node of $A_{i+1}$ closest to $u$. The bunch of a node $u$ is the set $B(u) = \cup_{i=0}^{k-1} B_i(u)$, where $B_i(u) = \{v \in A_i : d(u,v) < d(u,A_{i+1})\} \cup \{p_i(u)\}$, and the cluster $C(v)$ of $v$ is defined so that $u \in C(v)$ if and only if $v \in B(u)$.
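To make the sampling hierarchy and the bunch definition concrete, here is an illustrative static sketch. The naming is our own, pivots $p_i(u)$ are omitted for brevity, and `dist` is assumed to be a precomputed all-pairs distance table; this is not the decremental data structure of the paper.

```python
import math
import random

def tz_bunches(nodes, dist, k):
    """Static Thorup-Zwick bunches: A_0 = V, each A_{i+1} samples A_i with
    probability n^{-1/k}, and A_k is forced to be empty.  The bunch of u is
    B(u) = union over i of {v in A_i : dist(u,v) < dist(u, A_{i+1})}."""
    n = len(nodes)
    p = n ** (-1.0 / k)
    A = [list(nodes)]
    for i in range(1, k):
        A.append([v for v in A[i - 1] if random.random() < p])
    A.append([])  # A_k = empty set
    bunch = {u: set() for u in nodes}
    for u in nodes:
        for i in range(k):
            # distance from u to the next level A_{i+1} (infinite if empty)
            d_next = min((dist[u][w] for w in A[i + 1]), default=math.inf)
            bunch[u] |= {v for v in A[i] if dist[u][v] < d_next}
    return bunch
```

Every node lies in its own bunch: at the level $i$ with $u \in A_i \setminus A_{i+1}$ we have $d(u,u) = 0 < d(u, A_{i+1})$.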
The distance oracle consists of the bunches $B(v)$ for all $v \in V$, together with the distances associated with them. Note that the information stored here is also different from the hopset algorithm described in Section 2.1, since there we only added edges for nodes $v \in A_i$ and their bunches. Thorup and Zwick [TZ05] show that this distance oracle has the following properties (in the static setting):

Theorem 18 ([TZ05]). There is a distance oracle of expected size $O(kn^{1+1/k})$ that can answer $(2k-1)$-approximate distance queries on a given weighted and undirected graph $G=(V,E)$ in $O(k)$ time, for any $k \ge 1$. The preprocessing time in the static setting is w.h.p. $\tilde{O}(mn^{1/k})$.

As discussed in Section 3.2, Roditty and Zwick [RZ04] showed how to maintain this data structure in $O(mn)$ total update time for unweighted graphs, but with the size increased to $\tilde{O}(m+n^{1+1/k})$. For weighted graphs their update time can be as large as $O(mn^{1+1/k})$. We will argue that by maintaining a $(\beta,\epsilon)$-hopset along with the distance oracle, we can improve the total update time to $\tilde{O}(\beta mn^{1/k})$. Combined with our hopset construction in Section 3.2, this leads to the desired bounds. More formally:

Theorem 19. Given a weighted and undirected graph $G=(V,E)$, a $(\beta,\epsilon)$-hopset $H$ for $G$, and a parameter $k \ge 1$, we can maintain a distance oracle of size $\tilde{O}(m+n^{1+1/k})$ with stretch $(1+\epsilon)(2k-1)$ in $\tilde{O}(\frac{\beta}{\epsilon} \cdot mn^{1/k})$ total update time.

Proof. We again use the scaling idea described in Section 3.2. Similar to Theorem 15, we consider the sequence $G_0, \ldots, G_j$, where $G_r$, $r \le j$, is the scaling of the graph $G \cup \bar{H}_r$ as defined in Section 3.2 (with rounding parameter $\epsilon$), and $\bar{H}_j$ is a $(2^j,\beta,\epsilon)$-hopset. We then run the algorithm of Roditty and Zwick [RZ04] on $G_j$ up to depth $\lceil \beta/\epsilon \rceil$ for maintaining the clusters and the bunches. The algorithm and the running time analysis are similar to the restricted hopset algorithm described in Section 3.2.
The main differences in the algorithm are the sampling probabilities and the information stored. Using the argument in Lemma 9, we can show that by running this algorithm on $G_j$ with depth $\lceil \beta/\epsilon \rceil$, we can maintain the bunches $B_i(u)$ for all nodes $u \in V$, $0 \le i \le k-1$, in $\tilde{O}(\frac{\ell m}{\epsilon q_i}) = \tilde{O}(\frac{\beta}{\epsilon} mn^{1/k})$ total update time. This algorithm lets us maintain the clusters. We also maintain the distances within clusters, and hence bunches, as follows. For each $v \in V$ and $u \in B(v)$, we run a single-source shortest path computation from $v$ on the scaled graphs $G_0, \ldots, G_{\log W}$ (setting the error parameter to $\epsilon/3$). We then set the distance $d(u,v)$ to $\min_j \eta(2^j,\epsilon)\, L_j(v,u)$, where $L_j(v,u)$ is the level of $u$ in $G_j$ after running the monotone ES tree up to depth $\lceil (\beta+1)/\epsilon \rceil$. Again, when we combine the hopset stretch with the stretch caused by the rounding, we get an overall stretch of $(1+\epsilon/3)^2 \le 1+\epsilon$. The overall stretch is thus $(1+\epsilon)(2k-1)$.

Theorem 20. Given a weighted graph $G=(V,E)$ with polynomial weights, and constants $k \ge 1$ and $0 < \epsilon < 1$, we can maintain a data structure with expected size $\tilde{O}(m+n^{1+1/k})$ and total update time $\tilde{O}(mn^{1/k} \cdot (1/\epsilon)^{O(1)})$ that returns $(1+\epsilon)(2k-1)$-approximate distances for any pair $u,v \in V$ with $O(k)$ query time. (This $k$ should not be confused with the size parameter in the hopset algorithm of Section 2.1; here we only use the fact that the hopset size can be bounded based on the graph density. If $k = \omega(1)$, then a factor of $n^{o(1)}$ is added to the running time.)

Proof. We construct and maintain a $(\beta,\epsilon)$-hopset using Theorem 14. If $m = n^{1+\Omega(1)}$, we can set $\rho = 1/k$, and we set the hopset size parameter $\nu$ to a small constant such that $O(n^{1+\nu}) = O(m)$. If $m = n^{1+o(1)}$, we set $\rho = 1/k - \nu$, where $\nu < 1/k$ is a constant. In both cases the time required for maintaining the hopset is $\tilde{O}(mn^{1/k} \cdot (1/\epsilon)^{O(1)})$. We get hopbound $\beta = O(\log n/\epsilon)^{O(1)} = \mathrm{polylog}(n)$.
Hence we can also maintain the distance oracle in $\tilde{O}(mn^{1/k})$ total update time. The stretch is $(1+\epsilon)(2k-1)$, and the query time remains the same as the static query time, which is $O(k)$. (The choice of the size parameter impacts the polylogarithmic factors; one option is to choose the smallest constant such that the graph size is not smaller than the hopset size.)

References

[ABP18] Amir Abboud, Greg Bodwin, and Seth Pettie. A hierarchy of lower bounds for sublinear additive spanners. SIAM Journal on Computing, 47(6):2203–2236, 2018.
[Ber09] Aaron Bernstein. Fully dynamic (2+ε)-approximate all-pairs shortest paths with fast query and close to linear update time. In Proceedings of the 50th Annual IEEE Symposium on Foundations of Computer Science (FOCS), pages 693–702. IEEE, 2009.
[BLP20] Uri Ben-Levy and Merav Parter. New (α, β) spanners and hopsets. In Proceedings of the Thirty-First Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 1695–1714. SIAM, 2020.
[BR11] Aaron Bernstein and Liam Roditty. Improved dynamic algorithms for maintaining approximate shortest paths under deletions. In Proceedings of the Twenty-Second Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 1355–1365. SIAM, 2011.
[Che18] Shiri Chechik. Near-optimal approximate decremental all pairs shortest paths. In Proceedings of the 59th Annual IEEE Symposium on Foundations of Computer Science (FOCS), pages 170–181. IEEE, 2018.
[Coh00] Edith Cohen. Polylog-time and near-linear work approximation scheme for undirected shortest paths. Journal of the ACM (JACM), 2000.
[DN19] Michael Dinitz and Yasamin Nazari. Massively parallel approximate distance sketches. In OPODIS, 2019.
[EGN19] Michael Elkin, Yuval Gitlitz, and Ofer Neiman. Almost shortest paths and PRAM distance oracles in weighted graphs. arXiv preprint arXiv:1907.11422, 2019.
[EN18] Michael Elkin and Ofer Neiman. Near-optimal distributed routing with low memory. In Proceedings of the ACM Symposium on Principles of Distributed Computing (PODC). ACM, 2018.
[EN19a] Michael Elkin and Ofer Neiman. Hopsets with constant hopbound, and applications to approximate shortest paths. SIAM Journal on Computing, 2019.
[EN19b] Michael Elkin and Ofer Neiman. Linear-size hopsets with small hopbound, and constant-hopbound hopsets in RNC. In Proceedings of the 31st ACM Symposium on Parallelism in Algorithms and Architectures (SPAA), pages 333–341, 2019.
[EN20] Michael Elkin and Ofer Neiman. Near-additive spanners and near-exact hopsets, a unified view. arXiv preprint arXiv:2001.07477, 2020.
[HKN14a] Monika Henzinger, Sebastian Krinninger, and Danupon Nanongkai. Decremental single-source shortest paths on undirected graphs in near-linear total update time. In Proceedings of the 55th Annual IEEE Symposium on Foundations of Computer Science (FOCS), pages 146–155. IEEE, 2014.
[HKN14b] Monika Henzinger, Sebastian Krinninger, and Danupon Nanongkai. A subquadratic-time algorithm for decremental single-source shortest paths. In Chandra Chekuri, editor, Proceedings of the Twenty-Fifth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2014, Portland, Oregon, USA, January 5-7, 2014, pages 1053–1072. SIAM, 2014.
[HKN16] Monika Henzinger, Sebastian Krinninger, and Danupon Nanongkai. Dynamic approximate all-pairs shortest paths: Breaking the O(mn) barrier and derandomization. SIAM Journal on Computing, 45(3):947–1006, 2016.
[HKN17] Monika Henzinger, Sebastian Krinninger, and Danupon Nanongkai. Sublinear-time maintenance of breadth-first spanning trees in partially dynamic networks. ACM Transactions on Algorithms, 13(4):51:1–51:24, 2017.
[HKNS15] Monika Henzinger, Sebastian Krinninger, Danupon Nanongkai, and Thatchaphol Saranurak. Unifying and strengthening hardness for dynamic problems via the online matrix-vector multiplication conjecture. In Rocco A. Servedio and Ronitt Rubinfeld, editors, Proceedings of the Forty-Seventh Annual ACM Symposium on Theory of Computing, STOC 2015, Portland, OR, USA, June 14-17, 2015, pages 21–30. ACM, 2015.
[HP19] Shang-En Huang and Seth Pettie. Thorup–Zwick emulators are universally optimal hopsets. Information Processing Letters, 142:9–13, 2019.
[Kin99] Valerie King. Fully dynamic algorithms for maintaining all-pairs shortest paths and transitive closure in digraphs. In Proceedings of the 40th Annual IEEE Symposium on Foundations of Computer Science (FOCS), pages 81–89. IEEE, 1999.
[KL20] Adam Karczmarz and Jakub Lacki.
Simple label-correcting algorithms for partially dynamic approximate shortest paths in directed graphs. In Symposium on Simplicity in Algorithms (SOSA). SIAM, 2020.
[KS97] Philip N. Klein and Sairam Subramanian. A randomized parallel algorithm for single-source shortest paths. Journal of Algorithms, 1997.
[MPVX15] Gary L. Miller, Richard Peng, Adrian Vladu, and Shen Chen Xu. Improved parallel algorithms for spanners and hopsets. In Proceedings of the Symposium on Parallelism in Algorithms and Architectures (SPAA). ACM, 2015.
[Nan14] Danupon Nanongkai. Distributed approximation algorithms for weighted shortest paths. In Proceedings of the ACM Symposium on Theory of Computing (STOC). ACM, 2014.
[RZ04] Liam Roditty and Uri Zwick. Dynamic approximate all-pairs shortest paths in undirected graphs. In Proceedings of the 45th Annual IEEE Symposium on Foundations of Computer Science (FOCS), pages 499–508. IEEE, 2004.
[RZ12] Liam Roditty and Uri Zwick. Dynamic approximate all-pairs shortest paths in undirected graphs. SIAM Journal on Computing, 41(3):670–683, 2012.
[SE81] Yossi Shiloach and Shimon Even. An on-line edge-deletion problem. Journal of the ACM (JACM), 28(1):1–4, 1981.
[TZ05] Mikkel Thorup and Uri Zwick. Approximate distance oracles. Journal of the ACM (JACM), 52(1):1–24, 2005.
[TZ06] Mikkel Thorup and Uri Zwick. Spanners and emulators with sublinear distance errors. In Proceedings of the Seventeenth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 802–809, 2006.

A Static Hopset Properties

In this section, we briefly overview the (static) hopset algorithm of [EN19b] (which is similar to [HP19]). Given a weighted graph $G=(V,E)$ and a parameter $k$, we first construct a hopset of size $O(n^{1+1/(2^k-1)})$ and hopbound $O(k/\epsilon)^k$. We then modify the algorithm in order to get a better running time at the cost of a worse stretch. We define sets $V = A_0 \supseteq A_1 \supseteq \ldots \supseteq A_k = \emptyset$. Let $\nu = 1/(2^k-1)$.
Each set $A_{i+1}$ is obtained by sampling each element of $A_i$ with probability $q_i = n^{-2^i\nu}$. Hence it can be shown that $E[|A_i|] = n^{1-(2^i-1)\nu}$. For every vertex $u \in A_i \setminus A_{i+1}$, let $p(u) \in A_{i+1}$ be the node of $A_{i+1}$ closest to $u$. The bunch is set to $B(u) = \{v \in A_i : d(u,v) < d(u,A_{i+1})\} \cup \{p(u)\}$. Also, we define the cluster $C(v)$ of $v$ so that $u \in C(v)$ if and only if $v \in B(u)$. It can easily be shown that clusters are connected, in the sense that if a node $v \in C(u)$, then any node $z$ on the shortest path between $v$ and $u$ is also in $C(u)$. As we will see, this property is important for bounding the running time. The hopset consists of the edges $(u,v)$ with $v \in B(u)$, where the weight of $(u,v)$ is set to $d(u,v)$.

The algorithm described leads to a $((k/\epsilon)^k, \epsilon)$-hopset, but the running time can be as large as $O(mn)$. To resolve this, [EN19b] proposed an algorithm with the modified sampling probabilities $q_i = \max(n^{-2^i\nu}, n^{-\rho})$. With this approach the number of iterations becomes $k+1/\rho+1$, but the hopbound increases to $O(\frac{k+1/\rho+1}{\epsilon})^{O(k+1/\rho+1)}$. We briefly review some of the properties of this hopset algorithm as discussed in [EN19b, HP19], and then explain how [EN19b] modifies the algorithm to improve the running time.

One important component of this algorithm is the modified Dijkstra's algorithm, which we also utilize in our dynamic algorithms, so we briefly review it. This algorithm was presented by Thorup and Zwick [TZ05], and it allows us to construct the bunches and clusters for level $i$ in $O((m+n\log n)/q_i)$ expected time. At a high level, this is done by a modification to Dijkstra's algorithm. In the original Dijkstra's algorithm, for each source $u \in A_i \setminus A_{i+1}$, at each iteration we consider an unvisited vertex $v$ and relax each incident edge $(v,z)$ by setting $d(u,z) := \min\{d(u,z),\, d(u,v)+w(v,z)\}$.
But in the modified algorithm this is done only if $d(u,v)+w(v,z) < d(z,A_{i+1})$. In other words, a node $z$ "participates" in a shortest-path exploration from a source $u$ only if $z \in B(u)$. Note that if $z \in B(u)$, all the nodes on the shortest path between $u$ and $z$ are considered. This lets us bound the running time by the expected size of $B(u)$.

A.1 Properties

Size. For each $u \in A_i \setminus A_{i+1}$ we have $E[|B(u)|] \le 1/q_i$. At a high level, this can be shown by ordering the vertices of $A_i$ by their distance to $u$, and noting that $|B(u)|$ is the number of vertices of $A_i$ visited before the first one that is included in $A_{i+1}$. This corresponds to a geometric random variable with parameter $q_i$, and thus the expectation is $1/q_i = n^{2^i\nu}$. We can use this to show that, over all $i$, the expected number of edges added is $\sum_{i=0}^{k-1} E[|A_i|] \cdot n^{2^i\nu} = O(kn^{1+\nu})$.

Modified Dijkstra's algorithm. Based on an algorithm in [TZ05] (and used in [EN19b]), the bunches for level $i$ can be constructed in $O((m+n\log n)/q_i)$ time, using the modification to Dijkstra's algorithm described above: an edge $(v,z)$ is relaxed only if $d(u,v)+w(v,z) < d(z,A_{i+1})$, so each node $z$ participates in an exploration from $u$ only if $z \in B(u)$. Since $E[|B(u)|] \le 1/q_i$, this allows us to bound the running time by $O(mn^{\rho})$.

A.2 Hopbound and Stretch

In this section we sketch the analysis of the hopbound and stretch of a simple static hopset algorithm that does not bound the sampling probabilities by $n^{-\rho}$. This leads to an (almost) optimal size–hopbound tradeoff, but has a larger construction time.
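The modified Dijkstra exploration reviewed above can be sketched as follows. This is an illustrative sketch: `cluster_dijkstra` and the adjacency-dict representation are our own, and `d_to_next_level[z]` stands for $d(z, A_{i+1})$.

```python
import heapq
import math

def cluster_dijkstra(adj, u, d_to_next_level):
    """Dijkstra from a center u in which an edge (v, z) is relaxed only if
    d(u,v) + w(v,z) < d(z, A_{i+1}).  Consequently only the nodes of the
    cluster C(u) = {z : d(u,z) < d(z, A_{i+1})} are ever visited, which
    bounds the work by the cluster size rather than by n."""
    dist = {u: 0}
    pq = [(0, u)]
    while pq:
        dv, v = heapq.heappop(pq)
        if dv > dist.get(v, math.inf):
            continue  # stale queue entry
        for z, w in adj[v]:
            nd = dv + w
            if nd < d_to_next_level.get(z, math.inf) and nd < dist.get(z, math.inf):
                dist[z] = nd
                heapq.heappush(pq, (nd, z))
    return dist  # the keys form the cluster C(u)
```

On the unit-weight path $0$–$1$–$2$ with $d(1,A_{i+1})=2$ and $d(2,A_{i+1})=1$, the exploration from center $0$ reaches node $1$ but is cut off before node $2$, exactly as the pruning rule dictates.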
The extension to the more efficient variant is straightforward. The following lemma was proved by [EN19b, HP19]; we give a proof sketch here. We use a similar idea in our dynamic hopset construction (in combination with the monotone ES tree and scaling), and hence some of the missing details can be found in the proof of Theorem 12.

Lemma 21. Fix $0 < \delta \le 1/(8k)$, and consider a pair $x,y \in V$. Then for $0 \le i \le k+1$ one of the following conditions holds:
• $d^{((3/\delta)^i)}_{G\cup H}(x,y) \le (1+8\delta i)\, d_G(x,y)$; or
• there exists $z \in A_{i+1}$ such that $d^{((3/\delta)^i)}_{G\cup H}(x,z) \le 2\, d_G(x,y)$.

Proof sketch. This can be shown by induction on $i$. For the base case $i=0$ we have three cases. If $y \in B(x)$, then the edge $(x,y)$ is in the hopset, and the first condition of the lemma holds. Otherwise, if $x \in A_1$, then $z = x$ trivially satisfies the second condition. Otherwise we have $x \in A_0 \setminus A_1$, and by setting $z = p(x)$ we know that there is an edge $(x,z) \in H$ with $d(x,z) \le d(x,y)$ by the definition of $p(x)$, and hence the second condition holds.

Now assume the claim holds for $i$. Consider the shortest path $\pi(x,y)$ between $x$ and $y$. We divide this path into $1/\delta$ segments of length roughly $\delta d_G(x,y)$ (up to rounding). Using the triangle inequality on the segments, we apply the induction hypothesis to each segment. If the first condition holds for all segments, then there is a path of $(3/\delta)^{i+1}$ hops consisting of the hop-bounded paths on the segments, and we can show that this path satisfies the first condition for $i+1$. Now assume that there are at least two segments for which the first condition does not hold. Let $[u_\ell, v_\ell]$ be the first such segment (i.e., the closest to $x$) and let $[u_r, v_r]$ be the last such segment. Then by the induction hypothesis there are $z_\ell, z_r \in A_{i+1}$ such that:
• $d^{((3/\delta)^i)}_{G\cup H}(u_\ell, z_\ell) \le 2\, d(u_\ell, v_\ell)$, and
• $d^{((3/\delta)^i)}_{G\cup H}(v_r, z_r) \le 2\, d(u_r, v_r)$.
Again, we consider two cases.
First, in case $z_r \in B(z_\ell)$, we have added a single hopset edge between $z_r$ and $z_\ell$ with weight $d(z_r,z_\ell)$. By applying the induction hypothesis to the segments before $[u_\ell,v_\ell]$ and after $[u_r,v_r]$, we have a path with at most $(3/\delta)^i$ hops for each of these segments, satisfying the first condition for the endpoints of the segment. Also, we have a $(2(3/\delta)^i+1)$-hop path going through $u_\ell, z_\ell, z_r, v_r$ that satisfies the first condition for $u_\ell, v_r$. Putting all of this together, we can show that there is a path of hopbound $(3/\delta)^{i+1}$ satisfying the first condition. To get this, we need to use the fact that the length of each segment is at most $\delta \cdot d(x,y)$. We have
$$
d^{((3/\delta)^{i+1})}_{G\cup H}(x,y) \le \sum_{j=1}^{\ell-1}\Big[d^{((3/\delta)^i)}_{G\cup H}(u_j,v_j) + d^{(1)}_G(v_j,u_{j+1})\Big] + d^{((3/\delta)^i)}_{G\cup H}(u_\ell,z_\ell) + d^{(1)}_H(z_\ell,z_r) + d^{((3/\delta)^i)}_{G\cup H}(z_r,v_r) + d^{(1)}_G(v_r,u_{r+1}) + \sum_{j=r+1}^{1/\delta}\Big[d^{((3/\delta)^i)}_{G\cup H}(u_j,v_j) + d^{(1)}_G(v_j,u_{j+1})\Big] \le 8\delta\, d_G(x,y) + (1+8\delta i)\, d_G(x,y) \le (1+8\delta(i+1))\, d_G(x,y).
$$
Finally, consider the case where $z_r \notin B(z_\ell)$. If $z_\ell \notin A_{i+2}$, we consider $z = p(z_\ell)$. By definition we have added the edge $(z_\ell,z)$ to the hopset, and we can show that the second condition holds. We use similar reasoning as before, together with the fact that $\delta \le 1/(8k)$, to show that item 2 holds in this case. The only remaining case is when $z_\ell \in A_{i+2}$, where a similar but simpler argument follows by setting $z = z_\ell$.

We can now set $\delta = \Theta(\epsilon/k)$ in Lemma 21, and since $A_k = \emptyset$, for $i = k-1$ only the first condition can hold. Therefore we get a hopbound of $\beta = \Theta(k/\epsilon)^k$.

Hopbound in the efficient variant. For the more efficient construction, we consider a two-phase algorithm. In the first phase we use reasoning similar to Lemma 21, but in the second phase the parameters change. The algorithm requires more iterations, which impacts the overall hopbound.
We require $k+1/\rho+1$ iterations overall. In the second phase we have $\delta' = \Theta(\epsilon/(k+1/\rho))$, and thus we get an overall hopbound of $O(\frac{k+1/\rho}{\epsilon})^{k+1/\rho+1}$. Putting everything together, we have the following guarantees for the static hopset:

Theorem 22 ([EN19b]). There is an algorithm that, given a weighted and undirected graph $G=(V,E)$ and parameters $2 \le k \le \log\log n - 1$ and $1/(2^k-1) < \rho < 1/2$, computes a $(\beta,\epsilon)$-hopset of size $O(n^{1+1/(2^k-1)})$, where $\beta = O(\frac{k+1/\rho}{\epsilon})^{k+1/\rho+1}$. It runs in $O(\frac{n^\rho}{\rho}(m+n\log n))$ expected time.

B Details Omitted from Section 3.2

In this section we explain the restricted hopset algorithm, which is mainly based on the algorithm of [RZ04], in more detail. We sample sets $V = A_0 \supseteq A_1 \supseteq \ldots \supseteq A_{i(\rho)} = \emptyset$, where $i(\rho) = k+1/\rho+1$, once; they remain the same during the updates. Next, we need to maintain the values $d(v,A_i)$, $0 \le i \le k-1$, for all nodes $v \in V$. This can be performed by computing a shortest path tree rooted at a dummy node $s_i$ connected to all nodes in $A_i$. Let $\hat{d}$ denote the depth up to which distances are maintained. We can use the Even–Shiloach [SE81] algorithm up to depth $\hat{d}$ to compute all these distances in $O(\hat{d}m)$ total time. The pivots $p(v)$, for all $v \in V$, can also be maintained in this process.

Maintaining the clusters. Recall that for $z \in A_i \setminus A_{i+1}$ we have $v \in C(z)$ if and only if $d(z,v) < d(v,A_{i+1})$. After each deletion, for each node $v$ and each cluster center $z$, we first check whether the distance $d(z,v)$ has increased. If $d(z,v) \ge d(v,A_{i+1})$, then $v$ is removed from $C(z)$. The more subtle part is adding nodes to new clusters. For each $0 \le i < k$, we define a set $X_i$ consisting of all vertices whose distance to $A_i$ has increased as a result of a deletion, but for which this distance is still at most $\hat{d}$. The sets $X_i$ can be computed while maintaining $d(v,A_i)$. Note that a node $v$ can join $C(w)$ only after an increase in $d(v,A_{i+1})$.
Using this observation, after each deletion, for every $v \in X_{i+1}$, $z \in B_i(u) \setminus B_i(v)$, and each edge $(u,v) \in E$, we check whether $d(z,u)+w(u,v) < d(v,A_{i+1})$. If so, then $v$ joins $C(z)$: we push $v$ into a priority queue $Q(z)$ with key $d(z,u)+w(u,v)$. If $v$ was already in the queue, the key is updated if this distance is smaller than the existing estimate. In this case we mark $v$. The marked nodes join the cluster of $z$, but there may be other nodes that also need to join $C(z)$ as a result of this change. Hence, after this initial phase, for each $z \in A_i \setminus A_{i+1}$ with $Q(z) \ne \emptyset$, we run the modified Dijkstra's algorithm. Recall that in the modified Dijkstra's algorithm, when we explore the neighbors of a node $x$, we only relax an edge $(x,y)$ if $d(z,x)+w(x,y) < d(y,A_{i+1})$. Roditty and Zwick [RZ04] show that this process correctly maintains the clusters. We then repeat this process for all the $k+1/\rho+1$ iterations. We add a hopset edge between each $z \in A_i \setminus A_{i+1}$ and all nodes $v \in C(z)$, and set the weight of this edge to $w(v,z) = d_G(v,z)$.

B.1 Proof of Lemma 9

Lemma 23. For every $v \in V$ and $0 \le i \le k-1$, the expected total number of times the edges incident to $v$ are scanned, over all trees rooted at centers $w \in A_i$ (i.e., trees on $C(w)$), is $O(\hat{d}/q_i)$, where $q_i$ is the sub-sampling probability.

Proof. Let $w \in A_i \setminus A_{i+1}$. The edges of a node $v \in V$ are scanned when $v$ joins $C(w)$, and any time $d(v,w)$ increases, until $v$ leaves $C(w)$. We start by analyzing the total cost of joining new clusters. Recall that $C(w) = \{v \in V : d(v,w) < d(v,A_{i+1})\}$. Since we are in a decremental setting, $v$ can join $C(w)$ only when $d(v,A_{i+1})$ increases, and this can happen at most $\hat{d}$ times per tree. As in the static setting, at any fixed time $v$ belongs to $\tilde{O}(1/q_i)$ trees in expectation, since the number of clusters $v$ belongs to is dominated by a geometric random variable with parameter $q_i$.
We use a similar argument to analyze the total number of clusters each node belongs to over time. Hence the total time for nodes joining new clusters is $\tilde{O}(\hat{d}m/q_i)$. Next, we consider the case when, after a deletion, the distance between $v$ and the center increases. This lets us bound the number of times the edges incident to $v$ are scanned for a tree rooted at some node in $A_i$. Let $d_t(w,v)$ denote the distance between $v$ and $w$ at time $t$ (after $t$ deletions), and let $C_t(w)$ denote the cluster rooted at $w$ at time $t$. We bound the number of indices $t$ for which $v \in C_t(w)$ and $d_t(w,v) < d_{t+1}(w,v)$. Let $w_{t,1}, w_{t,2}, \ldots$ be the sequence of nodes in $A_i$ sorted by their distance from $v$ at time $t$. Ties are broken by ordering on the pairs $(d_t(v,w), d_{t+1}(v,w))$, i.e., nodes with the same distance from $v$ at time $t$ are sorted by their distance at time $t+1$. This ensures that if $d_t(v,w_{t,j}) < d_{t+1}(v,w_{t,j})$, then $d_t(v,w_{t,j}) < d_{t+1}(v,w_{t+1,j})$. As before, $\Pr[v \in C_t(w_{t,j})] \le (1-q_i)^{j-1}$, since $v \in C_t(w_{t,j})$ only if for all $j' < j$ we have $w_{t,j'} \in A_i \setminus A_{i+1}$. Let $I = \{(t,j) \mid d_t(v,w_{t,j}) < d_{t+1}(v,w_{t,j}) \le \hat{d}\}$. Since the edges incident to $v$ are scanned only if their distance increases, the expected number of times they are scanned over all trees rooted at centers in $A_i$ is at most $\sum_{(t,j)\in I} \Pr[v \in C_t(w_{t,j})]$. Also, by definition, for a fixed $j$ there can be at most $\hat{d}$ pairs of the form $(t,j)$; in other words, the distance to the $j$-th closest vertex can increase at most $\hat{d}$ times. Hence,
$$\sum_{(t,j)\in I} \Pr[v \in C_t(w_{t,j})] \le \hat{d} \sum_{j \ge 1} (1-q_i)^{j-1} \le \hat{d}/q_i.$$
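The geometric-domination step used above (both for $E[|B(u)|]$ and for the sum over the pairs $(t,j)$) can be sanity-checked numerically. The following sketch is our own illustration, not part of the algorithm: it walks the centers in increasing distance from a node and counts how many clusters contain it.

```python
import random

def clusters_containing(q, num_centers):
    """Walk the centers of A_i in increasing distance from a node v.  The
    node v lies in C(w_j) only if none of the j-1 closer centers was
    promoted to A_{i+1}; each center is promoted independently with
    probability q, so the count is dominated by a geometric random
    variable with mean 1/q."""
    cnt = 0
    for _ in range(num_centers):
        cnt += 1
        if random.random() < q:  # the first promoted center ends the walk
            break
    return cnt

random.seed(1)
q = 0.25
trials = 20000
avg = sum(clusters_containing(q, 1000) for _ in range(trials)) / trials
```

With $q = 0.25$ the empirical average is close to $1/q = 4$, matching the bound $\hat{d}\sum_{j\ge 1}(1-q_i)^{j-1} \le \hat{d}/q_i$ with $\hat{d}=1$.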