The Randomized Competitive Ratio of Weighted k-server is at least Exponential
Nikhil Ayyadevara
IIT Delhi
[email protected]

Ashish Chiplunkar
IIT Delhi
[email protected]
Abstract
The weighted $k$-server problem is a natural generalization of the $k$-server problem, in which the cost incurred in moving a server is the distance traveled times the weight of the server. Even after almost three decades since the seminal work of Fiat and Ricklin (1994), the competitive ratio of this problem remains poorly understood, even on the simplest class of metric spaces: the uniform metric spaces. In particular, in the case of randomized algorithms against the oblivious adversary, neither a better upper bound than the doubly exponential deterministic upper bound, nor a better lower bound than the logarithmic lower bound of unweighted $k$-server, is known. In this article, we make significant progress towards understanding the randomized competitive ratio of weighted $k$-server on uniform metrics. We cut down the triply exponential gap between the upper and lower bound to a singly exponential gap by proving that the competitive ratio is at least exponential in $k$, substantially improving on the previously known lower bound of about $\ln k$.

Introduction

The $k$-server problem of Manasse, McGeoch, and Sleator [11] is one of the cleanest, simplest-looking, and yet most profound problems in online computation, and has been actively studied for over three decades. The $k$-server problem concerns deciding the movements of $k$ mobile servers on an underlying metric space to serve a sequence of online requests. Each request is issued at some point of the metric space, and to serve such a request, a server must move to the requested point (unless a server is already present there). The cost incurred in the movement of a server is equal to the distance through which the server moves, and the goal is to minimize the total cost.

Since an online algorithm is required to base its decisions only on past inputs, it cannot, in general, output the optimal solution.
An online algorithm for a minimization problem is said to be $c$-competitive if, on any instance, it produces a solution whose (expected) cost is at most $c$ times the cost of the optimum solution. The competitive ratio of an algorithm is the minimum (technically, the infimum) $c$ such that the algorithm is $c$-competitive. The deterministic (resp. randomized) competitive ratio of an online minimization problem is the minimum (technically, the infimum) $c$ for which a $c$-competitive deterministic (resp. randomized) algorithm exists. Note that, unless otherwise specified, we assume that in the case of randomized algorithms the adversarial input is oblivious, that is, constructed with knowledge of the algorithm but without knowledge of the random choices it makes.

In their seminal work, Manasse, McGeoch, and Sleator [11] proved that the deterministic competitive ratio of the $k$-server problem is at least $k$ on every metric with more than $k$ points. They conjectured that the deterministic competitive ratio is, in fact, equal to $k$ on any metric. This conjecture is popularly called the (deterministic) $k$-server conjecture, and it remains unresolved to date. The deterministic algorithm with the best known competitive ratio of $2k-1$ is the work function algorithm, analyzed by Koutsoupias and Papadimitriou [9]; randomized algorithms with polylogarithmic competitive ratios were given by Bubeck et al. [4] and Lee [10]. The $k$-server problem on uniform metric spaces is particularly interesting because it is equivalent to the paging problem. In this case, several deterministic algorithms, including Least-Recently-Used (LRU) and First-In-First-Out (FIFO), are known to be $k$-competitive. The randomized competitive ratio is known to be exactly $H(k) = \sum_{i=1}^{k} 1/i \approx \ln k$, where the lower bound is due to Fiat et al. [7] and the upper bound is due to [12, 1].

The weighted $k$-server problem is a natural generalization of the $k$-server problem where the objective is to minimize the weighted sum of the movements of the servers. Specifically, the $k$ servers have weights $\beta_1 \le \cdots \le \beta_k$, and the cost of moving the $i$'th server is $\beta_i$ times the distance through which it moves.
It is easy to see that a $c$-competitive $k$-server algorithm has competitive ratio at most $c\beta_k/\beta_1$ for the weighted $k$-server problem, and therefore, the challenge is to design an algorithm with competitive ratio independent of the servers' weights. Surprisingly, this simple-looking introduction of weights into the $k$-server problem makes it incredibly difficult, and a competitive algorithm on general metrics is known only for $k \le 2$ [13] (the $k = 1$ case is trivial).

Weighted k-server on Uniform Metrics

Owing to the difficulty of the weighted $k$-server problem on general metrics, the problem becomes particularly interesting on uniform metrics. The weighted $k$-server problem on uniform metric spaces models the paging problem where the cost of page replacement is determined by the location where the replacement takes place. The seminal paper of Fiat and Ricklin [8] gave a deterministic algorithm for weighted $k$-server on uniform metrics whose competitive ratio is doubly exponential in $k$; the bound can be improved slightly using the result of Bansal et al. [3] for a more general problem. The fact that the deterministic competitive ratio is indeed doubly exponential in $k$ was established only recently by Bansal et al. [2], who proved a lower bound of $2^{2^{k-4}}$, improving the previously known lower bound of $(k+1)!/2$.

The only known randomized algorithm for the weighted $k$-server problem on uniform metrics is by Chiplunkar and Vishwanathan [5], and it also achieves a doubly exponential competitive ratio of about $c^{2^k}$ for a constant $c$. It is a randomized memoryless algorithm generalizing the algorithm of Chrobak and Sgall [6] for $k = 2$, and it achieves this competitive ratio against a stronger adversary called an adaptive online adversary. Chiplunkar and Vishwanathan also proved that no randomized memoryless algorithm can achieve a better competitive ratio against adaptive online adversaries. However, even when an algorithm is allowed to use both memory and randomness, and the adversary is oblivious, no better upper bound is known.
More embarrassingly, no better lower bound than the logarithmic lower bound of $k$-server on uniform metrics is known, thus leaving a triply exponential gap between the upper and lower bounds.

In this paper, we cut down this triply exponential gap by a doubly exponential improvement in the lower bound. We prove:

Theorem 1.
The competitive ratio of any randomized algorithm for weighted $k$-server on uniform metrics is at least exponential in $k$, even when the algorithm is allowed to use memory and the adversary is oblivious.

Due to our result, we now have only an exponential gap between the upper and lower bounds on the randomized competitive ratio of weighted $k$-server on uniform metrics.

Our proof of the randomized lower bound for weighted $k$-server is largely inspired by the proof of the deterministic lower bound by Bansal et al. [2], and we reuse several of their ideas. At a high level, both proofs give adversaries which run recursively defined strategies. Both proofs crucially rely on identically defined set systems $\mathcal{Q}$ satisfying certain properties (see Lemma 1). Both proofs use the same idea for deciding the adversary's server movements. However, our proof differs from the one by Bansal et al. in the following aspects.

1. The adversary in the deterministic lower bound proof is able to carefully pick from $\mathcal{Q}$ a set of points that does not contain points covered by the algorithm's heavier servers, and run its strategy on that set. In contrast, our adversary is oblivious and is unable to see the positions of the algorithm's servers. Therefore, it merely picks a random set from $\mathcal{Q}$ and hopes that none of the points in that set is covered by the algorithm's heavier servers.

2. The strategy of Bansal et al. to defeat deterministic algorithms ensures that whenever an adversary's server other than the heaviest moves, it is accompanied by an eventual movement of a heavier server of the algorithm. Therefore, assuming that the weights of the servers are separated by a sufficiently large constant independent of $k$, they are able to ignore the movements of all of the adversary's servers except the heaviest. On the other hand, we are unable to charge the movement of an adversary's server to the movement of an algorithm's heavier server.
As a result, we need to carefully track the contributions of all $k$ servers towards the adversary's cost and ensure that their weights are separated by factors dependent on $k$.

Let the weights of the $k$ servers be $1, \beta, \beta^2, \ldots, \beta^{k-1}$ for some large integer $\beta$. Define the sequence $n_0, n_1, \ldots$ inductively as follows: $n_0 = 1$, and for $i > 0$,

$$n_i = \left( \left\lceil \frac{n_{i-1}}{2} \right\rceil + 1 \right) \cdot \left( \left\lfloor \frac{n_{i-1}}{2} \right\rfloor + 1 \right).$$

Observe that $n_k$ grows doubly exponentially with respect to $k$. Let $H$ denote the harmonic function, that is, $H(n) = \sum_{i=1}^{n} 1/i$. It is known that $H(n) \ge \ln n$. We will establish Theorem 1 by proving the following bound.

Theorem 2.
The randomized competitive ratio of weighted $k$-server on uniform metric spaces is at least $H(n_{k-1})$.

We use the following version of Yao's principle to prove the above bound.
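To get a feel for the magnitude of this bound: the sequence $n_i$ roughly squares at every step, so $H(n_{k-1}) \approx \ln n_{k-1}$ is exponential in $k$. A short Python sketch (the helper names are ours, not from the paper) computing both quantities:

```python
import math

def n_seq(k):
    """Return [n_0, ..., n_k] where n_0 = 1 and
    n_i = (ceil(n_{i-1}/2) + 1) * (floor(n_{i-1}/2) + 1)."""
    ns = [1]
    for _ in range(k):
        prev = ns[-1]
        ns.append((math.ceil(prev / 2) + 1) * (prev // 2 + 1))
    return ns

def harmonic(n):
    """H(n) = sum_{i=1}^n 1/i, which is at least ln(n)."""
    return sum(1.0 / i for i in range(1, n + 1))

ns = n_seq(6)
print(ns)  # [1, 2, 4, 9, 30, 256, 16641] -- roughly squares at each step
print(harmonic(ns[5]) >= math.log(ns[5]))  # True: H(n) >= ln n
```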
Fact 1 (Yao's principle). Suppose there exists a distribution $\mathcal{D}$ on the inputs of an online minimization problem such that for every deterministic online algorithm $A$ we have $\mathbb{E}_{I \sim \mathcal{D}}[A(I)] - \alpha \cdot \mathbb{E}_{I \sim \mathcal{D}}[\mathrm{OPT}(I)] > 0$, where $A(I)$ is the cost of the algorithm's solution and $\mathrm{OPT}(I)$ is the cost of an optimal solution to instance $I$. Then the problem does not have an $\alpha$-competitive randomized online algorithm.

Thus, in order to prove Theorem 2, our task is to exhibit a distribution on inputs of weighted $k$-server on a uniform metric space such that the expected cost of any deterministic online algorithm is greater than $H(n_{k-1})$ times the expectation of the optimum cost.

In our proof, we use a set-theoretic result with a constructive proof given by Bansal et al. [2], used crucially in their deterministic lower bound. We reproduce its proof in the Appendix for completeness. The result is as follows.

Lemma 1.
Let $\ell > 0$ and let $P$ be a set of $n_\ell$ points. There exists a set system $\mathcal{Q} \subseteq 2^P$ satisfying the following properties.

1. $\mathcal{Q}$ contains $\lceil n_{\ell-1}/2 \rceil + 1$ sets, each of size $n_{\ell-1}$.
2. For every $p \in P$, there exists a set in $\mathcal{Q}$ not containing $p$.
3. For every $p \in P$, there exists a $q \in P$ such that every set in $\mathcal{Q}$ contains at least one of $p$ and $q$.

Adversary's Strategy and Analysis
Our adversarial input distribution is generated by the procedure adversary which uses a recursive procedure strategy , an oblivious version of its counterpart in Bansal et al. [2]. These procedures are defined as follows.
Procedure 1: adversary($k$)
    Take a uniform metric space on a set $P$ of $n_{k-1} + 1$ unmarked points;
    while true do
        Pick $p$ uniformly at random from $P$ and mark $p$;
        if not all points are marked then
            Call strategy($k-1$, $P \setminus \{p\}$);
        else
            Break;
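The while loop above is a coupon-collector experiment: with $n$ points, the expected number of samples until every point is marked is $n \cdot H(n)$. A minimal Monte Carlo sketch of this fact (illustrative code, not part of the construction):

```python
import math
import random

def coupon_collector_rounds(n, rng):
    """Sample uniformly from n points until each one has been picked
    at least once; return the number of samples drawn."""
    marked, rounds = set(), 0
    while len(marked) < n:
        marked.add(rng.randrange(n))
        rounds += 1
    return rounds

rng = random.Random(0)  # fixed seed for reproducibility
n, trials = 10, 20000
mean = sum(coupon_collector_rounds(n, rng) for _ in range(trials)) / trials
expected = n * sum(1.0 / i for i in range(1, n + 1))  # n * H(n), about 29.29
print(abs(mean - expected) < 1.0)  # empirical mean agrees with n * H(n)
```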
Procedure 2: strategy($\ell$, $P$)    (Promise: $|P| = n_\ell$)
    if $\ell = 0$ (and therefore, $|P| = n_0 = 1$) then
        Request the unique point in $P$;
    else
        Construct the set system $\mathcal{Q} \subseteq 2^P$ using Lemma 1;
        repeat $\beta \cdot (\lceil n_{\ell-1}/2 \rceil + 1)$ times
            Pick a set $P'$ uniformly at random from $\mathcal{Q}$ (independently of all previous random choices);
            Call strategy($\ell-1$, $P'$);

Procedure strategy gets as input a non-negative integer $\ell$ and a set $P$ of $n_\ell$ points. In the base case, where $\ell = 0$, the procedure issues a request to the unique point in $P$. In the inductive case, where $\ell >$
0, the procedure constructs a set system $\mathcal{Q}$ with the properties stated in Lemma 1. It then repeatedly makes recursive calls, passing $\ell - 1$ in place of $\ell$, on sets chosen uniformly at random from $\mathcal{Q}$. Recall that these sets have size $n_{\ell-1}$, as required. Procedure adversary takes a uniform metric space on $n_{k-1} + 1$ points and runs a coupon-collector-like random experiment: repeatedly pick a point $p$ uniformly at random, mark it, and if there are points yet to be marked, call the procedure strategy on the set of points other than $p$. By a standard coupon-collector argument, the expected number of points sampled before all points in $P$ get marked is $(n_{k-1}+1) \cdot H(n_{k-1}+1)$, where $H$ is the harmonic function. The number of strategy($k-1$, $P \setminus \{p\}$) calls is one less than this. Thus,

Observation 1.
The expected number of strategy($k-1$, $P \setminus \{p\}$) calls made by adversary($k$) is $(n_{k-1}+1) \cdot H(n_{k-1}+1) - 1$.

Let us first bound from below the expected cost of the algorithm in one run of the procedure adversary. This bound will be independent of the initial positions of the algorithm's servers. For this, let us first bound the algorithm's expected cost in serving requests from one execution of procedure strategy($\ell$, $P$).

Lemma 2.
Suppose that none of the algorithm's servers, except possibly the $\ell$ lightest ones, occupy points in $P$ at the time strategy($\ell$, $P$) is called. Then the expected cost of the algorithm in serving the requests given in this call is at least $\beta^\ell$.

Proof. We prove the claim by induction on $\ell$. For the base case, suppose $\ell = 0$. Then $|P| = 1$, and we are assured that none of the algorithm's servers occupy the point in $P$. Therefore, to serve the one request given by strategy($0$, $P$), the algorithm must move one of its servers and incur cost at least 1.

For the inductive case, suppose $\ell >$
0. We are assured that, except for the lightest $\ell$ servers, none of the servers of the algorithm occupy points in $P$. If the algorithm moves some server other than the $\ell$ lightest servers to serve requests given by strategy($\ell$, $P$), then it incurs cost at least $\beta^\ell$, the weight of the $(\ell+1)$'th lightest server, as required. Therefore, we assume for the remainder of this proof that the algorithm only moves its $\ell$ lightest servers to serve requests given by strategy($\ell$, $P$).

The strategy($\ell$, $P$) call recursively calls strategy($\ell-1$, $P'$) repeatedly, where $P' \subseteq P$. Consider any one such call. Except for the lightest $\ell$ servers, all servers of the algorithm occupy points outside $P$, and none of them move. Therefore, they all occupy points outside $P'$. Next, consider the point $r$ occupied by the algorithm's $\ell$'th lightest server. While this could very well be in $P$ (and could change over time), even if $r \in P$, there exists a set in $\mathcal{Q}$ that doesn't contain $r$, by Lemma 1. Since $P'$ is a uniformly random set in $\mathcal{Q}$, we have

$$\Pr_{P' \sim \mathcal{Q}}[r \notin P'] \ge \frac{1}{|\mathcal{Q}|} = \frac{1}{\lceil n_{\ell-1}/2 \rceil + 1}.$$

Conditioned on the event $r \notin P'$, the set $P'$ doesn't contain any of the algorithm's servers except possibly the $\ell - 1$ lightest ones. By the induction hypothesis, conditioned on $r \notin P'$, the expected cost of the algorithm to serve requests given by the strategy($\ell-1$, $P'$) call is at least $\beta^{\ell-1}$. Since the strategy($\ell$, $P$) call issues $\beta \cdot (\lceil n_{\ell-1}/2 \rceil + 1)$ recursive calls, the expected cost of the algorithm in serving requests given by strategy($\ell$, $P$) is at least

$$\beta \cdot \left( \left\lceil \frac{n_{\ell-1}}{2} \right\rceil + 1 \right) \cdot \frac{1}{\lceil n_{\ell-1}/2 \rceil + 1} \cdot \beta^{\ell-1} = \beta^\ell,$$

as required.

Now we are ready to bound the algorithm's expected cost on the entire input.

Lemma 3.
The expected cost of the algorithm in serving requests given by an adversary($k$) call is at least $H(n_{k-1}) \cdot \beta^{k-1}$.

Proof. Consider any strategy($k-1$, $P \setminus \{p\}$) call, where $p$ is a uniformly random point in $P$. Let $r$ be the location of the algorithm's heaviest server just before this call. Then $\Pr[r \notin P \setminus \{p\}] = \Pr[p = r] = 1/|P|$. By Lemma 2, conditioned on $r \notin P \setminus \{p\}$, the expected cost of the algorithm in serving requests given by the strategy($k-1$, $P \setminus \{p\}$) call is at least $\beta^{k-1}$. The number of strategy($k-1$, $P \setminus \{p\}$) calls made by adversary($k$) is independent of the algorithm's cost in each such call, and its expectation is $(n_{k-1}+1) \cdot H(n_{k-1}+1) \,-$
$1$, by Observation 1. Thus, the expected cost of the algorithm in serving requests given by an adversary($k$) call is at least

$$\left[ (n_{k-1}+1) \cdot H(n_{k-1}+1) - 1 \right] \cdot \frac{1}{|P|} \cdot \beta^{k-1} = \left( H(n_{k-1}+1) - \frac{1}{n_{k-1}+1} \right) \cdot \beta^{k-1} = H(n_{k-1}) \cdot \beta^{k-1},$$

where we used the fact that $|P| = n_{k-1}+1$ in the first equality.

Let us now turn our attention towards the adversary's cost. We will show how the adversary, having the ability to see the future requests, can ensure that whenever strategy($\ell$, $P$) is called, it already has at least one server other than its $\ell$ lightest servers occupying a point in $P$. In contrast, recall that in Lemma 2 we relied on the algorithm not having any of its servers except the $\ell$ lightest ones occupying points in $P$ at the time strategy($\ell$, $P$) is called. Intuitively, the adversary is able to gain an advantage over the algorithm by having one server other than the $\ell$ lightest ones in $P$, whereas the algorithm has none.

Lemma 4.
Define the sequence $c_0, c_1, \ldots$ inductively as follows: $c_0 = 0$, and for $\ell > 0$,

$$c_\ell = \beta^{\ell-1} + \beta \cdot \left( \left\lceil \frac{n_{\ell-1}}{2} \right\rceil + 1 \right) \cdot c_{\ell-1}.$$

Suppose that the adversary has at least one server other than its $\ell$ lightest servers occupying some point in $P$ at the time strategy($\ell$, $P$) is called. Then the adversary is able to serve all requests given in this call with cost at most $c_\ell$.

Proof. We prove the claim by induction on $\ell$. For the base case, suppose $\ell = 0$. Then $|P| = 1$ and, by assumption, the adversary has at least one server at the unique point in $P$. Therefore, the adversary can serve the unique request given by strategy($0$, $P$) with cost $c_0 = 0$.

For the inductive case, suppose $\ell >$
0. We have assumed that the adversary has at least one server other than its lightest $\ell$ servers occupying some point $p \in P$. By the third property of the set system $\mathcal{Q}$ from Lemma 1, there exists a point $q \in P$ such that each set in $\mathcal{Q}$ contains at least one of $p$ and $q$. The adversary moves its $\ell$'th lightest server to such a point $q$, thus incurring cost $\beta^{\ell-1}$, the first term in the definition of $c_\ell$. As a result, both $p$ and $q$ become occupied by the adversary's servers other than the $\ell - 1$ lightest ones.

Consider any recursive call strategy($\ell-1$, $P'$) made by strategy($\ell$, $P$). The set $P' \in \mathcal{Q}$ on which this call is made contains at least one of $p$ and $q$, both of which are occupied by the adversary's servers other than the $\ell - 1$ lightest ones. By the induction hypothesis, the adversary can serve all requests given by strategy($\ell-1$, $P'$) with cost at most $c_{\ell-1}$. Since the number of such recursive calls is $\beta \cdot (\lceil n_{\ell-1}/2 \rceil + 1)$, the adversary can serve all requests made in these calls with cost at most $\beta \cdot (\lceil n_{\ell-1}/2 \rceil + 1) \cdot c_{\ell-1}$, the second term in the expression for $c_\ell$.

Now we are ready to bound the adversary's expected cost on the entire input.

Lemma 5.
The expected cost of the adversary in serving requests given by an adversary($k$) call is at most $\beta^{k-1} + \left[ (n_{k-1}+1) \cdot H(n_{k-1}+1) - 1 \right] \cdot c_{k-1}$.

Proof. Let $q$ be the last point to get marked. The adversary moves its heaviest server to $q$, thus incurring cost $\beta^{k-1}$. Note that all strategy($k-1$, $P \setminus \{p\}$) calls made by adversary($k$) are such that $p \neq q$, and therefore, $q \in P \setminus \{p\}$. Thus, the adversary's heaviest server occupies a point in $P'$ at the time each strategy($k-1$, $P'$) call is made. By Lemma 4, the adversary incurs cost at most $c_{k-1}$ in each such call. By Observation 1, the expected number of such calls is $(n_{k-1}+1) \cdot H(n_{k-1}+1) \,-$
$1$. Thus, we get the claimed upper bound.

We finally use Lemma 3 and Lemma 5 to arrive at the lower bound given by Theorem 2.
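The recurrence for $c_\ell$ from Lemma 4 unrolls to the closed form $c_\ell = \beta^{\ell-1} \cdot \sum_{i=1}^{\ell} \prod_{j=i}^{\ell-1} (\lceil n_j/2 \rceil + 1)$, which is the form used in the final calculation. A quick numerical cross-check of the two forms (our sketch, with arbitrary small parameters):

```python
import math

def n_seq(k):
    """n_0 = 1 and n_i = (ceil(n_{i-1}/2) + 1) * (floor(n_{i-1}/2) + 1)."""
    ns = [1]
    for _ in range(k):
        prev = ns[-1]
        ns.append((math.ceil(prev / 2) + 1) * (prev // 2 + 1))
    return ns

def c_recursive(ell, beta, ns):
    """c_0 = 0 and c_ell = beta^(ell-1) + beta * (ceil(n_{ell-1}/2) + 1) * c_{ell-1}."""
    if ell == 0:
        return 0
    return beta ** (ell - 1) + beta * (math.ceil(ns[ell - 1] / 2) + 1) * c_recursive(ell - 1, beta, ns)

def c_closed(ell, beta, ns):
    """c_ell = beta^(ell-1) * sum_{i=1}^{ell} prod_{j=i}^{ell-1} (ceil(n_j/2) + 1)."""
    total = 0
    for i in range(1, ell + 1):
        prod = 1
        for j in range(i, ell):
            prod *= math.ceil(ns[j] / 2) + 1
        total += prod
    return beta ** (ell - 1) * total

ns = n_seq(6)
# The unrolled closed form agrees with the recurrence for every ell.
assert all(c_recursive(ell, 5, ns) == c_closed(ell, 5, ns) for ell in range(7))
```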
Proof of Theorem 2.
Unrolling the recurrence in the statement of Lemma 4, we get

$$c_{k-1} = \beta^{k-2} \cdot \sum_{i=1}^{k-1} \prod_{j=i}^{k-2} \left( \left\lceil \frac{n_j}{2} \right\rceil + 1 \right),$$

and therefore, by Lemma 5, the adversary's cost in serving requests given by an adversary($k$) call is at most

$$\beta^{k-1} + \beta^{k-2} \cdot \left[ (n_{k-1}+1) \cdot H(n_{k-1}+1) - 1 \right] \cdot \sum_{i=1}^{k-1} \prod_{j=i}^{k-2} \left( \left\lceil \frac{n_j}{2} \right\rceil + 1 \right).$$

Let $\varepsilon$ be an arbitrarily small positive number. By choosing

$$\beta = \left\lceil \frac{1}{\varepsilon} \cdot \left[ (n_{k-1}+1) \cdot H(n_{k-1}+1) - 1 \right] \cdot \sum_{i=1}^{k-1} \prod_{j=i}^{k-2} \left( \left\lceil \frac{n_j}{2} \right\rceil + 1 \right) \right\rceil,$$

the adversary's cost is bounded from above by $\beta^{k-1} \cdot (1+\varepsilon)$. On the other hand, by Lemma 3, the algorithm's cost in serving requests given by an adversary($k$) call is at least $H(n_{k-1}) \cdot \beta^{k-1}$. Since $\varepsilon$ is arbitrarily small, we use Fact 1 to conclude that the competitive ratio of any randomized online algorithm for weighted $k$-server on uniform metrics is at least $H(n_{k-1})$.

Concluding Remarks

Given our lower bound on the randomized competitive ratio of weighted $k$-server on uniform metric spaces, the gap between the known upper and lower bounds has reduced from three orders of exponentiation to one. The natural question that needs to be investigated is to determine the randomized competitive ratio, or at least, to prove upper and lower bounds that match in the order of exponentiation.

Our result also sheds light on the randomized competitive ratio of a generalization of the weighted $k$-server problem on uniform metrics, called the generalized $k$-server problem on weighted uniform metrics. In this problem, $k$ servers are restricted to move in $k$ different uniform metric spaces that are scaled copies of one another. A request contains one point from each copy, and to serve it, one of the points must be covered by the server moving in its copy. Our lower bound directly applies to the generalized $k$-server problem on weighted uniform metrics and improves the previously known lower bound of $\Omega(k/\log k)$ by Bansal et al. [3] to exponential in $k$.
This also proves that the generalized $k$-server problem on weighted uniform metrics is qualitatively harder than its unweighted counterpart, the generalized $k$-server problem on uniform metrics, which has randomized competitive ratio $O(k \log k)$, also due to Bansal et al. [3]. (The $\Omega(k/\log k)$ bound, in fact, holds for the unweighted counterpart, and to the best of the authors' knowledge, no better bound for the weighted problem was known.)

References

[1] Dimitris Achlioptas, Marek Chrobak, and John Noga. Competitive analysis of randomized paging algorithms. Theor. Comput. Sci., 234(1-2):203-218, 2000.
[2] Nikhil Bansal, Marek Eliáš, and Grigorios Koumoutsos. Weighted k-server bounds via combinatorial dichotomies. In FOCS, pages 493-504, 2017.
[3] Nikhil Bansal, Marek Eliáš, Grigorios Koumoutsos, and Jesper Nederlof. Competitive algorithms for generalized k-server in uniform metrics. In SODA, pages 992-1001, 2018.
[4] Sébastien Bubeck, Michael B. Cohen, Yin Tat Lee, James R. Lee, and Aleksander Mądry. k-server via multiscale entropic regularization. In STOC, pages 3-16, 2018.
[5] Ashish Chiplunkar and Sundar Vishwanathan. Randomized memoryless algorithms for the weighted and the generalized k-server problems. ACM Trans. Algorithms, 16(1):14:1-14:28, 2020.
[6] Marek Chrobak and Jiří Sgall. The weighted 2-server problem. Theoretical Computer Science, 324(2-3):289-312, 2004.
[7] Amos Fiat, Richard M. Karp, Michael Luby, Lyle A. McGeoch, Daniel Dominic Sleator, and Neal E. Young. Competitive paging algorithms. J. Algorithms, 12(4):685-699, 1991.
[8] Amos Fiat and Moty Ricklin. Competitive algorithms for the weighted server problem. Theoretical Computer Science, 130(1):85-99, 1994.
[9] Elias Koutsoupias and Christos H. Papadimitriou. On the k-server conjecture. J. ACM, 42(5):971-983, 1995.
[10] James R. Lee. Fusible HSTs and the randomized k-server conjecture. In FOCS, pages 438-449, 2018.
[11] Mark S. Manasse, Lyle A. McGeoch, and Daniel Dominic Sleator. Competitive algorithms for on-line problems. In STOC, pages 322-333, 1988.
[12] Lyle A. McGeoch and Daniel Dominic Sleator. A strongly competitive randomized paging algorithm. Algorithmica, 6(6):816-825, 1991.
[13] René Sitters. The generalized work function algorithm is competitive for the generalized 2-server problem. SIAM J. Comput., 43(1):96-125, 2014.
A Set-system Construction
Proof of Lemma 1.
Construct the set system $\mathcal{Q}$ as follows. Recall that

$$|P| = n_\ell = \left( \left\lceil \frac{n_{\ell-1}}{2} \right\rceil + 1 \right) \cdot \left( \left\lfloor \frac{n_{\ell-1}}{2} \right\rfloor + 1 \right).$$

Let $M$ be an arbitrary subset of $P$ having size $\lceil n_{\ell-1}/2 \rceil + 1$, so that

$$|P \setminus M| = \left( \left\lceil \frac{n_{\ell-1}}{2} \right\rceil + 1 \right) \cdot \left\lfloor \frac{n_{\ell-1}}{2} \right\rfloor.$$

Partition $P \setminus M$ into $\lceil n_{\ell-1}/2 \rceil + 1$ sets of size $\lfloor n_{\ell-1}/2 \rfloor$ each, and for each $r \in M$, name a distinct set in the partition $Q_r$. Next, for each $r \in M$, define $P_r = (M \setminus \{r\}) \cup Q_r$, and let $\mathcal{Q} = \{P_r \mid r \in M\}$.

We now prove that $\mathcal{Q}$ indeed satisfies the required properties. First, $\mathcal{Q}$ contains $|M| = \lceil n_{\ell-1}/2 \rceil + 1$ sets, and the size of each set $P_r \in \mathcal{Q}$ is

$$|P_r| = |M| - 1 + |Q_r| = \left\lceil \frac{n_{\ell-1}}{2} \right\rceil + \left\lfloor \frac{n_{\ell-1}}{2} \right\rfloor = n_{\ell-1}.$$

For the second property, observe that a point $p \in M$ is not contained in the corresponding set $P_p \in \mathcal{Q}$, whereas for a point $p \in Q_r$, the only set in $\mathcal{Q}$ that contains $p$ is $P_r$ (and $\mathcal{Q}$ contains at least one other set). For the third property, if $p \in M$, define $q$ to be any other point in $M$, and if $p \in Q_r$, define $q = r$.
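The construction above is simple enough to implement and verify directly. A Python sketch (ours; the name build_Q is arbitrary) that builds $\mathcal{Q}$ over a ground set of $n_\ell$ integers and checks all three properties of Lemma 1:

```python
import math

def build_Q(n_prev):
    """Build the set system of Lemma 1 over P = {0, ..., n_ell - 1}, where
    n_ell = (ceil(n_prev/2) + 1) * (floor(n_prev/2) + 1)."""
    m = math.ceil(n_prev / 2) + 1  # |M|, which is also the number of sets in Q
    block = n_prev // 2            # size of each part Q_r
    P = list(range(m * (block + 1)))
    M, rest = P[:m], P[m:]         # |rest| = m * block
    Q = []
    for idx, r in enumerate(M):
        Q_r = rest[idx * block:(idx + 1) * block]  # the part named Q_r
        Q.append((set(M) - {r}) | set(Q_r))        # P_r = (M \ {r}) union Q_r
    return P, Q

P, Q = build_Q(9)  # with n_{l-1} = 9, the ground set has n_l = 30 points
assert len(Q) == math.ceil(9 / 2) + 1 and all(len(S) == 9 for S in Q)    # property 1
assert all(any(p not in S for S in Q) for p in P)                        # property 2
assert all(any(all(p in S or q in S for S in Q) for q in P) for p in P)  # property 3
```

The three assertions are exactly the properties invoked in Lemmas 2 and 4, so this also double-checks the arithmetic $\lceil n_{\ell-1}/2 \rceil + \lfloor n_{\ell-1}/2 \rfloor = n_{\ell-1}$.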