Fast Classical and Quantum Algorithms for Online k-server Problem on Trees
Ruslan Kapralov, Kamil Khadiev, Joshua Mokut, Yixin Shen, Maxim Yagafarov
Smart Quantum Technologies Ltd., Kazan, Russia
Kazan Federal University, Kazan, Russia
Université de Paris, CNRS, IRIF, F-75006 Paris, France
[email protected]
Abstract.
We consider online algorithms for the k-server problem on trees. Chrobak and Larmore proposed a k-competitive algorithm for this problem that has the optimal competitive ratio. However, a naive implementation of their algorithm has O(n) time complexity for processing each query, where n is the number of nodes in the tree. We propose a new time-efficient implementation of this algorithm that has O(n log n) time complexity for preprocessing and O(k² + k · log n) time for processing a query. We also propose a quantum algorithm for the case where the nodes of the tree are presented using string paths. In this case, no preprocessing is needed, and the time complexity for each query is O(k√n log n). When the number of queries is o(√n/(k log n)), we obtain a quantum speed-up on the total runtime compared to our classical algorithm.
Our algorithm builds on a result of independent interest: we give a quantum algorithm to find the first marked element in a collection of m objects that works even in the presence of two-sided bounded errors on the input oracle. It has worst-case complexity O(√m). In the particular case of one-sided errors on the input, it has expected time complexity O(√x), where x is the position of the first marked element.

Keywords: online algorithms, k-server problem, tree, time complexity, quantum computing, binary search
1 Introduction

Online optimization is a field of optimization theory that deals with optimization problems having no knowledge of the future [23]. An online algorithm reads an input piece by piece and returns an answer piece by piece immediately, even if the answer can depend on future pieces of the input. The goal is to return an answer that minimizes an objective function (the cost of the output). The most standard method to define the effectiveness of an online algorithm is the competitive ratio [27,20]. The competitive ratio is the approximation ratio achieved by the algorithm, that is, the worst-case ratio between the cost of the solution found by the algorithm and the cost of an optimal solution.

In the general setting, online algorithms have unlimited computational power. Nevertheless, many papers consider them under different restrictions. Some of them are restrictions on memory [6,16,10,21,2,5,19], others are restrictions on time complexity [15,26].

In this paper, we focus on efficient online algorithms in terms of time complexity. We consider the k-server problem on trees. Chrobak and Larmore [12] proposed a k-competitive algorithm for this problem that has the optimal competitive ratio. The existing implementation of their algorithm has O(n) time complexity for each query, where n is the number of nodes in the tree. For general graphs, there exists a time-efficient algorithm for the k-server problem [26] that uses min-cost-max-flow algorithms. However, it is too slow to apply it to the case of a tree. In the case of a tree, there exists an algorithm with time complexity O(n) for preprocessing and O(k(log n)²) for each query [22].

We propose a new time-efficient implementation of the algorithm from [12]. It has O(n log n) time complexity for preprocessing and O(k² + k log n) for processing a query. It is based on fast algorithms for computing the Lowest Common Ancestor (LCA) [9,7] and the binary lifting technique [8].
Compared to [22], the idea of our algorithm is simpler: it has less efficient preprocessing and more efficient processing of a query when k = o((log n)²).

We revisit the problem of finding the first marked element in a collection of m objects. It is well known that it can be solved in expected time O(√m) when given quantum oracle access to the input, and even in expected time O(√x), where x is the position of the first marked element [24, Theorem 10]. However, this algorithm has a small probability of taking time O(m) because of the properties of the Dürr-Høyer minimum finding algorithm [14], on which [24] is based. We improve upon the state of the art in two ways: we give a worst-case O(√m) time algorithm that works even in the presence of two-sided bounded errors in the input. We also provide an expected time O(√x) algorithm in the case where the input has one-sided errors only. Compared to the algorithm of [24], our algorithm has worst-case complexity O(√m). The technique that we propose is interesting by itself. It can also be used for boosting the success probability of binary search for a function with errors.

We also consider the k-server problem in the case where the description of the tree is given by a string path. The string path of a node in a rooted tree is a sequence of length h, where h is the height of the node, describing the path from the root to the node. It is possible to access a node by its path and to get the path for a node. Such a way of representing trees is useful, for example, as a path to a file in file systems. We leverage our classical algorithm for the k-server problem, and we improve a quantum search algorithm to obtain a quantum algorithm with O(k√n log n) running time for processing a query, without preprocessing. In the case of o(√n/(k log n)) queries, the total runtime of the quantum algorithm is smaller than that of the classical one.

The structure of the paper is the following.
Section 2 contains preliminaries. The classical algorithm is described in Section 3. Section 4 contains our improved quantum search algorithm. The quantum algorithm for the k-server problem is described in Section 5.

2 Preliminaries

An online minimization problem consists of a set I of inputs and a cost function. Each input I = (x_1, . . . , x_n) is a sequence of requests, where n = |I| is the length of the input. Furthermore, a set of feasible outputs (or solutions) O(I) is associated with each I; an output is a sequence of answers O = (y_1, . . . , y_n). The cost function assigns a positive real value cost(I, O) to I ∈ I and O ∈ O(I). An optimal solution for I ∈ I is O_opt(I) = argmin_{O ∈ O(I)} cost(I, O).

Let us define an online algorithm for this problem.
A deterministic online algorithm A computes the output sequence A(I) = (y_1, . . . , y_n) such that y_i is computed based on x_1, . . . , x_i. We say that A is c-competitive if there exists a constant α ≥ 0 such that, for any input I of size n, we have cost(I, A(I)) ≤ c · cost(I, O_opt(I)) + α. The minimal c that satisfies the previous condition is called the competitive ratio of A.

Let us consider a rooted tree G = (V, E), where V is the set of nodes (vertices) and E is the set of edges. Let n = |V| be the number of nodes, or equivalently the size of the tree. We denote by 1 the root of the tree. A path P is a sequence of nodes (v_0, . . . , v_h) that are connected by edges, i.e. (v_i, v_{i+1}) ∈ E for all i ∈ {0, . . . , h − 1}, such that there are no duplicates among v_0, . . . , v_h. Here h is the length of the path. The distance dist(v, u) between two nodes v and u is the length of the path between them. For each node v we can define a parent node Parent(v) such that dist(1, Parent(v)) + 1 = dist(1, v). Additionally, we can define the set of children Children(v) = {u : Parent(u) = v}.

Lowest Common Ancestor (LCA).
Given two nodes u and v of a rooted tree,the Lowest Common Ancestor is the node w such that w is an ancestor of both u and v , and w is the closest one to u and v among all such ancestors. Thefollowing result is well-known. Lemma 1 ([9,7]).
There is an algorithm for the LCA problem with the following properties:
– The time complexity of the preprocessing step is O(n).
– The time complexity of computing the LCA of two vertices is O(1).

We call
LCA_Preprocessing() the subroutine that does the preprocessing for the algorithm and LCA(u, v) the subroutine that computes the LCA of two nodes u and v.

Binary Lifting Technique.
This technique from [8] allows us to obtain the vertex v′ that is at distance z above a vertex v with O(log n) time complexity. There are two procedures:
– BL_Preprocessing() prepares the required data structures. The time complexity is O(n log n).
– MoveUp(v, z) returns the vertex v′ on the path from v to the root at distance dist(v′, v) = z. The time complexity is O(log n).
The technique is well documented in the literature. We present an implementation in Appendix A for completeness.

k-server Problem on Trees. Let G = (V, E) be a rooted tree, and suppose we are given k servers that can move among nodes of G. At each time slot, a query q ∈ V appears. We have to “serve” this query, that is, to choose one of the k servers and move it to q. The other servers are also allowed to move. The cost function is the distance by which we move the servers. In other words, if before the request the servers are at positions v_1, . . . , v_k and after the request they are at v′_1, . . . , v′_k, then q ∈ {v′_1, . . . , v′_k} and the cost of the move is Σ_{i=1}^{k} dist(v_i, v′_i). The problem is to design a strategy that minimizes the cost of serving a sequence of queries given online.

Quantum Query Model. We use the standard form of the quantum query model. Let f : D → {0, 1}, D ⊆ {0, 1}^m, be an m-variable function. We wish to compute f on an input x ∈ D. We are given oracle access to the input x, i.e. it is realized by a specific unitary transformation usually defined as |i⟩|z⟩|w⟩ → |i⟩|z + x_i (mod 2)⟩|w⟩, where the |i⟩ register indicates the index of the variable we are querying, |z⟩ is the output register, and |w⟩ is some auxiliary work-space. An algorithm in the query model consists of alternating applications of arbitrary unitaries independent of the input and the query unitary, and a measurement in the end.
The smallest number of queries used by an algorithm that outputs f(x) with probability ≥ 2/3 on all x is called the quantum query complexity of the function f and is denoted by Q(f). We refer the readers to [25,3,1] for more details on quantum computing.

In the quantum algorithms in this article, to avoid any ambiguity with queries from the k-server problem's definition, we refer to the quantum query complexity as the quantum time complexity. However, the two notions generally differ. For instance, in our algorithms, we use some modifications of Grover's search algorithm (see the next section), whose time complexity differs from its query complexity by a logarithmic factor.

Grover's Algorithm for Quantum Search.
Definition 1 (Search problem).
Suppose we have a set of objects named {1, 2, . . . , m}, of which some are targets. Suppose O is an oracle that identifies the targets. The goal of a search problem is to find a target i ∈ {1, 2, . . . , m} by making queries to the oracle O.
In search problems, one tries to minimize the number of queries to the oracle. In the classical setting, one needs O(m) queries to solve such a problem. Grover, on the other hand, constructed a quantum algorithm that solves the search problem with only O(√m) queries [17], provided that there is a unique target. When the number of targets is unknown, Brassard et al. designed a modified Grover algorithm that solves the search problem with O(√m) queries [11], which is of the same order as the query complexity of the Grover search.

3 k-server Problem on Trees with Preprocessing

We first describe Chrobak-Larmore's k-competitive algorithm for the k-server problem on trees from [12]. Assume that we have a query on a vertex q, and the servers are on the vertices v_1, . . . , v_k. We say that a server i is active if there are no other servers on the path from v_i to q. In each phase, we move every active server one step towards the vertex q. After each phase, the set of active servers can change. We repeat this phase (moving of the active servers) until one of the servers reaches the queried vertex q.

The naive implementation of this algorithm has time complexity O(n) for each query. First, we run a depth-first search with time labels [13], whose result allows us to check in constant time whether a vertex u is an ancestor of a vertex v. After that, we can move each active server towards the queried vertex, step by step. Together, all active servers cannot visit more than O(n) vertices.

In the following, we present an efficient implementation of Chrobak-Larmore's algorithm with preprocessing. The preprocessing part is done once and has O(n log n) time complexity (Theorem 1). The query processing part is done for each query and has O(k² + k · log n) time complexity (Theorem 2).

3.1 Preprocessing

We do the following steps for the preprocessing:
– We do the required preprocessing for the LCA algorithm, as discussed in Section 2.2.
– We do the required preprocessing for the binary lifting technique, as discussed in Section 2.2.
– Additionally, for each vertex v we compute the distance from the root to v, i.e. dist(1, v). This can be done using a depth-first search algorithm [13].
The algorithm for the preprocessing is the following (Algorithm 2).

Theorem 1.
Algorithm 2 for the preprocessing has time complexity O(n log n).

Proof. The time complexity of the preprocessing phase is O(n) for LCA, O(n log n) for the binary lifting technique, and O(n) for ComputeDistance(1). Therefore, the total time complexity is O(n log n).

Algorithm 1 ComputeDistance(u). Recursively computes the distance from the root.
  for v ∈ Children(u) do
    dist(1, v) ← dist(1, u) + 1
    ComputeDistance(v)
  end for

Algorithm 2
Preprocessing. The preprocessing procedure.
  LCA_Preprocessing()
  BL_Preprocessing()
  dist(1, 1) ← 0
  ComputeDistance(1)
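The preprocessing and the primitives it enables can be sketched in Python. This is an illustrative sketch, not the paper's code: vertex 0 plays the role of the paper's root 1, `parent[v]` is assumed to satisfy `parent[v] < v`, the binary-lifting table realizes MoveUp, and a lifting-based LCA stands in for the O(1) LCA structure of Lemma 1. The last function is the distance formula dist(v, q) = dist(1, v) + dist(1, q) − 2 · dist(1, LCA(v, q)) used during query processing.

```python
# Sketch of the preprocessing (Algorithm 2) and derived primitives.
# Assumptions: 0-indexed vertices, root 0, parent[root] = root,
# parent[v] < v so depths can be filled in one sweep. Illustrative only.

def preprocess(parent):
    n = len(parent)
    depth = [0] * n                       # depth[v] = dist(root, v)
    for v in range(1, n):
        depth[v] = depth[parent[v]] + 1
    up = [list(parent)]                   # up[j][v] = 2^j-th ancestor of v
    for j in range(1, max(1, n.bit_length())):
        up.append([up[j - 1][up[j - 1][v]] for v in range(n)])
    return depth, up

def move_up(up, v, z):
    """Ancestor of v at distance z, in O(log n) steps (MoveUp)."""
    j = 0
    while z:
        if z & 1:
            v = up[j][v]
        z >>= 1
        j += 1
    return v

def lca(depth, up, u, v):
    """LCA via lifting to equal depth, then a naive final walk (for brevity)."""
    if depth[u] > depth[v]:
        u = move_up(up, u, depth[u] - depth[v])
    else:
        v = move_up(up, v, depth[v] - depth[u])
    while u != v:
        u, v = up[0][u], up[0][v]
    return u

def dist(depth, up, v, q):
    """dist(v, q) = dist(1, v) + dist(1, q) - 2 * dist(1, LCA(v, q))."""
    return depth[v] + depth[q] - 2 * depth[lca(depth, up, v, q)]
```

For example, on the tree with edges 0-1, 0-2, 1-3, 1-4, the distance between the two leaves 3 and 4 is 2, through their LCA 1.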
3.2 Query Processing

Assume that we have a query on a vertex q, and the servers are on the vertices v_1, . . . , v_k. We do the following steps, implemented in Algorithms 3 and 5.

Step 1.
We sort all the servers by their distance to the node q. The distance dist(v, q) between a node v and the node q can be computed in the following way. Let l = LCA(v, q) be the lowest common ancestor of v and q; then dist(v, q) = dist(1, q) + dist(1, v) − 2 · dist(1, l). Using the preprocessing, this quantity can be computed in constant time. We denote by Sort(q, v_1, . . . , v_k) this sorting procedure. In the following steps we assume that dist(v_i, q) ≤ dist(v_{i+1}, q) for i ∈ {1, . . . , k − 1}.
The first server, on v_1, processes the query. We move it to the node q.
For i ∈ {2, . . . , k} we consider the server on v_i. It will become inactive when some other server with a smaller index arrives on the path between v_i and q. Section 3.3 contains the different cases that can happen and how to compute the distance d traveled by the server on v_i before it becomes inactive. We then move the i-th server d steps towards the query q. The new position of the i-th server is a vertex v′_i.
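For intuition, the naive phase-by-phase simulation of Chrobak-Larmore's rule, which Steps 1-3 accelerate, can be sketched as follows. This is an illustrative reference implementation with O(n) work per phase; `adj` is an adjacency list and all names are ours, not the paper's.

```python
from collections import deque

def tree_path(adj, s, t):
    """Vertices on the unique tree path from s to t, found by BFS."""
    par = {s: None}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in par:
                par[w] = u
                queue.append(w)
    p = [t]
    while p[-1] != s:
        p.append(par[p[-1]])
    return p[::-1]

def serve_query(adj, servers, q):
    """Run phases until a server reaches q; return the total cost."""
    cost = 0
    while q not in servers:
        moves = []
        for i, v in enumerate(servers):
            pv = tree_path(adj, v, q)
            # server i is active if no other server lies on its path to q
            if not any(u in pv[1:] for j, u in enumerate(servers) if j != i):
                moves.append((i, pv[1]))
        for i, step in moves:            # all active servers move together
            servers[i] = step
            cost += 1
    return cost
```

For instance, on the path 0-1-2-3-4 with servers at 0 and 4 and a query at 2, both servers are active in every phase until they meet at 2, for a total cost of 4.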
Query(q). Query procedure.
  Sort(q, v_1, . . . , v_k)
  v′_1 ← q
  for i ∈ {2, . . . , k} do
    d ← DistanceToInactive(q, i)    ⊲ see Algorithm 4
    v′_i ← Move(v_i, d)             ⊲ see Algorithm 5
  end for

3.3 Distance to inactive state

When processing a query, all servers except one will eventually become inactive. The crucial part of the optimization is to quickly compute when a server becomes inactive. For the purpose of computing this time, we claim that we can pretend that servers “never go inactive”. Formally, let q be a query, i be a server, and j another server with a smaller index. We know that i will become inactive because it is not the closest to the target. However, it is possible that this particular server j is not the one that will render i inactive. Nevertheless, we can pretend that j will never become inactive and compute the distance i will travel before going inactive because of j; call this distance d^q_{i,j} (the index i is fixed in this reasoning). We claim the following:

Lemma 2.
For any query q and server i > 1 (i.e. a server that will become inactive), the distance D^q_i travelled by i before it becomes inactive is equal to min_{j<i} d^q_{i,j}.

Algorithm 4
DistanceToInactive(q, i). Computes the distance travelled before going inactive.
  d ← ∞
  for j ∈ {1, . . . , i − 1} do
    t ← (do the case analysis described above)
    d ← min(d, dist(t, v_j))
  end for
  return d

Lemma 3. The time complexity of DistanceToInactive(q, i) is O(k).

Proof. Since a vertex u is an ancestor of v iff LCA(u, v) = u, we can check this condition in O(1) due to the results from Section 2.2. It follows that we can compute d^q_{i,j} for every i, j, q in O(1), and there are at most k other servers to consider.
The time complexity of the algorithm Move is O(log n).

Proof. The time complexity of MoveUp is O(log n) using the binary lifting technique from Section 2.2, and LCA is in O(1) by Section 2.2. Furthermore, we can compute the distance between any two nodes in O(1) thanks to the preprocessing. Therefore, the total complexity is O(log n).

[Figure: list of the various cases to consider when computing the distance before a server i is rendered inactive by a (closer to the query) server j; diagrams omitted.]

Algorithm 5
Move(v, z). Moves a server from v by distance z on the path from v to q.
  l ← LCA(v, q)
  if z ≤ dist(l, v) then
    Result ← MoveUp(v, z)
  end if
  if z > dist(l, v) then
    z ← z − dist(l, v)
    Result ← MoveUp(q, dist(l, q) − z)
  end if
  return Result
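Algorithm 5 can be sketched as runnable Python. Naive parent walks stand in here for the O(log n) binary-lifting MoveUp and the O(1) LCA; `parent` and `depth` are assumed precomputed with vertex 0 as the root (all names are illustrative, not the paper's).

```python
# Sketch of Algorithm 5 (Move): move a server z steps from v towards q.

def move_up(parent, v, z):
    for _ in range(z):
        v = parent[v]
    return v

def lca(parent, depth, u, v):
    while depth[u] > depth[v]:
        u = parent[u]
    while depth[v] > depth[u]:
        v = parent[v]
    while u != v:
        u, v = parent[u], parent[v]
    return u

def move(parent, depth, v, q, z):
    l = lca(parent, depth, v, q)
    if z <= depth[v] - depth[l]:
        return move_up(parent, v, z)      # stay on the v-to-l segment
    z -= depth[v] - depth[l]              # steps remaining after reaching l
    # moving z steps down from l towards q equals moving
    # dist(l, q) - z steps up from q
    return move_up(parent, q, (depth[q] - depth[l]) - z)
```

On the tree with edges 0-1, 0-2, 1-3, 1-4, moving 2 steps from 3 towards 4 passes through the LCA 1 and lands on 4.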
Theorem 2. The time complexity of the query processing phase is O(k² + k log n).

Proof. The complexity of sorting the servers by distance is O(k log k). For each server, we compute the distance traveled before becoming inactive in O(k) by Lemma 3. We then move each server by that distance in time O(log n) by Lemma 4. Therefore, the complexity of processing one server is O(k + log n), and there are k servers.

4 Finding the First Marked Element

Consider a search space S = {1, . . . , m} and a subset M ⊆ S of marked elements. Define the indicator function g_M : S² → {0, 1} by

g_M(ℓ, r) = 1 if {ℓ, . . . , r} ∩ M ≠ ∅, and 0 otherwise.
In other words, g_M(ℓ, r) indicates whether there is a marked element from M in the interval [ℓ, r]. Now assume that we do not know M but have access to a two-sided probabilistic approximation g̃ of g_M. Formally, there is a probability p > 1/2 such that, for every ℓ, r ∈ S,

g̃(ℓ, r) = g_M(ℓ, r) with probability at least p, and 1 − g_M(ℓ, r) otherwise.

Intuitively, g̃ behaves like g_M with probability at least p. However, sometimes it makes mistakes and returns a completely wrong answer. Note that g̃ has two-sided error: it can return 0 even if the interval [ℓ, r] contains a marked element, but more importantly, it can also return 1 even though the interval does not contain any marked element. We further assume that a call to g̃(ℓ, r) takes time T(r − ℓ), where T is some nondecreasing function. Typically, we assume that T(n) = o(n), i.e. T is strictly better than a linear search.

We now consider the problem of finding the first marked element in S, with probability at least, say, 1/2. A trivial algorithm is to perform a linear search in O(m) until g̃ returns 1. If g̃ had no errors, we could perform a binary search in T(m). This does not work very well in the presence of errors because decisions made are irreversible, and errors accumulate quickly. Our observation is that if we modify the binary search to boost the success probability of certain calls to g̃, we can still solve the problem in time O(T(m)). The idea is inspired by [4].

For reasons that will become clear in the proof, we need to boost the success probability of some calls. We do so by repeating them several times and taking the majority: by this we mean that we take the most common answer, and return an error in the case of a tie.
Algorithm 6
Binary search for a function with two-sided errors.
  ℓ ← 1, r ← m + 1              ⊲ search interval
  d ← 1                          ⊲ depth of the search
  while ℓ < r do
    mid ← ⌊(ℓ + r)/2⌋
    v_l ← g̃(ℓ, mid)              ⊲ repeat 2d times and take the majority
    if v_l = 0 then
      ℓ ← mid + 1
    else
      r ← mid
    end if
    d ← d + 1
  end while
  return ℓ

Proposition 1. Assume that T satisfies T(n/k) = O(T(n)/k^α) for some α > 0 and every n and k. Then, with probability more than 0.9, Algorithm 6 returns the position of the first marked element, or m + 1 if none exists. The running time is O(T(m)).

Remark 1. The condition T(n/k) = O(T(n)/k^α) for some α > 0 and every n and k is clearly satisfied by any function of the form T(n) = n^α log^β n log^γ log n.

Proof.
The correctness of the algorithm, when there are no errors, is clear. We need to argue about the complexity and the error probability.

At the u-th iteration of the loop, the algorithm considers a segment [ℓ, r] of length at most m · 2^{−(u−1)}. The complexity of g̃(ℓ, mid) is at most O(T(m · 2^{−(u−1)})), but we repeat it 2u times, so the total complexity of the u-th iteration is O(2u · T(m · 2^{−(u−1)})). The number of iterations is at most log m. Hence the total complexity is

O( Σ_{u=1}^{log m} 2u · T(m · 2^{−(u−1)}) ) = O( T(m) · Σ_{u=1}^{∞} 2u · 2^{−α(u−1)} ) = O( T(m) · 2^{α+1} · 2^{−α}/(1 − 2^{−α})² ) = O(T(m)),

using Σ_{u≥1} u · x^u = x/(1 − x)² with x = 2^{−α} < 1.

Finally, we need to analyze the success probability of the algorithm: at the u-th iteration, the algorithm runs each test 2u times, and each test has a constant probability of failure ε ≤ 1 − p. Hence, for the algorithm to fail at iteration u, at least half of the 2u runs must fail: this happens with probability at most C(2u, u) ε^u ≤ (2ue/u)^u ε^u = (2eε)^u, where e = exp(1). Hence the probability that the algorithm fails is bounded by

Σ_{u=1}^{log m} (2eε)^u ≤ Σ_{u=1}^{∞} (2eε)^u = 2eε/(1 − 2eε).

By taking ε small enough (say 2eε < 1/11), which is always possible by repeating the calls to g̃ a constant number of times to boost the probability, we can ensure that the algorithm fails with probability less than 0.1.

A particularly useful application of the previous section is quantum search, particularly when g̃ is a Grover-like search. Indeed, Grover's search can decide in time O(√m) whether a marked element exists in an array of size m, with a constant probability of error.

More precisely, assume that we have a function f : {1, . . . , m} → {0, 1} and the task is to find the minimal x ∈ {1, . . . , m} such that f(x) = 1.
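Before specializing to Grover, Algorithm 6 can be simulated classically. In this sketch, a seeded random-number generator stands in for the bounded-error oracle g̃; the parameter names and success probability `p` are illustrative, not from the paper.

```python
import random

# Classical simulation of Algorithm 6: binary search over {0, ..., m-1}
# with a two-sided-error interval oracle. At depth d the query is repeated
# 2d times and the majority taken (a tie counts as answer 0).

def find_first(m, marked, p=1.0, rng=None):
    rng = rng or random.Random(0)

    def noisy_g(lo, hi):                 # "is there a marked x in [lo, hi]?"
        truth = any(lo <= x <= hi for x in marked)
        return truth if rng.random() < p else not truth

    l, r, d = 0, m, 1
    while l < r:
        mid = (l + r) // 2
        votes = sum(noisy_g(l, mid) for _ in range(2 * d))
        if votes <= d:                   # majority answered 0 (or a tie)
            l = mid + 1
        else:
            r = mid
        d += 1
    return l                             # m if no element is marked
```

With `p < 1` the growing number of repetitions keeps the overall failure probability constant, mirroring the proof of Proposition 1; with `p = 1.0` the routine is an ordinary binary search for the first marked position.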
If we let g̃(ℓ, r) = GROVER(ℓ, r, f), then g̃ has complexity T(n) = O(√n) and fails with constant probability. Hence we can apply Proposition 1 and obtain an algorithm that finds the first marked element with complexity T(m) = O(√m) and constant probability of error. In fact, note that we are not making use of Proposition 1 to its full strength, because g̃ really has one-sided error: it will never return 1 if there is no marked element. We will make use of this observation later. We note that, contrary to some existing results (e.g. [24, Theorem 10]), our algorithm always runs in time O(√m), and not merely in expected time O(√m).

Proposition 2.
There exists a quantum algorithm that finds the first marked element in an array of size m in time O(√m) and error probability less than 0.1. Note that O(√m) is a worst-case time bound, not an average one.

As observed above, we are not really using Proposition 1 to its full strength, because Grover's search has one-sided error. This suggests that there is room for improvement. Suppose that we now only have access to a two-sided probabilistic approximation f̃ of f. In other words, f̃ can make mistakes: it can return 1 for an unmarked element or 0 for a marked element with some small probability. Formally,

f̃(x) = f(x) with probability at least p, and 1 − f(x) otherwise,

for some probability p > 1/2. We cannot apply Grover's search directly in this case, but some variants have been developed that can handle bounded errors [18]. Using this result, we can build a two-sided error function g̃ with high probability of success and time complexity O(√m). Applying Proposition 1 again, we obtain the following improvement:

Proposition 3.
There exists a quantum algorithm FindFirst that finds the first marked element in an array of size m in time O(√m) and error probability less than 0.1, even when the oracle access to the array has a two-sided error. Note that O(√m) is a worst-case time bound, not an average one.

In practice, however, especially in quantum computing, f rarely has two-sided errors. For instance, Grover's search has a one-sided error only (it is known that Grover's search does not behave well in the presence of two-sided errors). If we assume that f̃ has one-sided error only, we can obtain a slightly better version of Proposition 3. Formally, we assume that

f̃(x) = f(x) with probability at least p, and f̃(x) = 0 whenever f(x) = 0.

For space reasons, we defer the proof to Appendix B.

Proposition 4 (Appendix B). There exists a quantum algorithm that finds the first marked element in an array of size m in expected time O(√x) and with error probability less than 0.1, where x is the position of the first marked element, or O(√m) if none is marked. Furthermore, it works even when the oracle access to the array has one-sided error. Additionally, it has a worst-case complexity of O(√m) in all cases.

5 Quantum Algorithm for the k-server Problem on Trees

We consider a special way of storing a rooted tree. Assume that for each vertex v we have access to a sequence a^v = (a^v_0, . . . , a^v_d) for d = dist(1, v). Here a^v is the path from the root (the vertex 1) to the vertex v: a^v_0 = 1 and a^v_d = v. Such a way of describing a tree is not uncommon, for example when the tree represents a file system. A file path “c:/Users/MyUser/Documents/newdoc.txt” is exactly such a path in the file system tree. Here “c”, “Users”, “MyUser”, “Documents” are ancestors of “newdoc.txt”, “c” is the root, and “newdoc.txt” is the node itself. Another example of a similar representation is the embedding of a binary tree in an array, where a node with index i has two children with indices 2i and 2i + 1, and the parent node has index ⌊i/2⌋. Here a path is encoded by the index i, which is really just a list of bits.

We assume that we have access to the following two oracles in O(1):
– given a vertex u, a (classical) oracle that returns the length of the string path a^u;
– given a vertex u and an index i, a quantum oracle that returns the i-th vertex a^u_i of the sequence a^u.

We can solve the k-server problem on trees using the same algorithm as in Section 3 with the following modifications:
– The function LCA(u, v) becomes
LCP(a^u, a^v), where LCP(a^u, a^v) is the longest common prefix of the two sequences a^u and a^v.
– MoveUp(v, z) is the vertex a^v_{d−z}, where a^v is the sequence for v and d = dist(1, v).
– We can compute dist(u, v) if u is an ancestor of v: it is d′ − d″, where d′ is the length of a^v and d″ is the length of a^u. Note that the invocations of dist in Algorithms 5 and 3 are always of this form. The only exception is dist(u, v) in Sort, in which the function uses LCA as a subroutine. The complexity of Sort is thus governed by the complexity of LCA, or LCP in our case.

By doing so, we do not need any preprocessing. We now replace the LCP(a^u, a^v) function by a quantum subroutine QLCP(a^u, a^v), presented in Section 5.1, and keep everything else as is. This subroutine runs in time O(√n log n) with O(1/n²) error probability. This allows us to obtain the following result.

Theorem 3.
There is a quantum algorithm for processing a query in time O(k√n log n) and with probability of error O(1/n). This algorithm does not require any preprocessing.

Proof. The complexity of
Move is the complexity of LCA, that is, QLCP in our implementation, plus the complexity of MoveUp. The former has complexity O(√n log n) by Lemma 5, and the latter O(1) by the oracle. Therefore, the total running time of Move is O(√n log n).

The complexity of Query is O(k) times the cost of LCA (QLCP in our implementation) followed by a call to Move. Additionally, the Sort function invokes LCA to compute distances. Hence, the complexity of Sort is O(k log k + k√n log n), and the total complexity is O(k√n log n).

We invoke QLCP at most 4k times, so the success probability is at least (1 − 1/n²)^{4k} ≥ (1 − 1/n²)^n = Ω(1 − 1/n). Therefore, the error probability is O(1/n). Note that we do not need any preprocessing.
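The classical content of the path-based primitives used above can be sketched as follows. Vertices are represented by their root-to-vertex paths (tuples), LCP replaces LCA, MoveUp is prefix truncation, and distances come from path lengths; `index_path` builds the path in the binary-tree-in-an-array embedding mentioned earlier (root 1, children 2i and 2i + 1). All names are illustrative.

```python
# Sketch of the string-path primitives of Section 5 (classical versions).

def index_path(i):
    """Root-to-node path in the array embedding: ancestors of i are its
    binary-prefix indices."""
    bits = bin(i)[2:]
    return tuple(int(bits[:j], 2) for j in range(1, len(bits) + 1))

def lcp_len(a, b):
    """Length of the longest common prefix of two paths (plays LCA's role)."""
    t = 0
    while t < min(len(a), len(b)) and a[t] == b[t]:
        t += 1
    return t

def move_up(a, z):
    """Path of the ancestor z steps above the vertex with path a."""
    return a[:len(a) - z]

def dist(a, b):
    """Distance between the vertices with paths a and b: the lengths of the
    two path suffixes below their longest common prefix."""
    t = lcp_len(a, b)
    return (len(a) - t) + (len(b) - t)
```

For example, node 13 (binary 1101) has the path 1 → 3 → 6 → 13 and node 5 (binary 101) has the path 1 → 2 → 5; their common prefix is just the root, so their distance is 3 + 2 = 5.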
Sort is O ( k log k · √ n · log n ), and the total complexity is O (cid:0) k √ n · log n (cid:1) .We invoke, QLCP at most 4 k times so the success probability is at least (cid:0) − n (cid:1) k ≥ (cid:0) − n (cid:1) n = Ω (cid:0) − n (cid:1) . Therefore, the error probability is O (cid:0) n (cid:1) . Note that we do not need any preprocessing. Let us consider the Longest Common Prefix (LCP) problem. Given two se-quences ( q , . . . , q d ) and ( b , . . . , b s ), the problem is to find t such that q = b , . . . , q t = b t and q i = b i for t + 1 ≤ i ≤ m , where m = min ( d, s ).Let us consider a function f : { , . . . , m } → { , } such that f ( i ) = 1 iff q i = b i . Assume that x is the minimal argument such that f ( x ) = 1, then t = x −
1. The LCP problem is thus equivalent to the problem of finding thefirst marked element from Section 4.2. Therefore, the algorithm for LCP is thefollowing.
Algorithm 7
QLCP(q, b). Quantum algorithm for the longest common prefix.
  m ← min(d, s)
  x ← FindFirst(m, f)    ⊲ repeat 3 log m times and take the majority vote
  if x = NULL then
    x ← m + 1
  end if
  return x − 1

Lemma 5.
Algorithm 7 finds the LCP of two sequences of length m in time O(√m log m) and with probability of error O(1/m²).

Proof. The correctness of the algorithm follows from the definition of f. The complexity of FindFirst is O(√m) by Proposition 2. The total running time is O(√m log m) because of the repetitions.

References
1. F. Ablayev, M. Ablayev, J. Z. Huang, K. Khadiev, N. Salikhova, and D. Wu. On quantum methods for machine learning problems part I: Quantum tools. Big Data Mining and Analytics, 3(1):41–55, 2019.
2. F. Ablayev, M. Ablayev, K. Khadiev, and A. Vasiliev. Classical and quantum computations with restricted memory. LNCS, 11011:129–155, 2018.
3. A. Ambainis. Understanding quantum algorithms via query complexity. arXiv:1712.06349, 2017.
4. Andris Ambainis, Kaspars Balodis, Jānis Iraids, Kamil Khadiev, Vladislavs Kļevickis, Krišjānis Prūsis, Yixin Shen, Juris Smotrovs, and Jevgēnijs Vihrovs. Quantum lower and upper bounds for 2D-grid and Dyck language. arXiv preprint arXiv:2007.03402, 2020.
5. Ganesh R. Baliga and Anil M. Shende. On space bounded server algorithms. In Proceedings of ICCI'93: 5th International Conference on Computing and Information, pages 77–81. IEEE, 1993.
6. L. Becchetti and E. Koutsoupias. Competitive analysis of aggregate max in windowed streaming. In ICALP, volume 5555 of LNCS, pages 156–170, 2009.
7. Michael A. Bender and Martin Farach-Colton. The LCA problem revisited. In Latin American Symposium on Theoretical Informatics, pages 88–94. Springer, 2000.
8. Michael A. Bender and Martin Farach-Colton. The level ancestor problem simplified. Theoretical Computer Science, 321(1):5–12, 2004.
9. Omer Berkman and Uzi Vishkin. Recursive star-tree parallel data structure. SIAM Journal on Computing, 22(2):221–242, 1993.
10. J. Boyar, K. S. Larsen, and A. Maiti. The frequent items problem in online streaming under various performance measures. International Journal of Foundations of Computer Science, 26(4):413–439, 2015.
11. Michel Boyer, Gilles Brassard, Peter Høyer, and Alain Tapp. Tight bounds on quantum searching. Fortschritte der Physik, 46(4-5):493–505, 1998.
12. Marek Chrobak and Lawrence L. Larmore. An optimal on-line algorithm for k servers on trees. SIAM Journal on Computing, 20(1):144–148, 1991.
13. T. H. Cormen, C. E. Leiserson, R. L. Rivest, and C. Stein. Introduction to Algorithms. McGraw-Hill, 2001.
14. C. Dürr and P. Høyer. A quantum algorithm for finding the minimum. arXiv:quant-ph/9607014, 1996.
15. Michele Flammini, Alfredo Navarra, and Gaia Nicosia. Efficient offline algorithms for the bicriteria k-server problem and online applications. Journal of Discrete Algorithms, 4(3):414–432, 2006.
16. Y. Giannakopoulos and E. Koutsoupias. Competitive analysis of maintaining frequent items of a stream. Theoretical Computer Science, 562:23–32, 2015.
17. Lov K. Grover. A fast quantum mechanical algorithm for database search. In Proceedings of the Twenty-Eighth Annual ACM Symposium on Theory of Computing, pages 212–219. ACM, 1996.
18. Peter Høyer, Michele Mosca, and Ronald de Wolf. Quantum search on bounded-error inputs. In Jos C. M. Baeten, Jan Karel Lenstra, Joachim Parrow, and Gerhard J. Woeginger, editors, Automata, Languages and Programming, pages 291–299, Berlin, Heidelberg, 2003. Springer.
19. Stephen Hughes. A new bound for space bounded server algorithms. In Proceedings of the 33rd Annual Southeast Regional Conference, pages 165–169, 1995.
20. A. R. Karlin, M. S. Manasse, L. Rudolph, and D. D. Sleator. Competitive snoopy caching. In 27th Annual Symposium on Foundations of Computer Science (FOCS), pages 244–254. IEEE, 1986.
21. K. Khadiev, A. Khadieva, and I. Mannapov. Quantum online algorithms with respect to space and advice complexity. Lobachevskii Journal of Mathematics, 39(9):1210–1220, 2018.
22. Kamil Khadiev and Maxim Yagafarov. A fast algorithm for online k-servers problem on trees. arXiv preprint arXiv:2006.00605, 2020.
23. Dennis Komm. An Introduction to Online Computation: Determinism, Randomization, Advice. Springer, 2016.
24. C. Y.-Y. Lin and H.-H. Lin. Upper bounds on quantum query complexity inspired by the Elitzur-Vaidman bomb tester. Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik, 2015.
25. M. A. Nielsen and I. L. Chuang. Quantum Computation and Quantum Information. Cambridge University Press, 2010.
26. Tomislav Rudec, Alfonzo Baumgartner, and Robert Manger. A fast work function algorithm for solving the k-server problem. Central European Journal of Operations Research, 21(1):187–205, 2013.
27. Daniel D. Sleator and Robert E. Tarjan. Amortized efficiency of list update and paging rules. Communications of the ACM, 28(2):202–208, 1985.
A Implementation of Binary Lifting
The subroutine BL Preprocessing() prepares an array up that stores the data for the MoveUp subroutine. For a vertex v and an integer 0 ≤ w ≤ ⌊log n⌋, the cell up[v][w] stores the vertex v′ on the path from v to the root at distance dist(v, v′) = 2^w. We construct the array by dynamic programming using the following formulas:

up[v][w] ← up[up[v][w − 1]][w − 1],    up[v][0] ← Parent(v).

Let us show that the formulas are correct. Let v′ = up[v][w] and v′′ = up[v][w − 1]. Then dist(v′, v) = dist(v′′, v) + dist(v′′, v′) = 2^{w−1} + 2^{w−1} = 2^w. The algorithm is presented in Algorithm 8.

Algorithm 8
BL Preprocessing() prepares the required data structures for the binary lifting technique.

for v ∈ V do
  up[v][0] ← Parent(v)
end for
for w ∈ {1, . . . , ⌊log n⌋} do
  for v ∈ V do
    v′′ ← up[v][w − 1]
    if v′′ = NULL then
      up[v][w] ← NULL
    end if
    if v′′ ≠ NULL then
      up[v][w] ← up[v′′][w − 1]
    end if
  end for
end for

The subroutine
MoveUp(v, z) returns a vertex v′ on the path from v to the root at distance dist(v′, v) = z. First, we find the maximal w′ such that 2^{w′} ≤ z. Then, we move to the vertex up[v][w′] and reduce z by 2^{w′}. We repeat this action until z = 0. The total number of steps is at most O(w′) = O(log n).

Algorithm 9
MoveUp(v, z) returns the vertex v′ at distance z on the path to the root.

w ← 0
power ← 1
while power ≤ z do
  w ← w + 1
  power ← power · 2
end while
while z ≠ 0 do
  v′′ ← v
  while power > z do
    w ← w − 1
    power ← power / 2
  end while
  v ← up[v][w]
  z ← z − dist(v′′, v)
end while
return v

B A quantum algorithm for finding the first marked vertex
In this section, let FindFirst denote the algorithm from Proposition 3 and GroverTwoSided denote the variant of Grover's algorithm of [18] that works with two-sided error oracles. Recall that we assume that f̃ has a one-sided error, i.e., it may return 0 instead of 1 with small probability but not the other way around. Consider the following algorithm:

Algorithm 10
FindFirstAdvanced(m, f). Find the first marked element in an array.

r ← 1   ⊲ size of the search space
while r ≤ m and GroverTwoSided(1, r, f̃) = 0 do
  r ← min(m, 2 · r)
end while
return FindFirst(r, f̃)

We now show that this algorithm satisfies the requirements of Proposition 4. To simplify the proof, we assume that the array always contains a marked element; this is without loss of generality because we can add an extra object at the end that is always marked. Furthermore, we assume that m is a power of 2; this is again without loss of generality because we can add dummy objects at the end at the cost of at most doubling the array size. Recall that f̃ has a one-sided error, and the same applies to GroverTwoSided in this case. Therefore the test
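For intuition, the control flow of FindFirstAdvanced can be sketched classically. Here exists_marked and first_marked are hypothetical exact stand-ins for the quantum subroutines GroverTwoSided and FindFirst (simple scans over a noiseless oracle), so the sketch illustrates only the doubling-search structure, not the quantum running time:

```python
# Classical sketch of the control flow of Algorithm 10 (FindFirstAdvanced).
# exists_marked and first_marked are hypothetical exact stand-ins for the
# quantum subroutines GroverTwoSided and FindFirst; the real algorithm gets
# its speed-up by replacing these scans with quantum search.

def find_first_advanced(m, f):
    """Return the 1-based index of the first i in [1, m] with f(i) == 1.

    Assumes a marked element exists (the proof guarantees this by
    appending an always-marked sentinel object).
    """

    def exists_marked(lo, hi):
        # Stand-in for GroverTwoSided(lo, hi, f): is some i in [lo, hi] marked?
        return any(f(i) for i in range(lo, hi + 1))

    def first_marked(r):
        # Stand-in for FindFirst(r, f): binary search for the first marked
        # position in the prefix [1, r].
        lo, hi = 1, r
        while lo < hi:
            mid = (lo + hi) // 2
            if exists_marked(lo, mid):
                hi = mid
            else:
                lo = mid + 1
        return lo

    # Doubling phase: grow the prefix [1, r] until it contains a marked element.
    r = 1
    while r < m and not exists_marked(1, r):
        r = min(m, 2 * r)
    return first_marked(r)

marks = [0, 0, 0, 0, 0, 1, 0, 1]
print(find_first_advanced(len(marks), lambda i: marks[i - 1]))  # → 6
```

In the quantum algorithm each existence test over a prefix of size r costs O(√r) rather than O(r), which is what yields the O(√x) expected complexity when the first marked element is at position x and the oracle errors are one-sided.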