Locality in Online Algorithms
Maciej Pacut, Mahmoud Parham, Joel Rybicki, Stefan Schmid, Jukka Suomela, Aleksandr Tereshchenko
Maciej Pacut, Faculty of Computer Science, University of Vienna, Austria
Mahmoud Parham, Faculty of Computer Science, University of Vienna, Austria
Joel Rybicki, IST Austria, Austria
Stefan Schmid, Faculty of Computer Science, University of Vienna, Austria
Jukka Suomela, Aalto University, Finland
Aleksandr Tereshchenko, Aalto University, Finland
Abstract
Online algorithms make decisions based on past inputs. In general, the decision may depend on the entire history of inputs. If many computers run the same online algorithm with the same input stream but are started at different times, they do not necessarily make consistent decisions.

In this work we introduce time-local online algorithms. These are online algorithms where the output at a given time only depends on the T = O(1) latest inputs. The use of (deterministic) time-local algorithms in a distributed setting automatically leads to globally consistent decisions.

Our key observation is that time-local online algorithms (in which the output at a given time only depends on local inputs in the temporal dimension) are closely connected to local distributed graph algorithms (in which the output of a given node only depends on local inputs in the spatial dimension). This makes it possible to interpret prior work on distributed graph algorithms from the perspective of online algorithms.

We describe an algorithm synthesis method that one can use to design optimal time-local online algorithms for small values of T. We demonstrate the power of the technique in the context of a variant of the online file migration problem, and show that e.g. for two nodes and unit migration costs there exists a 3-competitive time-local algorithm with horizon T = 4, while no deterministic online algorithm (in the classic sense) can do better. We also derive upper and lower bounds for a more general version of the problem; we show that there is a 6-competitive deterministic time-local algorithm and a lower bound of 2.… on the competitive ratio for α ≥ ….

2012 ACM Subject Classification Theory of computation → Online algorithms; Theory of computation → Distributed computing models
Keywords and phrases
Online algorithms, distributed graph algorithms, locality
Acknowledgements
This research has received funding from the European Union's Horizon 2020 research and innovation programme, under the European Research Council (ERC) grant agreement No. 864228 and the Marie Skłodowska-Curie grant agreement No. 840605.
Online algorithms [13] make decisions based on past inputs, with the goal of being competitive against an algorithm that sees also future inputs. In this work, we introduce time-local online algorithms; these are online algorithms in which the output at any given time is a function of only the T latest inputs (instead of the full history of past inputs).

Figure 1: Local decision-making in time vs. space dimensions. (Schematic comparison of offline/centralized algorithms, online algorithms, time-local online algorithms, and local distributed algorithms; time: day i = 1, 2, 3, …; space: node i = 1, 2, 3, ….)

Our main observation is that time-local online algorithms are closely connected to local distributed graph algorithms: distributed algorithms make decisions based on the local information in the spatial dimension, while time-local online algorithms make decisions based on the local information in the temporal dimension; see Figure 1. We formalize this connection, and show how we can directly use the tools developed to study distributed approximability of graph optimization problems to prove upper and lower bounds on the competitive ratio achieved with time-local online algorithms. Moreover, we show how to use computational techniques to synthesize optimal time-local algorithms.
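In code, the temporal side of this analogy is simply a fixed map applied to a sliding window of the latest inputs. The following sketch (our own illustration; the function names and the toy rule are invented, not from the paper) runs such an algorithm over an input stream:

```python
def run_time_local(stream, rule, T):
    """Run an unclocked time-local algorithm: the i-th output depends
    only on the T latest inputs x_{i-T}, ..., x_{i-1}. None marks the
    placeholder value used before the stream starts."""
    padded = [None] * T + list(stream)
    return [rule(tuple(padded[i:i + T])) for i in range(len(stream))]

def follow_last(window):
    """A toy rule: output the most recent input seen so far, or 0."""
    for x in reversed(window):
        if x is not None:
            return x
    return 0

print(run_time_local([0, 1, 1, 0], follow_last, T=2))  # [0, 0, 1, 1]
```

Because the rule never inspects the current time step, any two machines that observe the same stream produce identical outputs from step T onwards, regardless of when they were started.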
Time-local online algorithms have some attractive properties. Let us give two examples.
Fault-Tolerant Distributed Decision.
Consider a setting in which many geographically distributed computers need to make consistent decisions. All computers can observe the same input stream, and each day each of them has to announce its own decision.

If all computers are started at the same time, we can take any deterministic online algorithm and let each computer run its own copy of the algorithm. However, this approach does not tolerate failures: if a computer crashes and is restarted, the local state of the algorithm is lost, and as the decisions may in general depend on the full history of inputs, it will no longer make decisions consistent with the others.

One solution is to run a consensus protocol [38], with which the computers can ensure that they all agree on the same decision (or on the same local state). However, this introduces overhead, and is only applicable if we can assume that not too many computers are faulty; for example, with Byzantine failures consensus is possible only if fewer than 1/3 of the computers are faulty. Deterministic time-local algorithms, in contrast, are self-stabilizing [24, 25]: in T steps since the latest failure, all computers will deterministically make consistent decisions, without any communication.

Random Access to the Decision History.
A second benefit of time-local online algorithms is that they make it possible to efficiently access any past decision, with zero additional storage beyond the storage of the input stream. To look up a decision at any given time i, it is enough to look up the inputs at the T previous time points and apply the deterministic time-local algorithm. With classic online algorithms one would have to either store the decision, store the local state, or re-run the entire algorithm up to point i.

Online Problems as Request-Answer Games.
Online problems are often formalized as request-answer games [13, Ch. 7] that consist of an input set X (requests), an output set Y (answers), and an infinite sequence (f_n)_{n≥1} of cost functions f_n : X^n × Y^n → R ∪ {∞}, one for each n ∈ N. The optimal offline cost of an input sequence x ∈ X^n is

  opt(x) = min{ f_n(x, y) : y ∈ Y^n }.

Classic Online Algorithms.
An online algorithm A in the classic sense, i.e., an algorithm that has access to all past inputs, can be defined as a sequence (A_i)_{i≥1} of functions A_i : X^{i−1} → Y. The output y = A(x) of the algorithm on input x ∈ X^n is given by

  y_i = A_i(x_1, . . . , x_{i−1}) for each 1 ≤ i ≤ n.

The quality of an online algorithm is measured by comparing the cost of its output against the optimal offline cost. An algorithm is said to be c-competitive (have a competitive ratio c) if on any input sequence x ∈ X^n its output y = A(x) satisfies f_n(x, y) ≤ c · opt(x) + d for a fixed constant d. We say that an algorithm is strictly c-competitive if additionally d = 0.

Time-Local Algorithms.
We now start with a simplified definition of (deterministic) time-local algorithms; the general definitions are given later. Let T ∈ N be a constant. A time-local algorithm that has access to the T latest inputs is given by a sequence of maps (A_i)_{i≥1} of the form A_i : (X ∪ {⊥})^T → Y, where ⊥ ∉ X. The output of the algorithm is given by

  y_i = A_i(x_{i−T}, . . . , x_{i−1}) for each 1 ≤ i ≤ n,

where we let x_i = ⊥ be placeholder values for i < 1. We say that the algorithm is unclocked if all maps A_i are equal, and otherwise it is clocked. That is, in the latter case the i-th decision y_i made by the algorithm may depend on the current time step i. On the other hand, unclocked algorithms are given by a single map A : X^T → Y, and the output does not depend on the number of steps taken so far on long inputs.

The Online File Migration Problem.
As our running example, we consider a simple variant of the online file migration problem: we are given a network, modeled as an undirected graph with two nodes, and an indivisible shared resource, a file, initially stored at one of the nodes. Requests to access the file arrive from nodes of the network over time, and the serving cost of a request is 0 if the file is collocated with the request, and 1 otherwise. After serving a request, we may decide to migrate the file to a different node of the network, paying α units of migration cost for some parameter α ≥ 1. For α ≥ 1, there exists a 3-competitive deterministic algorithm for the problem [14], and no deterministic algorithm can do better [11]. However, randomized algorithms can beat this bound: for α ≥ 1, there exists a (1 + ϕ)-competitive randomized algorithm against the oblivious adversary [44], where ϕ ≈ 1.62 is the golden ratio.
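To make the cost model concrete, the following sketch (our own illustration; the function names are ours, and we charge migration when the location changes and service at the new location, matching the local cost function used later in Example 3.1) evaluates a schedule of file locations and computes the optimal offline cost by dynamic programming:

```python
def migration_cost(requests, locations, alpha, start=0):
    """Total cost of a schedule `locations` (file position after each
    request): alpha per migration, plus 1 per remotely served request."""
    cost, here = 0, start
    for x, y in zip(requests, locations):
        cost += alpha * (y != here)   # migrate if the file moves
        cost += 1 if x != y else 0    # pay 1 if served remotely
        here = y
    return cost

def opt_offline(requests, alpha, start=0):
    """Optimal offline cost, by DP over the file's current location."""
    INF = float("inf")
    best = [0 if v == start else INF for v in (0, 1)]
    for x in requests:
        best = [min(best[v] + alpha * (v != y) for v in (0, 1))
                + (1 if x != y else 0)
                for y in (0, 1)]
    return min(best)

print(opt_offline([1, 1, 1], alpha=1))  # 1: migrate once, then serve locally
```

Any algorithm for the problem, time-local or not, can now be benchmarked by comparing the cost of its schedule against `opt_offline`, which is exactly the ratio that competitive analysis bounds.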
In this work, we initiate the study of temporal locality in online algorithms. We give a series of results and techniques that illustrate different aspects of time-local algorithms. All technical details and further results are provided in the subsequent sections.
Introducing Time-Local Online Algorithms (Section 3).
We formalize the new notion of temporal locality in online algorithms. We investigate the power of time-local online algorithms by focusing on two basic models, unclocked and clocked. We formally introduce the deterministic variants of these models in Section 3, and randomized variants of the models in Section 6. We show that clocked algorithms are often strictly more powerful than unclocked algorithms, but more surprisingly, unclocked time-local algorithms can sometimes be as competitive as classic (non-local) online algorithms.
Transferring Results from Distributed Computing (Section 4).
We identify connections between different models studied in distributed graph algorithms and different variants of time-local online algorithms. We exploit this connection, and show how to lift many results from the theory of distributed computing to establish impossibility results for time-local algorithms essentially for free. For example, it turns out that if an online problem has a component that is equivalent to a symmetry-breaking task or to a nontrivial distributed optimization problem, such as in online load balancing (Example 3.2), then deterministic unclocked time-local algorithms cannot perform well, but randomization helps.

On the other hand, this suggests that for problems where symmetry breaking is not a critical component, such as the online file migration problem (Example 3.1), deterministic unclocked time-local algorithms can perform well. This heuristic argument is corroborated by our case study on online file migration: the problem admits competitive time-local algorithms.
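To illustrate the symmetry-breaking obstacle with a toy example of our own (not from the paper): suppose the task is to properly 2-color the time axis, i.e., to output y_i ≠ y_{i+1}. A clocked algorithm can read the current time step and output its parity, whereas a deterministic unclocked algorithm applies one fixed map to its window, so on a constant input stream all sufficiently late windows look identical and the output is eventually constant:

```python
def clocked_rule(i, window):
    """Clocked: may use the current time step i to break symmetry."""
    return i % 2

def unclocked_rule(window):
    """Unclocked: any fixed deterministic map of the window only."""
    return sum(window) % 2

stream = (0, 0, 0, 0, 0, 0)   # a constant input stream
T = 2
windows = [stream[max(0, i - T):i] for i in range(1, len(stream) + 1)]

coloring = [clocked_rule(i, w) for i, w in enumerate(windows, start=1)]
assert all(a != b for a, b in zip(coloring, coloring[1:]))  # proper 2-coloring

outputs = [unclocked_rule(w) for w in windows]
assert len(set(outputs[T:])) == 1  # identical windows force identical outputs
```

Randomization sidesteps the argument: a random map produces different outputs at different steps with positive probability, which matches the statement above that randomization helps for such tasks.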
The Power of Clocked Algorithms (Section 5).
Second, we establish that deterministic clocked time-local algorithms are powerful. For the family of bounded monotone minimization games, we can turn any classic deterministic online algorithm into a deterministic clocked time-local algorithm with only a small increase in the competitive ratio.

▶ Theorem 1.1. Let F be a bounded monotone minimization game. If there exists an online algorithm A with competitive ratio c for F, then for any constant ε > 0 there exists some constant T and a clocked T-time-local algorithm B with competitive ratio (1 + ε)c for F.

We also show that there are problems outside this family that do not admit competitive clocked algorithms: for the classic online caching problem [42], no deterministic clocked time-local algorithm can achieve a finite competitive ratio, but competitive classic online algorithms exist.
Randomization in Time-Local Algorithms (Section 6).
In the classic online setting, randomized algorithms have two equivalent characterizations: at the start, randomly sample a deterministic algorithm from the set of all algorithms, or in each step, make a random decision based on e.g. a sequence of random coin flips. The former corresponds to mixed strategies, where we sample all random bits used by the algorithm before seeing any of the input, whereas the latter corresponds to behavioral strategies, where the algorithm generates random bits along the way as it needs them. We observe that these two types of randomized algorithms differ for time-local algorithms; we illustrate the differences between them by showing both types of algorithms, and give a lower bound that holds for both types (see below).

Automated Synthesis of Time-Local Algorithms (Section 7).
Again by leveraging the connection to local graph algorithms, we describe and implement a novel algorithm synthesis method that allows us to automate the design of optimal unclocked time-local algorithms (see Section 7). Specifically, the synthesis task can be formulated as a certain weighted optimization problem in dual de Bruijn graphs.

We formalize and implement a technique for the automated design of time-local algorithms for local optimization problems (defined in this work). This technique allows us to automatically obtain tight upper and lower bounds for unclocked time-local algorithms. First, we prove the following result.

▶ Theorem 1.2 (informal). Let Π be a local optimization problem and A an unclocked time-local algorithm with horizon T. Then there is a finite, dual-weighted graph G = G(Π, A) such that the competitive ratio of A is determined by the cycle with the heaviest weight ratio in G.

Recall that each unclocked time-local algorithm with horizon T is given by some map A : X^T → Y. For local optimization problems with finite input set X and output set Y, we can iterate through all of the |Y|^{|X|^T} maps to find an optimal algorithm for any given T. We illustrate the usefulness of this technique by synthesizing several optimal deterministic time-local algorithms for the online file migration problem. For example, we show that for unit costs (α = 1) there exists a 3-competitive time-local algorithm with T = 4, which is the best competitive ratio achieved by any deterministic online algorithm [11]. Moreover, we also synthesize randomized time-local algorithms with a strictly better (expected) competitive ratio than the optimal deterministic algorithms under the oblivious adversary. See Figure 2 for a summary.

Online File Migration: A Case Study (Section 8).
Finally, we investigate analytically how the length T of the visible input horizon influences the quality of solutions given by time-local online algorithms. For the online file migration problem, we show the following:

▶ Theorem 1.3 (informal; see Theorem 8.1). For any α ≥ T, there is no randomized clocked time-local algorithm that achieves a competitive ratio better than α/T.

▶ Theorem 1.4 (informal; see Theorem 8.2). For any α ≥ 1, there is a randomized clocked time-local algorithm that is (1 + ϕ)-competitive, where ϕ ≈ 1.62 is the golden ratio, for some T = Θ(α). In general, it achieves a competitive ratio of max{ α/T, (T + 1)/(2α) }.

▶ Theorem 1.5 (informal; see Corollary 8.13). For any α ≥ 1, there is a deterministic unclocked time-local algorithm that is 6-competitive for T ≥ α. Moreover, for 1 ≤ T < α the algorithm is (4 + 12α/T)-competitive.

Figure 2
Upper and lower bounds for the online file migration problem. The visualization includes the upper bounds from Table 2 for small values of T, as well as the upper bounds from Corollaries 8.3 and 8.13, and the lower bound from Corollary 8.16.

The above deterministic algorithm uses a simple sliding window rule. We say that the last T inputs contain a b-window for b ∈ {0, 1} if there is a subsequence of requests in which the number of b-requests is at least twice the number of (1 − b)-requests. The algorithm is: output b ∈ {0, 1} if the most recent window in the last T inputs is a b-window; otherwise, output 0. However, despite the deceptive simplicity of the algorithm, its analysis is surprisingly challenging and illuminates aspects involved in the competitive analysis of time-local algorithms. The technical results in the case study illustrate how the style of arguments and techniques used in the analysis of time-local online algorithms differs from the classic online algorithms setting. The analytical and synthesized algorithms are summarized and compared against classic online algorithms in Figure 2.
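To give a flavor of the brute-force side of the synthesis approach, here is a toy sketch (our own code, not the paper's implementation): for the two-node file migration problem with α = 1 and horizon T = 2 it enumerates all unclocked maps and measures the worst cost ratio over all request sequences of one fixed small length. This only estimates the strict competitive ratio on bounded inputs and ignores additive constants; the actual method described in Section 7 instead analyzes cycle weight ratios in a dual de Bruijn graph.

```python
from itertools import product

X, Y, ALPHA, T, N = (0, 1), (0, 1), 1, 2, 6  # toy parameters (our choice)

def step_cost(prev_y, y, x):
    # migration cost if the file moves, plus 1 if the request is remote
    return ALPHA * (y != prev_y) + (1 if x != y else 0)

def alg_cost(rule, xs):
    """Cost of the unclocked time-local algorithm given by `rule`, a map
    from windows of the T latest inputs (None = padding) to locations."""
    padded = [None] * T + list(xs)
    y, total = 0, 0  # the file starts at node 0
    for i, x in enumerate(xs):
        y_next = rule[tuple(padded[i:i + T])]
        total += step_cost(y, y_next, x)
        y = y_next
    return total

def opt_cost(xs):
    """Optimal offline cost via DP over the current file location."""
    best = {0: 0, 1: float("inf")}
    for x in xs:
        best = {y: min(best[v] + step_cost(v, y, x) for v in Y) for y in Y}
    return min(best.values())

windows = list(product(list(X) + [None], repeat=T))
best_ratio = float("inf")
for choice in product(Y, repeat=len(windows)):
    rule = dict(zip(windows, choice))
    ratio = max(alg_cost(rule, xs) / max(opt_cost(xs), 1)
                for xs in product(X, repeat=N))
    best_ratio = min(best_ratio, ratio)
print(best_ratio)
```

For T = 4 the search space already has 2^81 maps, which is why the paper's structured de Bruijn graph formulation, rather than naive enumeration over sequences, is needed to certify exact competitive ratios.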
Restricted Models of Online Algorithms.
To the best of our knowledge, temporal locality of online algorithms has not been systematically studied. However, other restricted forms of online algorithms have received some attention. For example, Chrobak [19] introduced the notion of memoryless online algorithms: the answer to the current request can only depend on the current configuration, instead of being an arbitrary function of the entire past history as in general online algorithms. In particular, memoryless online algorithms can be synthesized using a fixed-point approach. However, memoryless online algorithms differ from time-local algorithms, as memoryless algorithms have access to the previous configuration of the algorithm instead of the last T inputs.

Ben-David et al. [8] investigated local online problems within the request-answer game framework of online algorithms. However, their notion of locality applies to the cost functions defining the online problem instead of the algorithms solving them: the cost of a solution cannot depend too much on past inputs. In this work, to avoid confusion, we call these games bounded monotone minimization games, and show that if a problem in this class admits a competitive online algorithm, then any such algorithm can be converted into a competitive clocked time-local algorithm. Moreover, we introduce an alternative characterization of local games, local optimization games, which are different from bounded monotone minimization games, but have a natural interpretation as distributed optimization problems on paths. We give a general synthesis method to automate the design of optimal time-local algorithms for local optimization problems.

Synthesis of Online Algorithms.
As mentioned, already the early work on online algorithms considered synthesis in the context of memoryless algorithms [19]. Computer-aided design techniques have also been used to design optimal online algorithms for specific problems. Coppersmith et al. [22] studied the design and analysis of randomized online algorithms for k-server problems, metrical task systems, and a class of graph games; they show that algorithm synthesis is equivalent to the synthesis of random walks on graphs. For a variant of the online knapsack problem [31], Horiyama, Iwama and Kawahara [29] obtained an optimal algorithm by using a problem-specific finite automaton and solving a set of inequalities for each of its states. More recently, the synthesis of optimal algorithms for preemptive variants of online scheduling [7] was reduced to a two-player graph game [18, 37].

Synthesis of Local Algorithms.
Synthesis of distributed graph algorithms has a long history, mostly focusing on so-called locally checkable labeling (LCL) problems in the LOCAL model of distributed computing [4, 5, 15, 17, 28, 40]. In their foundational work, Naor and Stockmeyer [36] showed that it is undecidable to determine whether an LCL problem admits a local algorithm in general, but it is decidable for unlabeled directed paths and cycles. Balliu et al. [4] showed that determining the distributed round complexity of LCL problems on paths and cycles with inputs is decidable, but PSPACE-hard. Moreover, when restricted to LCL problems with binary inputs, there is a simple synthesis procedure [5]. Recently, Chang et al. [17] showed that synthesis on unlabeled paths, cycles and rooted trees can be done efficiently.

Beyond decidability results, synthesis has also been applied in practice to obtain optimal local algorithms. Rybicki and Suomela [40] showed how to synthesize optimal distributed coloring algorithms on directed paths and cycles. Brandt et al. [15] gave a technique for synthesizing efficient distributed algorithms in 2-dimensional toroidal grids, but showed that in general determining the complexity of an LCL problem is undecidable in grids. Similarly to our work, Hirvonen et al. [28] considered the synthesis of optimization problems. They gave a method for synthesizing randomized algorithms for the max cut problem in triangle-free regular graphs.

Our work identifies the connection between temporal locality in online algorithms and spatial locality in distributed algorithms. As we show in this work, time-local online algorithms can be seen as local graph algorithms on directed paths. However, formally the computational power of the LOCAL model resides between the unclocked and clocked time-local models we study in this work. Thus, decidability results and synthesis techniques do not directly carry over to the time-local online algorithms setting.
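The path correspondence just described can be made concrete with a small sketch (our own illustration; the local-maximum rule and all names are invented for the example): the very same map can be read either as a T-round local algorithm on a path, where index i is a node, or as an unclocked [T, T]-local online algorithm with T steps of lookahead, where index i is a time step:

```python
def run_on_path(inputs, local_map, T):
    """A T-round local algorithm on a path: node i outputs a function of
    the inputs within distance T of it (None beyond the path's ends)."""
    padded = [None] * T + list(inputs) + [None] * T
    return [local_map(tuple(padded[i:i + 2 * T + 1]))
            for i in range(len(inputs))]

def run_as_time_local(inputs, local_map, T):
    """The same map read along the time axis: an unclocked [T, T]-local
    online algorithm, i.e., one with T steps of lookahead."""
    return run_on_path(inputs, local_map, T)  # literally the same computation

def is_peak(window):
    """Toy local rule: is the center a maximum of its radius-T neighborhood?"""
    mid = window[len(window) // 2]
    return int(all(v is None or v <= mid for v in window))

print(run_on_path([1, 3, 2, 5, 4], is_peak, T=1))  # [0, 1, 0, 1, 0]
```

Lower bounds transfer in the same direction: any input pattern that fools a radius-T algorithm on a path fools the corresponding time-local algorithm on the matching input stream.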
We now introduce the family of local online problems that we study in this work. The definition is somewhat technical, but the basic idea is simple: at each time step i, the cost (or utility) of our decision y_i is defined to be some function of the current input x_i and up to r = O(1) previous inputs and outputs.

This formalism has several attractive features. First, it is flexible enough to e.g. define online problems in which we reward correct decisions (e.g. whenever we predict correctly y_i = x_i, we get some profit), we penalize costly moves (e.g. whenever we change our mind and switch to a new output y_i ≠ y_{i−1}, we get some penalty), and we prevent invalid choices (e.g. by defining infinite penalties for decisions that are not compatible with the previous inputs and/or previous decisions). Second, this formalism can capture problems that are relevant in distributed graph algorithms (e.g. x_i represents the weight of node i along a path, y_i indicates which nodes are selected, and we pay x_i whenever we select a node). Finally, this family of problems is amenable to automated algorithm synthesis, as we will later see.

We will now present the formal definition, and then give several examples of different kinds of problems, both from the areas of online and distributed graph algorithms.

Formalism. A local optimization problem is a tuple Π = (X, Y, r, v, aggr, obj), where
  X is the set of inputs,
  Y is the set of outputs,
  r ∈ N is the horizon,
  v : X^{r+1} × Y^{r+1} → R ∪ {−∞, +∞} is the local cost function,
  aggr ∈ {sum, min, max} is the aggregation function,
  obj ∈ {min, max} is the objective.
The input for the problem Π is a sequence x = (x_1, x_2, . . . , x_n) ∈ X^n and a solution is a sequence y = (y_1, y_2, . . . , y_n) ∈ Y^n. For convenience, we will use placeholder values x_i = y_i = ⊥ for i < 1 and i > n.
With each index, we associate a value u_i(x, y) defined as

  u_i(x, y) = v(x_{i−r}, . . . , x_i, y_{i−r}, . . . , y_i).

Finally, we apply the aggregation function aggr to the values u_i to determine the value u(x, y) of the solution. That is, if the aggregation function is sum, the cost function is given by

  f_n(x, y) = Σ_{i=1}^{n} u_i(x, y).

For example, if the objective is min, the task in Π is to find a solution y that minimizes u(x, y) for a given input x, and so on. Note that X, Y and (f_n)_{n≥1} define a request-answer game.

Shorthand Notation.
In general, the local cost function v is a function with 2(r + 1) arguments. However, it is often more convenient to represent v as a function that takes one matrix with two rows and r + 1 columns (inputs on the top row, outputs on the bottom row), and to use "·" to denote irrelevant parameters; e.g.

  v(· · c ; d e ·) = γ

is equivalent to saying that v(a, b, c, d, e, f) = γ for all a, b ∈ X and f ∈ Y.

Examples of Online Problems.
Let us first see how to encode typical online problems in our formalism. We start with a highly simplified version of the online file migration problem, a.k.a. online page migration [11]. This will be the running example that we use throughout this work.

▶ Example 3.1 (online file migration). We are given a network consisting of two nodes, and an indivisible shared resource, a file, initially stored at one of the nodes. Requests to access the file arrive from nodes of the network over time, and the serving cost of a request is the distance from the requesting node to the file, i.e., 0 if the file is collocated with the request, and 1 otherwise. After serving a request, we may decide to migrate the file to a different node of the network, paying α units of migration cost for some parameter α ≥ 1. Here input x_i ∈ X = {0, 1} represents access to the file at time i from the node x_i of the network, and output y_i ∈ Y = {0, 1} represents the location of the file at time i. We choose the horizon r = 1, aggregation function "sum", and objective "min", and define the local cost function as

  v(· 0 ; 0 0) = 0,   v(· 0 ; 1 1) = 1,   v(· 0 ; 1 0) = α,   v(· 0 ; 0 1) = 1 + α,
  v(· 1 ; 1 1) = 0,   v(· 1 ; 0 0) = 1,   v(· 1 ; 0 1) = α,   v(· 1 ; 1 0) = 1 + α.

Recall that α ≥ 1.

▶ Example 3.2 (online load balancing). Each day i a job arrives; the job has a duration x_i ∈ X = {1, 2, . . . , ℓ}. We need to choose a machine y_i ∈ Y that will process the job. If, e.g., x_i = 3, then machine y_i will process job i during days i, i + 1, and i + 2. The load of a machine is the number of concurrent jobs that it is processing at a given day, and our task is to minimize the maximum load of any machine at any point of time. In this case we can choose the horizon r = ℓ − 1, aggregation function "max", and objective "min", and define the local cost function as follows:
The same online problem can often be formalized and encoded in several different ways. In general, the output of an algorithm in each step can naturally be defined either as
  the new configuration of the system (e.g. "machine 1 is executing two jobs with remaining durations a and b, and machine 2 is executing one job with remaining duration c"), or
  the action chosen (a transition from the previous configuration to the next one, e.g. "enqueue the incoming job on machine 2").
In the classic online setting, where algorithms have access to the entire past input sequence, this distinction tends to matter little (actions can be computed from the full sequence of configurations, and vice versa); one simply uses the most convenient formulation. However, in restricted models of computation, such as time-local online algorithms, the situation is different. The choice of encoding (e.g. whether the algorithm outputs the actions it makes or the current configuration) may dramatically impact the solvability of a problem. We discuss this question in more detail in Section 8.4. Unless otherwise specified, we assume that algorithms for online problems output the current configuration in each step.

Examples of Graph Problems on Paths.
In this paper, we uncover and exploit connections between time-local online algorithms and distributed graph algorithms on paths. We have seen that the formalism that we use is expressive enough to capture typical online problems; let us now see how to express some classic graph optimization problems that have been studied in the theory of distributed computing.

We interpret each index i as a node in a path, where nodes i and i + 1 are connected by an edge. Input x_i is the weight of node i, and output y_i encodes a subset of nodes S ⊆ {1, 2, . . . , n}, with the interpretation that i ∈ S whenever y_i = 1. Hence X = R_{≥0} and Y = {0, 1}.

▶ Example 3.3 (maximum-weight independent set). We can capture a problem equivalent to the classic maximum-weight independent set as follows: we choose the horizon r = 1, aggregation function "sum", and objective "max", and define the local cost function as follows:

  v(· · ; · 0) = 0,   v(· α ; 0 1) = α,   v(· · ; 1 1) = −∞.

That is, a node of weight α is worth α units if we select it. The last case ensures that the solution represents a valid independent set (no two nodes are selected next to each other).

▶ Example 3.4 (minimum-weight dominating set). To represent minimum-weight dominating sets, we choose r = 2, aggr = sum, and obj = min. We define the local cost function as follows:

  v(· α · ; · 1 ·) = α,   v(· · · ; 1 0 ·) = 0,   v(· · · ; · 0 1) = 0,   v(· · · ; 0 0 0) = +∞.

Here if we select a node of cost α, we pay α units. Nodes that are not selected but that are correctly dominated by a neighbor are free. We ensure correct domination by assigning an infinite cost to unhappy nodes. Technically, when we select a node i, we will pay for it at time i + 1, not at time i, but this is fine, as we will in any case sum over all nodes (and ignore constantly many nodes near the boundaries).

So far we have introduced the formalism we use to define computational problems. Let us now define the model of computing. For convenience, we will extend the definition of inputs to include a placeholder value ⊥ and let x_i = ⊥ for i < 1 and i > n. The key models of computing that we study are all captured by the following definition:

▶ Definition 3.5 (local algorithm). An [a, b]-local algorithm is a sequence (A_i)_{i≥1} of functions of the form A_i : X^{a+1+b} → Y. The output y of an algorithm A for input x ∈ X^n, in notation y = A(x), is defined as follows:

  y_i = A_i(x_{i−a}, . . . , x_{i+b}) for each i = 1, . . . , n.

If A_i = A_j for all i, j ∈ N, then the algorithm A is unclocked. Otherwise, the algorithm is clocked.

Note that unclocked time-local algorithms as defined above are unaware of the current time step i; they make the same deterministic decision every time for the same (local) input pattern. We can quantify the cost of not being aware of the current time step by comparing unclocked algorithms against the stronger model of clocked algorithms, which can make different decisions based on the current time step i.

Classic Models of Online and Distributed Algorithms.
Using the notion of unclocked time-local algorithms, we can characterize algorithms studied in prior work as follows; see also Figure 1. In what follows, T is a constant independent of the length n of the input:

[∞, ∞]-local: These are algorithms with access to the full input. In the context of online algorithms, these are usually known as offline algorithms, while in the context of distributed computing, these are usually known as centralized algorithms.

[∞, −1]-local: These are online algorithms in the usual sense. The output for a time step i is chosen based on inputs for all previous time steps up to the time step i − 1. This is an appropriate definition for the online file migration problem (Example 3.1): we need to decide where to move the file before we see the next request.

[∞, 0]-local: These are online algorithms with one unit of lookahead. The output for a time step i is chosen based on inputs up to the time step i. This is an appropriate definition for the online load balancing problem (Example 3.2): we can choose the machine once we see the parameters of the new job.

Table 1
Correspondence between time-local online algorithms and distributed graph algorithms on directed paths.

              Time-local online algorithms   Local distributed graph algorithms on directed paths
  Weakest     unclocked [T, T]-local         T rounds in the PN model [1, 2, 45]
              N/A                            T rounds in the LOCAL model [33, 39]
              clocked [T, T]-local           T rounds in the numbered LOCAL model
  Strongest   N/A                            T rounds in the supported LOCAL model [26, 41]

[T, T]-local:
These can be interpreted as T-round distributed algorithms in directed paths in the port-numbering model. In the port-numbering model, in T synchronous communication rounds, each node can gather full information about the inputs of all nodes within distance T from it, and nothing else. This is a setting in which it is interesting to study graph problems such as the maximum-weight independent set (Example 3.3) and the minimum-weight dominating set (Example 3.4).

New Models: Time-Local Online Algorithms.
Now we are ready to introduce the main objects of study for the present work:

unclocked [T, −1]-local: These are time-local algorithms with horizon T, i.e., online algorithms that make decisions based on only the T latest inputs.

unclocked [T, 0]-local: These are time-local algorithms with one unit of lookahead.

clocked [T, T]-local:
As we will see later, these algorithms are equivalent to T-round distributed algorithms in a restricted variant of the supported LOCAL model [26, 41].

clocked [T, −1]-local: These are clocked time-local algorithms that make decisions based on only the T latest inputs, but the decision may depend on the current time step i.

We note that there is nothing fundamental about these specific constants; for example, one could equally well study algorithms that choose y_i only after seeing inputs up to i + 7.

In this section, we discuss the connection between time-local online algorithms and local distributed graph algorithms on paths. Although the former deal with locality in the temporal dimension and the latter in the spatial dimension, we will see that these two worlds are closely connected. In particular, we show how to transfer results from distributed computing to the time-local online setting. We focus on two standard models with very different computational power: the anonymous port-numbering model (a weak model) and the supported
LOCAL model (a strong model). In the deterministic setting, the correspondence between these models and time-local online algorithms is summarized in Table 1.
Let G = (V, E) be a graph that represents the communication topology of a distributed system consisting of n nodes V = {v_1, …, v_n}. Each node v_i ∈ V corresponds to a processor, and the edges denote direct communication links between processors, i.e., any pair of nodes connected by an edge can directly communicate with each other. In this work, G will always be a path on n nodes with the set of edges given by E = {{v_i, v_{i+1}} : 1 ≤ i < n}.

Synchronous distributed computation.
We start with the basic synchronous message-passing model of computation. Let X and Y be the sets of input and output labels, respectively. The input is the vector x = (x_1, …, x_n) ∈ X^n, where x_i is the local input of node v_i. Initially, each node v_i only knows its local input x_i ∈ X.

The computation proceeds in synchronous rounds, where in each round t = 1, 2, …, all nodes in parallel perform the following in lock-step: send messages to their neighbors, receive messages from their neighbors, and update their local state. An algorithm has running time T if at the end of round T, each node v_i halts and declares its own local output value y_i. The output of the algorithm is the vector y = (y_1, …, y_n) ∈ Y^n.

Note that, since there is no restriction on message sizes, every T-round algorithm can be represented as a simple full-information algorithm: in every round, each node broadcasts all the information it currently has, i.e., its own local input and the inputs it has received from others, to all of its neighbors. After executing this algorithm for T rounds, each node has obtained all the information any T-round algorithm can. Thus, every T-round algorithm can be represented as a map from radius-T neighborhoods to output values.

The distributed computing literature has extensively studied the computational power of different variants of the above basic model of graph algorithms. The variants are obtained by considering different types of symmetry-breaking information: in addition to the problem-specific local input x_i ∈ X, each node v_i also receives some input z_i that encodes additional model-dependent symmetry-breaking information. We will now discuss four such models in increasing order of computational power. The correspondence between these models and time-local online algorithms is summarized by Table 1.

The port-numbering model PN on directed paths.
In the PN model [1, 2, 45] all nodes are anonymous, but the edges of G are consistently oriented from v_i towards v_{i+1} for all 1 ≤ i < n. The nodes know their degree and can distinguish between the incoming and outgoing edges. The orientation only serves as symmetry-breaking information; the communication links are bidirectional.

Any deterministic algorithm in this model corresponds to a map A : X^{2T+1} → Y such that the output of node v_i for 1 ≤ i ≤ n is

  y_i = A(x_{i−T}, …, x_i, …, x_{i+T}),

where we let x_j = ⊥ for any j < 1 or j > n (the ⊥ values are used in the scenarios where nodes near the endpoints of the path observe these endpoints). Note that this is exactly the definition of an unclocked [T, T]-local algorithm (Definition 3.5).
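As a concrete illustration, such a map can be evaluated on a padded input as follows. This is only a sketch; the window rule `local_max` is a toy example of our own, loosely in the spirit of the independent-set problem of Example 3.3, and `BOT` stands in for the placeholder ⊥.

```python
BOT = None  # stands for the placeholder symbol ⊥

def run_unclocked_local(A, T, x):
    """Evaluate an unclocked [T, T]-local algorithm A on input x.

    A maps a window (x_{i-T}, ..., x_i, ..., x_{i+T}) to an output;
    positions outside 1..n are replaced by the placeholder BOT.
    """
    padded = [BOT] * T + list(x) + [BOT] * T
    return [A(*padded[i:i + 2 * T + 1]) for i in range(len(x))]

# Toy window rule with horizon T = 1: output 1 iff the current input is a
# strict local maximum of its 3-window (BOT counts as minus infinity).
def local_max(left, mid, right):
    lv = float("-inf") if left is BOT else left
    rv = float("-inf") if right is BOT else right
    return 1 if mid > lv and mid > rv else 0

print(run_unclocked_local(local_max, 1, [3, 1, 4, 1, 5]))  # [1, 0, 1, 0, 1]
```

Note how every position is decided by the same map applied to its local window, which is exactly what makes the decisions of independently started copies consistent.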
The LOCAL model on directed paths.
In the LOCAL model [33, 39] each node receives the same information as in the port-numbering model PN, but in addition, each node v_i is also given a unique identifier ID(v_i) from the set {1, …, n^c} for some constant c ≥ 1; the nodes do not know n. Lower bounds for this model also hold in the weaker PN model.

The numbered LOCAL model on directed paths.
The numbered LOCAL model further assumes that the unique identifiers have a specific, ordered structure: node v_i is given the identifier ID(v_i) = i as local input in addition to the problem-specific input x_i ∈ X. That is, each node knows its distance from the start of the path. Any deterministic algorithm in this model corresponds to a map A : X^{2T+1} × N → Y such that the output of node v_i for 1 ≤ i ≤ n is

  y_i = A(x_{i−T}, …, x_i, …, x_{i+T}, i) = A_i(x_{i−T}, …, x_i, …, x_{i+T}).

Observe that this coincides with clocked [T, T]-local algorithms (Definition 3.5). This model is not something that, to our knowledge, has been studied in the distributed computing literature; the name “numbered LOCAL model” is introduced here. However, it is very close to another model, the so-called supported LOCAL model, which has been studied in the literature.
The supported LOCAL model on directed paths.
The supported LOCAL model [26, 41] is the same as the numbered LOCAL model, but each node is also given the length n of the path as local input. This would correspond to clocked [T, T]-local algorithms that also know the length of the input in advance but do not see the full input. This is the most powerful model, and hence, all impossibility results in this model also hold for all the previous models.
One of the key challenges in distributed graph algorithms is local symmetry breaking: two adjacent nodes in a graph (here: two consecutive nodes along the path) have isomorphic local neighborhoods but are expected to produce different outputs.

In distributed computing, a canonical example is the vertex coloring problem. Consider, for example, the task of finding a proper coloring with k colors. This is trivial in the supported and numbered models (node number i can simply output e.g. i mod 2 to produce a proper 2-coloring). However, the case of the PN model and the LOCAL model is a lot more interesting.

One can use simple arguments based on local indistinguishability [12, 45] to argue that such tasks are not solvable in o(n) rounds in the PN model. In brief, if two nodes have identical radius-T neighborhoods, then they will produce the same output in any deterministic PN-algorithm that runs in T rounds. For example, it immediately follows that k-coloring for any k requires Ω(n) rounds in the deterministic PN model.

Yet another idea one can exploit in the analysis of symmetry-breaking tasks is rigidity (or, put otherwise, the lack of flexibility); see e.g. [15, 17]. For example, 2-coloring is a rigid problem: once the output of one node is fixed, all other nodes have fixed their outputs. Informally, two nodes arbitrarily far from each other need to be able to coordinate their decisions, or otherwise there is at least one node between them that produces the wrong output. This idea can be used to quickly show that e.g. 2-coloring in the LOCAL model also requires Ω(n) rounds, and this holds even if we consider randomized algorithms (say, Monte Carlo algorithms that are supposed to work w.h.p.).

This leaves us with the case of symmetry-breaking tasks that are flexible. A canonical example is the 3-coloring problem.
Informally, one can fix the colors of any two nodes (sufficiently far from each other), and it is always possible to complete the coloring between them. While the 3-coloring problem requires Ω(n) rounds in the deterministic PN model, it is a problem that can be solved much faster in the deterministic LOCAL model and also in the randomized PN model: the Cole–Vishkin technique [21] can be used to do it in only O(log* n) rounds. However, what is important for us in this work is that this is also known to be tight [33, 35]: 3-coloring is not possible in o(log* n) rounds, not even if we use both unique identifiers and randomness.

Moreover, the same holds for all problems in which the task is to label a path with some labels from a constant-sized set Y, and arbitrarily long sequences of the same label are forbidden: no such problem can be solved in constant time in the PN or LOCAL model, not even if one has access to randomness [16, 33, 35, 36, 43].

We will soon see what all of this implies for us, but let us discuss one technicality first: symmetric vs. asymmetric horizons.
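As a side note, the Cole–Vishkin color-reduction step mentioned above can be sketched as follows for a directed path. This is a minimal sketch: the handling of the head node (which has no predecessor) is one common convention and an assumption on our part, and the final reduction from 6 to 3 colors is omitted.

```python
def cole_vishkin_round(colors):
    """One Cole-Vishkin color-reduction round on a directed path.

    colors[i] is node i's current color (initially its unique ID, so the
    coloring is proper).  Each node compares its color with its
    predecessor's, finds the lowest bit position k where they differ, and
    adopts 2*k + (its own k-th bit).  Adjacent colors provably stay
    distinct, while the number of bits per color roughly halves.
    """
    new = []
    for i, c in enumerate(colors):
        # The head node pretends to have a predecessor differing in bit 0.
        pred = colors[i - 1] if i > 0 else c ^ 1
        diff = c ^ pred
        k = (diff & -diff).bit_length() - 1  # lowest differing bit position
        new.append(2 * k + ((c >> k) & 1))
    return new

# Iterating shrinks any ID range down to the six colors {0, ..., 5} in
# O(log* n) rounds; a few extra shift-and-recolor rounds would then give
# a proper 3-coloring (not shown here).
colors = [12, 7, 29, 3, 18, 25]  # distinct IDs = initial proper coloring
while max(colors) > 5:
    colors = cole_vishkin_round(colors)
assert all(u != v for u, v in zip(colors, colors[1:]))
```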
While the standard models in distributed computing correspond to symmetric horizons ([T, T]-local algorithms) and the study of online algorithms is typically interested in asymmetric horizons (e.g. [T, −1]-local algorithms), the difference is not essential. Suppose that we have an [a, b]-local algorithm A for solving Π. Now for any constant c one can construct an [a + c, b − c]-local algorithm A′ that solves the same problem. In essence, node x_i in algorithm A′ simply outputs A(x_{i−a−c}, …, x_{i+b−c}). If one compares the outputs of A′ and A, we produce the same sequence of colors but shifted by c steps. This is the standard trick one uses to convert algorithms for directed paths into algorithms for rooted trees and vice versa; see e.g. [17, 40]. The only caveat is that we need to worry about what to do near the boundaries, but for our purposes the very first and the very last outputs are usually inconsequential (they can be handled by an ad hoc rule, or simply ignored thanks to the additive constant in the definition of the competitive ratio).

Hence, in essence, everything that we know about symmetry-breaking tasks in the context of [T, T]-local algorithms can be easily translated into equivalent results for [T′, −1]-local algorithms with T′ = 2T + 1, and vice versa.

So far we have discussed distributed graph problems in which the task is to find any feasible solution subject to some local constraints. However, especially in the context of online algorithms, we are usually interested in finding good solutions. Typical examples are problems such as the minimum dominating set problem and the maximum independent set problem.

These are not, strictly speaking, symmetry-breaking tasks. Nevertheless, it turns out to be useful to look at such tasks through the lens of symmetry breaking. In brief, the following picture emerges [23, 27, 36, 43]:
Deterministic O(1)-round LOCAL-model algorithms are not any more powerful than deterministic O(1)-round PN-model algorithms.

Randomized O(1)-round algorithms are strictly stronger than deterministic O(1)-round algorithms.

For example, if we look at the minimum dominating set problem in unweighted paths, the only possible deterministic O(1)-round PN-algorithm produces a constant output: all nodes (except possibly some nodes near the boundaries) are part of the solution. A deterministic O(1)-round LOCAL-algorithm can try to do something much more clever, with the help of unique identifiers, but a Ramsey-type argument [23, 27, 36] shows that it is futile: there always exists an adversarial assignment of unique identifiers such that the algorithm produces a near-constant output for all but ϵn many nodes, for an arbitrarily small ϵ >
0. However, randomized algorithms can do much better (at least on average); to give a simple example, consider an algorithm that first takes each node with some fixed probability 0 < p <
1, and then adds the nodes that were not yet dominated. Finally, in the numbered and supported models one can obviously do much better, even deterministically (simply pick every third node). This is now enough background on the most relevant results related to T-round algorithms in deterministic and randomized PN and LOCAL models.
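The simple two-phase randomized dominating-set algorithm mentioned above can be written out as follows. This is a sketch under our own naming: phase 1 flips the coins, phase 2 repairs domination, and both phases only use coin flips within distance 1 of each node.

```python
import random

def randomized_dominating_set(n, p, rng=random.random):
    """Two-phase randomized dominating set on a path v_1, ..., v_n.

    Phase 1: every node joins the set independently with probability p.
    Phase 2: every node that is not yet dominated (neither it nor a
    neighbor joined in phase 1) also joins.  Each node's final decision
    depends only on coin flips within distance 1, so the algorithm is
    local in the sense discussed in the text.
    """
    coin = [rng() < p for _ in range(n)]  # phase 1 coin flips
    in_set = list(coin)
    for i in range(n):
        # Node i is dominated if it or a neighbor joined in phase 1.
        dominated = any(coin[j] for j in range(max(0, i - 1), min(n, i + 2)))
        if not dominated:
            in_set[i] = True  # phase 2 repair
    return in_set
```

By construction the output is always a dominating set; the choice of p trades off the expected sizes of the two phases.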
It turns out to be highly beneficial to try to classify online problems in the above terms: whether there is a component that is equivalent to a symmetry-breaking task or to a nontrivial distributed optimization problem. This is easiest to explain through examples:
Online file migration (Example 3.1).
This problem is trivial to solve for a constant input; the same also holds for any input sequence that is strictly periodic. Indeed, if the adversary gives a long sequence of constant inputs (or follows a fixed periodic pattern), it only helps us. Hence none of the above obstacles are in our way; interesting inputs are sequences that already break symmetry locally. Furthermore, as we also know that this is a well-known online problem solvable with the full history, we would expect that there is also an unclocked time-local algorithm for solving the task, with a nontrivial competitive ratio. While this is a heuristic argument (based on the lack of specific obstacles), we will see in Section 8 that the argument works very well in this case.
Online load balancing (Example 3.2).
This problem is fundamentally different from the file migration problem. Let us assume that the algorithm needs to output the action (on which machine to schedule the current job). Consider an input sequence that consists of the constant value 2. In such a case, there is an optimal solution that alternately assigns the 2-unit jobs to the two machines, ensuring that the load of any machine at any time is exactly 1. But this means that an optimal algorithm has to turn the constant input 2, 2, 2, 2, … into a strictly alternating sequence like 1, 2, 1, 2, …. Any deviation from it will result at least momentarily in a load of 2. Hence in an optimal solution we need to at least solve the 2-coloring problem within each segment of such constant inputs. As we discussed, this is not possible in the PN or LOCAL model in O(1) rounds, not even with the help of randomness; it follows that there certainly is no optimal unclocked time-local online algorithm, with any constant horizon T. Optimal solutions have to resort to the clock.

However, this does not prevent us from solving the problem with a finite competitive ratio. Indeed, even the trivial solution that always outputs 1 will result in a maximum load that is at most 2 times as high as optimal.

Furthermore, if we were not interested in the maximum load but the average load, we would arrive at a task that is, in essence, a distributed optimization problem. Unclocked randomized time-local algorithms may then have an advantage over unclocked deterministic time-local algorithms, and indeed this turns out to be the case here: simply choosing the machine at random is already better on average than assigning all tasks to the same machine.

On a more general level, the above discussion also leads to the following observation: the definition of unclocked time-local algorithms is robust.
As defined, it coincides with the PN model, but even if one tried to strengthen it so that its expressive power was closer to the LOCAL model, very little would change in terms of the results. Conversely, if one weakened the clocked model so that e.g. the clock values are not increasing by one but are only a sequence of monotone, polynomially-bounded timestamps, we would arrive at a model very similar to the
LOCAL model, and as we have seen above, time-local algorithms in such a model cannot solve symmetry-breaking tasks any better than in the unclocked model. Hence in order to capture the idea of a model that is strictly more powerful than the unclocked model, it is not sufficient to have a definition in which the clock values are merely monotone and polynomially bounded, but one has to further require e.g. that the clock values increase at each step by at most a constant. (Such a model with constant-bounded clock increments would indeed be a meaningful alternative, and it would fall in its expressive power strictly between our unclocked and clocked models. It would be strong enough to solve 3-coloring but not strong enough to solve 2-coloring in a time-local fashion. We do not explore this variant further, but it may be an interesting topic for further research, especially when comparing its power with randomized PN algorithms.)

In this section, we examine the power of clocked time-local algorithms. First, we show that clocked time-local algorithms can be powerful, despite their limited access to the input: for many problems, competitive classic online algorithms can be converted into competitive clocked time-local algorithms. Second, we complement the first result by giving an example of a classic online problem that does not admit a competitive clocked time-local algorithm, despite having competitive classic online algorithms.
We now show that for a large class of online problems the following result holds: if the problem admits a deterministic classic online algorithm with competitive ratio c, then for any given constant ε > 0 there is a clocked time-local algorithm with competitive ratio (1 + ε)c for some constant horizon T. The proof follows a similar structure as the constructive derandomization proof of Ben-David et al. [8, Section 4] for classic online algorithms: we chop the input sequence into short segments and show that under certain assumptions both the offline and competitive online algorithms pay roughly the same cost. However, some care is needed to adapt the proof strategy, as in the case of time-local algorithms, we can only use constant-size segments.

Bounded Minimization Games.
We now define the class of request-answer games for which we prove our result. A (minimization) game is monotone if for all n ∈ N,

  f_{n+1}(x · a, y · b) ≥ f_n(x, y)  for all x ∈ X^n, y ∈ Y^n, a ∈ X, b ∈ Y.

That is, the cost cannot decrease when extending the input-output sequence. We say that a monotone game has bounded delay if for every h ∈ R the set

  L(h) = { x ∈ ⋃_{n≥1} X^n : opt(x) ≤ h }

is finite (sometimes this property is called locality [8]). That is, there cannot be arbitrarily long sequences of a fixed cost: eventually the cost of any sequence must increase. Finally, the diameter of the game is

  D = sup { | f(x · x′, y · y′) − f(x, y) − f(x′, y′) | : (x, y), (x′, y′) ∈ ⋃_{n>0} X^n × Y^n }.

We define that a bounded monotone minimization game is a monotone minimization game that has bounded delay, finite diameter, and finite input set X. Note that bounded monotone minimization games are not necessarily local optimization problems as defined in Section 3.1. The latter are monotone games with finite diameter, but they do not necessarily have bounded delay. The following result holds for deterministic algorithms:

▶ Theorem 5.1.
Let F be a bounded monotone minimization game. If there exists an online algorithm A with competitive ratio c for F, then for any constant ε > 0 there exist some constant T and a clocked T-time-local algorithm B with competitive ratio (1 + ε)c for F.

Proof.
Since A has competitive ratio c, there exists some constant d such that for every input x the output y = A(x) satisfies f(x, y) ≤ c · opt(x) + d. Let D be the diameter of the game and fix

  δ = 2εc / (c + 2)  and  H = ((2 + δ)/δ) · max{d, D}.

Since the game has bounded delay, we have that

  L(H) = { x : opt(x) ≤ H }  and  T = max{ k + 1 : (x_1, …, x_k) ∈ L(H) }

are finite. Note that T is independent of n, as it only depends on H. Observe that since the cost functions are monotone, for all n ≥ T any input sequence x ∈ X^n satisfies opt(x) ≥ H.

We can now construct the clocked time-local algorithm that only sees the T latest inputs and the total number of requests served so far. Let A = (A_i)_{i≥1} be the classic online algorithm. The clocked time-local algorithm B is given by the sequence (B_j)_{j≥1}, where

  B_{Tk+i}(z_1, …, z_T) = A_i(z_1, …, z_i)  for 1 ≤ i ≤ T and k ≥ 0.

That is, the clocked time-local algorithm B simulates the classic online algorithm A by resetting it every time T inputs have been served since the last reset.

We now analyze the clocked time-local algorithm B. For any n ∈ N, let x ∈ X^n be some input sequence and y ∈ Y^n be the output of B on the input sequence x. Let x(1), …, x(k) be the subsequences of x, where x(1) denotes the first T inputs, x(2) denotes the next T inputs, and so on. Define the shorthand C(i) = opt(x(i)) for each 1 ≤ i ≤ k. Note that C(i) ≥ H for each 1 ≤ i < k. The last subsequence x(k) may consist of fewer than T inputs, so we have no lower bound for C(k). For 1 ≤ i < k, we get that

  C(i) − D ≥ (2/(2 + δ)) · C(i)  and  D + d ≤ (2δ/(2 + δ)) · C(i)

by applying the fact that C(i) ≥ H and the definition of H.

By repeatedly applying the definition of diameter, we get that the optimum offline solution is lower bounded by

  opt(x) ≥ C(1) + Σ_{i=2}^{k} (C(i) − D) ≥ Σ_{i=1}^{k} (C(i) − D) ≥ (2/(2 + δ)) Σ_{i=1}^{k} C(i).

Since A has competitive ratio c, the output of B has cost

  f(x, B(x)) ≤ c · C(1) + d + Σ_{i=2}^{k} (c · C(i) + d + D) ≤ (c + 2δ/(2 + δ)) Σ_{i=1}^{k} C(i) + d.

Now using the lower bound on opt(x) and the definition of δ, we get that the output of B has cost bounded by

  f(x, B(x)) ≤ (c + 2δ/(2 + δ)) Σ_{i=1}^{k} C(i) + d
            ≤ (c + 2δ/(2 + δ)) · ((2 + δ)/2) · opt(x) + d
            = (c(1 + δ/2) + δ) · opt(x) + d = (1 + ε)c · opt(x) + d. ◀

If the assumptions of Theorem 5.1 are not satisfied, how badly can clocked time-local algorithms perform? We now give an example of an online problem for which no time-local algorithm is competitive, despite the existence of competitive classic online algorithms: the online caching problem [42]. In this problem, the input set X = {1, …, m} coincides with the set of elements that can be stored in the cache of size k, and the set Y = { A ⊆ X : |A| ≤ k } of outputs corresponds to the set of files currently stored in the cache. The result holds already for a universe of size m = 3 and a cache of size k = 2.

▶ Theorem 5.2.
There is no deterministic clocked time-local algorithm for online caching with cache size k = 2 that has a finite competitive ratio.

Proof.
Consider the caching problem with the cache size k = 2 and a universe of m = 3 elements X = {a, b, c}. Let A be any clocked time-local algorithm with horizon T = O(1). We say that A is decisive if on the infinite sequence a^∗ there is some t such that y_{t′} = y_t for all t′ > t. Otherwise, A is indecisive; note that any indecisive deterministic algorithm A must be a clocked algorithm.

We distinguish two cases. First, suppose that A is indecisive and consider the input sequence family I := { a^L : L ∈ N }. Intuitively, an indecisive algorithm changes its output infinitely many times on the infinite sequence consisting only of requests to a. Clearly, the optimal offline solution on any input x ∈ I is constant. However, since A is indecisive, A may incur an arbitrarily large cost on requests from this family, due to an arbitrary number of changes in the output, each causing a cache reconfiguration. Thus, any indecisive algorithm has an unbounded competitive ratio.

Second, suppose that A is decisive. A crucial observation is that for a fixed deterministic time-local algorithm, the following holds: if two inputs share a subsequence a^T at a time step τ, then A produces the same output at τ on both. Therefore, a decisive algorithm eventually settles on a fixed output on a^∗, and consequently it also eventually settles on a fixed output whenever its visible horizon is a^T, on any input. Formally, there exists a time τ so that A after time τ always outputs a fixed configuration when faced with T many requests to a. We refer to this configuration as the default configuration, and w.l.o.g. we assume that the default configuration is {a, b}.

Consider the input sequence family I′ := { (a^T c)^L : L ∈ N }. Starting from τ, A outputs the default configuration {a, b} after seeing any sequence of T requests to a.
Hence, after τ, each request to c incurs a cost of at least 1, and this accumulated cost can be arbitrarily large (it grows with the length of the sequence). The optimal solution for all inputs in I′ is to move to the configuration {a, c} at the beginning, where all requests are free; hence the optimal offline solution incurs a constant cost. Thus, A is not finitely competitive on I′. ◀

We note that the above result relies upon a specific, yet natural, choice of the request and answer set in our formulation, coinciding with the configuration set. However, other encodings may admit competitive time-local algorithms. For a discussion on the choice of the request and answer set and their influence on the competitive ratio, see Section 8.4. We note that similar arguments can establish the hardness of a wide range of metrical task systems, and many k-server problem variants. Moreover, the theorem can be strengthened to show intractability for a wider class of randomized clocked time-local online algorithms; the reasoning is equivalent to the proof of Theorem 8.1.

In this section, we define randomized time-local algorithms. In the classic online setting, there are two equivalent ways of describing randomized algorithms: at the start, randomly sample an algorithm from a set of deterministic algorithms, or at each step, make a random decision based on coin flips. The former corresponds to mixed strategies, where we sample all random bits used by the algorithm before seeing any of the input, whereas the latter corresponds to behavioral strategies, where the algorithm generates random bits along the way as it needs them.
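The first direction of this equivalence in the classic setting can be sketched as follows. This is a toy illustration with names of our own: sampling all random bits up front fixes one deterministic online algorithm, which is then run on the input.

```python
import random

def toy_behavioral(history, r):
    """A toy behavioral strategy: the output at step i depends on the
    input prefix seen so far and on a fresh random bit r for that step."""
    return (sum(history) + r) % 2

def as_mixed_strategy(behavioral_alg, n, seed):
    """Simulate a behavioral strategy by a mixed one: sample ALL random
    bits r_1, ..., r_n in advance, which fixes a single deterministic
    online algorithm; return that deterministic algorithm."""
    rng = random.Random(seed)
    bits = [rng.randrange(2) for _ in range(n)]

    def deterministic_online(x):
        # Classic online algorithm: step i sees the prefix x[:i+1] and
        # uses the pre-sampled bit bits[i] instead of a fresh coin flip.
        return [behavioral_alg(x[:i + 1], bits[i]) for i in range(len(x))]

    return deterministic_online
```

Note that `deterministic_online` stores the pre-sampled bits in memory across steps; an unclocked time-local algorithm has no such memory, which is exactly why mixed and behavioral strategies come apart in the time-local setting discussed next.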
Mixed vs. Behavioral Strategies in Time-Local Algorithms.
The above two characterizations are equivalent in classic online algorithms [13]: to simulate a behavioral strategy with a mixed strategy, we can generate an infinite sequence (r_i)_{i≥1} of random bit strings in advance and use the random bits given by r_i in step i. Conversely, we can choose to flip coins only at the beginning, store the outcomes in memory, and refer to them consistently at later steps.

In contrast, for time-local algorithms, the behavioral and mixed strategies differ in the way we can exploit randomness, and each type of strategy brings distinct advantages. If we use a behavioral strategy, at each step the algorithm can make coin flips that are independent of the previous coin flips. This enables algorithmic strategies that can e.g. break ties in an independent manner in successive steps. If we use a mixed strategy, we commit to a randomly chosen (consistent) strategy: the initial random choice influences all outputs. In a sense, in time-local algorithms, behavioral strategies correspond to private randomness available at each step i, whereas mixed strategies correspond to shared randomness across the whole sequence. Interestingly, in the time-local setting, it is also natural to consider a combination of both: we choose a behavioral time-local strategy at random.

With this in mind, we arrive at three natural definitions of randomized time-local algorithms: behavioral strategy time-local algorithms, mixed strategy time-local algorithms, and general strategy time-local algorithms that use a combination of both. We now give formal definitions for each class of randomized time-local algorithms.

▶ Definition 6.1 (behavioral local algorithms). A behavioral [a, b]-local algorithm is given by a sequence of maps (A_i)_{i≥1} of the form A_i : X^{a+1+b} × [0, 1] → Y, where the output is given by y_i = A_i(x_{i−a}, …, x_{i+b}, r_i), where (r_i)_{i≥1} is a sequence of i.i.d.
real values sampled uniformly from the unit range. If A_i = A_j for all i, j, then the algorithm is unclocked. Otherwise, it is clocked.

▶ Definition 6.2 (mixed local algorithms). Let D be a nonempty set of (deterministic) [a, b]-local algorithms. A mixed [a, b]-local algorithm over D is a probability measure A : D → [0, 1] over D. The output of A on input x is the random vector y = P(x), where P is a deterministic time-local algorithm sampled from D according to A. If D is a subset of all unclocked [a, b]-local algorithms, then A is unclocked. If D is a subset of all clocked [a, b]-local algorithms, then A is clocked.

▶ Definition 6.3 (general randomized local algorithms). A general randomized unclocked [a, b]-local algorithm is a mixed [a, b]-local algorithm over the set of unclocked behavioral [a, b]-local algorithms.

Observe that in the case of clocked algorithms, general randomized time-local algorithms coincide with mixed clocked time-local algorithms, as the former can be simulated by the latter. To this end, we can generate an infinite sequence (r_i)_{i≥1} of random bit strings in advance, store them in the functions A_i of the deterministic clocked [a, b]-local algorithms, and use the random bits given by r_i in step i. However, this is not the case with unclocked algorithms: as time-local algorithms do not have memory to store past random outcomes, it is impossible to directly simulate mixed time-local algorithms by a behavioral time-local algorithm that flips coins only at the beginning.

Adversaries and the Expected Competitive Ratio.
We naturally extend the notion of competitiveness to randomized algorithms. For randomized algorithms, the answer sequence and the cost of an algorithm are random variables. We will abuse the notation slightly to let y = A(x) denote the random output generated by a randomized algorithm A on input x. We say that a randomized online algorithm A for a game defined with cost functions (f_n)_{n≥1} is c-competitive if

  E[f_n(x, A(x))] ≤ c · opt(x) + d

for any input sequence x and a fixed constant d. The input sequence and the benchmark solution opt are generated by an adversary. We distinguish between notions of competitiveness against various adversaries, having different knowledge about A and different knowledge while producing the solution opt. Competitive ratios for a given problem may vary depending on the power of the adversary.

An oblivious offline adversary must produce an input sequence in advance, merely knowing the description of the algorithm it competes against (in particular, it may have access to the probability distributions that the algorithm uses, but not the random outcomes), and pays an optimal offline cost for the sequence. An adaptive online adversary produces an input sequence based on the actions of the algorithm, and serves this request sequence online. An adaptive offline adversary produces an input sequence based on the actions of the algorithm, and pays an optimal offline cost for the sequence. For a comprehensive overview of adversary types, see [13].

Later in this paper, we present time-local algorithms that are competitive against the oblivious offline adversary (cf. Section 8.2) and the adaptive online adversary (cf. Section 8.4). We note an interesting question regarding the adaptive offline adversary in the time-local setting. A well-known result in classic online algorithms states that if there exists a c-competitive randomized algorithm against it, then there exists a deterministic c-competitive algorithm, for any c [8].
Does the existence of a competitive randomized time-local algorithm against the adaptive offline adversary imply the existence of a competitive deterministic time-local algorithm?

In this section, we describe a technique for the automated design of time-local algorithms for local optimization problems. The technique allows us to automatically obtain both upper and lower bounds for unclocked time-local algorithms. In particular, for deterministic algorithms, we can synthesize optimal algorithms. We also discuss how to extend our approach to randomized algorithms. As our case study problem, we use the simplified variant of online file migration.
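To make the search space concrete: over finite input and output sets, an unclocked time-local algorithm is just a lookup table over the possible windows of the last T inputs, and synthesis can in principle enumerate all such tables. The following sketch (our own illustration with binary inputs and outputs as in the file migration case study; all names are ours, not the paper's implementation) builds this candidate space:

```python
from itertools import product

T = 2  # length of the input horizon

# An unclocked time-local algorithm A: X^T -> Y is a lookup table over all
# windows of the last T inputs (here X = Y = {0, 1}).
windows = list(product([0, 1], repeat=T))

def run(table, inputs):
    """Run a time-local algorithm, given as {window: output}, over an input
    stream; the output at each step depends only on the last T inputs."""
    outputs = []
    for i in range(T - 1, len(inputs)):
        window = tuple(inputs[i - T + 1 : i + 1])
        outputs.append(table[window])
    return outputs

# The synthesis search space: all |Y|^(|X|^T) candidate algorithms.
candidates = [dict(zip(windows, outs))
              for outs in product([0, 1], repeat=len(windows))]
assert len(candidates) == 2 ** (2 ** T)  # 16 candidates for T = 2
```

For instance, the table mapping every window to its last entry is the "follow the last request" candidate; running it with `run` shows that two streams agreeing on their last T entries always produce the same next output.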
We now assume that the input and output sets X and Y are finite. Recall that an unclocked time-local algorithm that has access to the last T inputs is given by a map A: X^T → Y. The synthesis task is as follows: given the length T ∈ N of the input horizon, find a map A that minimizes the competitive ratio. For simplicity of presentation, we ignore short instances of length n < T, as short input sequences do not influence the competitive ratio.

The Synthesis Method.
The high-level idea of our synthesis approach is simple:
1. Iterate through all algorithm candidates in the set A = {X^T → Y}.
2. Compute the competitive ratio c(A) for each algorithm A ∈ A.
3. Choose the algorithm A that minimizes the competitive ratio.
Given that the input and output sets X and Y are finite, the set A of algorithms is also finite: there are exactly |Y|^(|X|^T) algorithms we need to check.

Evaluating the Competitive Ratio.
Obviously, the challenging part is implementing the second step, i.e., computing the competitive ratio of a given algorithm A. A priori, it may seem that we would need to consider infinitely many input strings in order to determine the competitive ratio of the algorithm. However, for any local optimization problem Π with finite input and output sets, it turns out that we can capture the competitive ratio by analyzing a finite combinatorial object.

We show that for any time-local algorithm A, we can construct a finite weighted, directed graph G(Π, A) that captures the costs of output sequences as walks in G(Π, A). The cost of any unclocked time-local algorithm on adversarial input sequences can be obtained by evaluating the weights of the cycles of this graph G(Π, A).

We now describe how to construct the graph G(Π, A) for a given local optimization problem Π and an unclocked local algorithm A. For the sake of simplicity, we only consider the sum aggregation function; the construction for the min and max aggregation functions is analogous. Let r ∈ N be the horizon of the local optimization problem Π, v the valuation function of Π, and A: X^T → Y the unclocked time-local algorithm. To avoid unnecessary notational clutter, we describe the construction for r = 1; the construction is straightforward to generalize.

The Dual de Bruijn Graph.
We construct a directed graph G = (V, E) on the set of vertices V = X^T × Y. For any x = (x_1, ..., x_k), we define s(x, a) = (x_2, ..., x_k, a) to be the successor of x on a. For each vertex (a, y) ∈ V, each input x ∈ X, and each output y′ ∈ Y, there is a directed edge towards the vertex (a′, y′) ∈ V, where a′ = s(a, x). Note that this graph contains self-loops.

The idea is that for any sufficiently long input length n ≥ T, an input sequence x ∈ X^n and an output sequence y ∈ Y^n define a walk ρ(x, y) in the graph G: after time step i ≥ T, we are at the vertex (x_{i−T+1}, ..., x_i, y_i) ∈ V, and the next vertex is (x_{i−T+2}, ..., x_{i+1}, y_{i+1}) ∈ V. In particular, from any walk ρ we can obtain the following sequences:
- an input sequence x(ρ) = (x_1, ..., x_n) ∈ X^n,
- some (possibly optimal) solution y*(ρ) = (y_1, ..., y_n) for x(ρ), and
- the output y(ρ) = A(x(ρ)) given by the algorithm on x(ρ).
Vice versa, any pair of an input sequence x and an output sequence y* defines a walk ρ(x, y*) in G.

Assigning the Costs.
For each edge e ∈ E of the graph, we assign two costs: the first describes the cost paid by some (possibly optimal) output, and the second the cost paid by the algorithm A. Recall that for a local optimization problem Π, the costs are given by the local cost function v: X^{r+1} × Y^{r+1} → R ∪ {∞}. For the case r = 1, the function v takes 4 parameters.

Consider an edge e = ((a, b), (a′, b′)) ∈ E, where a′ = (a_2, ..., a_T, x) for some x ∈ X. We now define the adversary cost w(e) and the algorithm cost q(e) of the edge e:

w(e) = v(a_T, x, b, b′) is the cost paid by the output b′ on input x,
q(e) = v(a_T, x, A(a), A(a′)) is the cost paid by the output of the algorithm on input x,

where v is the valuation function of the problem Π. We note that the construction generalizes to arbitrary r > 1: we take V = X^{T+r} × Y^r to accommodate the larger horizon used by the local cost function.

The Cost Ratio of a Walk.
Finally, for any walk ρ = (v_1, ..., v_k) in G, we define

w(ρ) = Σ_{i=1}^{k−1} w(v_i, v_{i+1}),    q(ρ) = Σ_{i=1}^{k−1} q(v_i, v_{i+1}).

Here w(ρ) and q(ρ) are the total adversary and algorithm costs of the walk ρ. The cost ratio of a walk ρ is defined as

r(ρ) = q(ρ)/w(ρ) if w(ρ) > 0,    r(ρ) = 1 if q(ρ) = w(ρ) = 0,    and r(ρ) = ∞ otherwise.

That is, on input x(ρ) the algorithm A pays a cost of q(ρ) + O(1), whereas the optimum solution has cost at most w(ρ) + O(1); there is a constant overhead on the costs, since we ignore the costs incurred during the first T − r = O(1) inputs.

Bounding the Competitive Ratio.
We now show that we can compute the competitive ratio of the algorithm A using the graph G = G(Π, A). We say a walk ρ = (v_1, ..., v_k) ∈ V^k is closed if it starts and ends in the same vertex, v_1 = v_k. A directed cycle is a closed walk that is non-repeating, i.e., v_i ≠ v_j for all 1 ≤ i < j < k.

▶ Theorem 7.1.
The competitive ratio of algorithm A is max{r(ρ) : ρ is a directed cycle of G}.

Figure 3 gives an example of the dual de Bruijn graph for the online file migration problem (Example 3.1) and an algorithm with local horizon T = 2. To prove the above theorem, we introduce three lemmas and the following definitions. A closed extension of a walk ρ is a closed walk ρ′ that contains ρ as a prefix. A subwalk ρ′ of a walk ρ = (v_1, ..., v_k) is a subsequence (v_i, ..., v_j) for some 1 ≤ i ≤ j ≤ k. A decomposition of ρ into L subwalks is a sequence of subwalks ρ_1, ..., ρ_L of ρ such that their concatenation is ρ = ρ_1 ⋯ ρ_L.

▶ Lemma 7.2.
Let ρ be a walk in G. For any decomposition of ρ into L subwalks ρ_1 ⋯ ρ_L, there exists some 1 ≤ i ≤ L such that r(ρ_i) ≥ r(ρ).

Proof.
Let π be a permutation of {1, ..., L} and τ_i = ρ_{π(i)} such that

r(τ_1) ≤ r(τ_2) ≤ ⋯ ≤ r(τ_L).

Moreover, for 1 ≤ i ≤ L we define r(τ_i) = q_i/w_i, where q_i = q(τ_i) and w_i = w(τ_i). We use the shorthands Q(i) = Σ_{j=1}^{i} q_j and W(i) = Σ_{j=1}^{i} w_j. Note that the aggregate cost ratio for a local optimization problem using the sum aggregation function gives

r(ρ) = Q(L)/W(L) = (Σ_{j=1}^{L} q_j)/(Σ_{j=1}^{L} w_j).

We now show by induction that for all 1 ≤ i ≤ L we have

r(τ_i) = q_i/w_i ≥ Q(i)/W(i).

Observe that this implies r(ρ_{π(L)}) = r(τ_L) ≥ Q(L)/W(L) = r(ρ). The base case i = 1 is vacuous. For the inductive step, assume that the claim holds for some 1 ≤ i < L. For the sake of contradiction, assume that the claim does not hold for i + 1, i.e.,

r(τ_{i+1}) = q_{i+1}/w_{i+1} < Q(i + 1)/W(i + 1).

By rearranging the terms, we get

(Q(i + 1) w_{i+1} − W(i + 1) q_{i+1}) / W(i + 1) > 0,

which in turn implies that Q(i + 1) · w_{i+1} > W(i + 1) · q_{i+1} holds. Now observing that

Q(i) w_{i+1} + q_{i+1} w_{i+1} = Q(i + 1) w_{i+1} > W(i + 1) q_{i+1} = W(i) q_{i+1} + w_{i+1} q_{i+1},

we get that

r(τ_{i+1}) = q_{i+1}/w_{i+1} < Q(i)/W(i) ≤ q_i/w_i = r(τ_i),

where the second inequality follows from the induction assumption. However, this contradicts the fact that τ_1, ..., τ_L were ordered according to increasing cost ratio. ◀

[Figure 3: Dual de Bruijn graph for T = 2, together with an example algorithm candidate (windows ending in 00, 01, or 10 map to output 0; windows ending in 11 map to output 1). The highlighted cycle shows how an adversary can force the candidate algorithm to pay 3 + 2α when the optimum pays only 1; hence this specific time-local algorithm cannot be better than (3 + 2α)-competitive.]

▶ Lemma 7.3.
Let ρ be a directed cycle in G. The competitive ratio of A is at least r(ρ).

Proof.
Recall that the cycle defines an input sequence x = x(ρ). By definition, the algorithm has cost at least q(ρ) on this input sequence, whereas the optimum solution has cost at most w(ρ) + d for some constant d. Thus, the algorithm has a cost of at least q(ρ) = r(ρ) w(ρ) ≥ r(ρ)(opt(x) − d) = r(ρ) · opt(x) − O(1). ◀

▶ Lemma 7.4.
If the competitive ratio of A is greater than c + ε for some ε > 0, then there exists a directed cycle ρ in G with cost ratio r(ρ) > c.

Proof.
For any given walk ρ in G, let ρ̂ be a shortest closed extension of ρ; there may be multiple shortest closed extensions, so we pick one that minimizes the adversary cost of y*(ρ̂). We let ρ̂ \ ρ denote the suffix of ρ̂ that satisfies ρ̂ = ρ · (ρ̂ \ ρ). Define

δ = max{w(ρ̂ \ ρ) : ρ is a walk in G}.

Note that δ is a constant, since ρ̂ is a minimal closed extension of ρ and G is finite.

Let x be an input sequence and y* an optimal output sequence. For the walk ρ = ρ(x, y*), we have

r(ρ̂) = q(ρ̂)/w(ρ̂) ≥ (q(ρ) + q(ρ̂ \ ρ))/(w(ρ) + w(ρ̂ \ ρ)) ≥ (q(ρ) + δ)/(w(ρ) + δ),

since w(ρ) ≤ q(ρ), as the cost of the algorithm is never less than the cost of the optimal solution y*. Asymptotically, as the length of the walk goes to infinity, we have r(ρ̂) = r(ρ) − o(1). In particular, for any constant ε_0 > 0 there exists n_0 such that for all input sequences x of length n ≥ n_0, the walk ρ = ρ(x, y*) given by x and the optimal output sequence y* satisfies r(ρ̂) ≥ r(ρ) − ε_0.

By assumption, A has a competitive ratio of at least c + ε. We can pick a sufficiently long input sequence x and an optimal solution y* such that ρ = ρ(x, y*) satisfies

r(ρ̂) ≥ r(ρ) − ε_0 ≥ f_n(x, A(x))/opt(x) − ε′ − ε_0 ≥ c + ε − ε′ − ε_0 > c,

where f_n(x, A(x)) denotes the cost of the algorithm A on input x, and ε_0 and ε′ are appropriately chosen constants with ε_0 + ε′ < ε. Thus, we have obtained a closed walk ρ̂ with r(ρ̂) > c. Since we can decompose ρ̂ into a sequence ρ̂_1, ..., ρ̂_K of directed cycles, by applying Lemma 7.2 we get that some directed cycle ρ̂_i satisfies r(ρ̂_i) > c, as claimed. ◀

Proof of Theorem 7.1.
The two lemmas above yield that the competitive ratio of A is
- at least as large as the cost-ratio of some directed cycle in G (Lemma 7.3), and
- at most as large as the cost-ratio of some directed cycle in G (Lemma 7.4).
Thus, the directed cycle with the highest cost-ratio determines the competitive ratio of the algorithm A. Since the graph G is finite, it suffices to check all directed cycles of G to determine the competitive ratio of A. ◀

We now consider the case study problem of online file migration with X = Y = {0, 1} and α > 0. Recall that Figure 3 gives an example of the graph G for this problem for T = 2. First, we discuss some optimizations and extensions to the synthesis of randomized algorithms. Finally, we give an overview of results obtained using the synthesis framework, including optimal synthesized algorithms.

We discuss a few techniques for speeding up the synthesis for our case study problem of online file migration. We can reduce the amount of computation needed to find the best algorithm A for a fixed T and α by eliminating some algorithms early: often, some simple property of G immediately disqualifies an algorithm candidate.

The Role of Self-Loops.
If the competitive ratio of A is K, then the cost-ratio of every directed cycle has to be at most K. In particular, the cost-ratio of every directed cycle has to be finite, so we can directly eliminate all candidates for which there is a cycle ρ with adversary-cost w(ρ) = 0 and positive algorithm-cost q(ρ) > 0. For example, we can apply this reasoning to the self-loops of the graph G: if the adversary-cost of a self-loop is zero, then the algorithm-cost of the same loop has to be zero as well. It follows, e.g., that we must have A(0, ..., 0) = 0 and A(1, ..., 1) = 1 for any competitive algorithm A. In the case of T = 3, this reduces the number of algorithms that need to be checked from 2^8 = 256 to only 2^(8−2) = 2^6 = 64.

Detecting Heavy Cycles.
When searching for algorithms with the best competitive ratio, it is useful to keep track of the best cost-ratio found so far: when checking a new algorithm candidate A and its corresponding graph G, we can first check small cycles of length at most L to see if any such cycle has a cost-ratio larger than the best cost-ratio found for any other algorithm so far. If we encounter a cycle ρ with cost-ratio r(ρ) greater than or equal to the competitive ratio of some previously considered algorithm A′, then we know that the competitive ratio of A is at least that of A′. Thus, we can immediately disregard A and move on to the next algorithm candidate. Indeed, it turns out that in many cases, cycles with a large cost-ratio are already found when examining only short cycles. However, if no high-cost short cycle is found, we can always fall back to an exhaustive search that checks all cycles.

We note that we can extend our approach to the synthesis of randomized algorithms as well. Here, the synthesis bounds the expected competitive ratio of the algorithm against an oblivious adversary. We consider the synthesis of randomized behavioral algorithms (cf. Section 6). Synthesis of mixed algorithms would correspond to finding a good probability distribution over the finite set of deterministic algorithms, but we restrict our attention to behavioral algorithms.

In the case of deterministic algorithms, we considered maps A: {0, 1}^T → {0, 1}. Now we consider maps A: {0, 1}^T → [0, 1], where A(a) gives the probability that A outputs 1 upon seeing the sequence a ∈ X^T of the last T inputs. Thus,

A(a) = Pr[A outputs 1 on input a ∈ X^T],
1 − A(a) = Pr[A outputs 0 on input a ∈ X^T].
We assign the algorithm cost q(e) for an edge e from (a, y) to (s(a, x), y′) as follows. On a mismatch, the algorithm pays in expectation

q_mismatch(e) = 1 − A(a) if x = 1, and q_mismatch(e) = A(a) otherwise.

The expected switching cost is

q_switch(e) = α · [A(a) · (1 − A(s(a, x))) + (1 − A(a)) · A(s(a, x))].

The total cost is q(e) = q_mismatch(e) + q_switch(e). We calculate the adversary-cost in the same manner as in the deterministic model; that is, our adversary always outputs 0 or 1 (but not fractional values). The graph G has the same structure as in the deterministic case. Since there are uncountably many possible randomized algorithms A for any T, we discretize the probability space into finitely many segments. Thus, we cannot guarantee that we find optimal randomized algorithms. Nevertheless, this method can be used to obtain synthesized algorithms that beat the deterministic ones.

We now give some results for the online file migration problem obtained using the synthesis approach. First, we discuss algorithms with small values T = 1, 2, 3, and then the cases T = 4 and T = 5.

T = 1, 2, 3. Table 2 summarizes the results for T = 1, 2, 3 and 0.1 ≤ α ≤ 1.6. For deterministic algorithms, we list the competitive ratios of the optimal deterministic algorithms for the given values of the parameters T and α. For randomized algorithms, we list the best competitive ratios found by the synthesis method for the given values of T and α. As discussed, the search for randomized algorithms was conducted in a discretized search space, so some randomized algorithms with better competitive ratios may have been missed by the search method.

The Power of Randomness.
Note that already with T = 2 we can obtain algorithms with strictly better competitive ratios when randomness is used. Moreover, with only T = 3, we obtain randomized algorithms with competitive ratio below 3 for α = 1, i.e., better than any (non-time-local) deterministic online algorithm for α = 1. Table 3 gives an example of such an algorithm, which achieves a competitive ratio of roughly 2.67 for T = 3 and α = 1. After checking all cycles in the constructed dual de Bruijn graph, the cycle with the maximum cost-ratio (about 2.67) happens to be the following:

Last T inputs: 000, adversary output: 0.
Last T inputs: 001, adversary output: 0.
Last T inputs: 011, adversary output: 0.
Last T inputs: 110, adversary output: 0.
Last T inputs: 100, adversary output: 0.

T = 4. For T = 4, we can obtain better deterministic algorithms than with T = 3. Interestingly, we find several optimal algorithms for the case α = 1: even a full-history deterministic online algorithm cannot achieve a better competitive ratio. Table 4 lists all the 3-competitive algorithms that exist for the parameter values T = 4, α = 1. This shows that even very simple time-local algorithms can perform well compared to classic online algorithms. Table 2 contains some of the results for T = 4 and 0.1 ≤ α ≤ 1.6.

T = 5. Since the number of cycles to be checked increases exponentially in T, we were not able to obtain any positive results for the case T = 5. However, negative results could still be obtained, since verifying a given lower bound does not require checking all cycles for all algorithms: it suffices to find one cycle with a large enough cost-ratio to disregard an algorithm and move on to the next one. We get the following results:

▶ Observation 7.5.
With parameter values α = 1 and T = 5, the best competitive ratio remains 3. That is, for each deterministic algorithm, the constructed dual de Bruijn graph contains a cycle with a cost-ratio of at least 3.

▶ Observation 7.6.
There is no algorithm with ratio < . for T = 5 and α = 1 . . ▶ Observation 7.7.
There is no algorithm with ratio < . for T = 5 and α = 1 . .

Table 2: The best competitive ratios for some values of α and T; see also Figure 2.

α     T=1 det.   T=2 det.   T=2 rand.   T=3 det.   T=3 rand.   T=4 det.
0.1   11         11         11          11         11          11
0.2   6          6          6           6          6           6
0.3   4.333      4.333      4.333       4.333      4.333       4.333
0.4   3.5        3.5        3.5         3.5        3.5         3.5
0.5   3          3          3           3          3           3
0.6   3.2        3.2        3.006       3.2        2.934       3.2
0.7   3.4        3.4        3.055       3.4        2.864       3.4
0.8   3.6        3.6        3.2         3.6        2.797       —
0.9   3.8        3.8        3.35        3.8        2.734       3.222
1.0   4          4          3.5         4          2.672       3
1.1   4.2        4.2        3.65        4.2        2.772       3.1
1.2   4.4        4.4        3.8         4.4        2.872       —
1.3   4.6        4.6        3.95        4.6        2.986       3.3
1.4   4.8        4.8        4.1         4.8        3.088       —
1.5   5          5          4.25        5          3.188       3.5
1.6   5.2        5.2        4.4         5.2        3.288       —

Table 3: A randomized algorithm for T = 3 and α = 1 with expected competitive ratio ≈ 2.67.

Last T inputs   Probability to output 1
000             0
001             0.3309
010             0.2711
011             1
100             0
101             0.7289
110             0.6691
111             1

Table 4: Three 3-competitive algorithms for T = 4, α = 1.

Last T inputs   A_1   A_2   A_3
...0000         0     0     0
...0001         0     0     0
...0010         0     0     0
...0011         1     1     1
...0100         0     0     0
...0101         0     0     1
...0110         1     1     1
...0111         1     1     1
...1000         0     0     0
...1001         0     0     0
...1010         1     0     1
...1011         1     1     1
...1100         0     0     0
...1101         1     1     1
...1110         1     1     1
...1111         1     1     1

We study a variant of online file migration (defined in Section 1.2) in the time-local setting. The case study serves three purposes:
- to show an example of a problem that admits competitive algorithms in the time-local setting,
- to highlight the challenges of algorithm design and analysis present in the time-local setting and propose techniques to deal with them, and
- to study how limiting the size of the visible horizon degrades the competitive ratio.
We assume the following encoding of the problem: on each request, a time-local algorithm takes the current visible horizon as input and outputs the next location of the file. For a discussion of alternative encodings of this problem, see Section 8.4. Unless stated explicitly, we assume that the migration cost α is at least 1.

We start by providing insights into techniques that are useful for studying online problems in the time-local setting. Next, we present a lower bound showing that a degradation of the competitive ratio is inevitable as T decreases, even with access to a global clock and using randomization. Then, we discuss an adaptation of a well-known randomized algorithm to the time-local setting, and design a competitive deterministic algorithm for 2-node networks.

Techniques for designing time-local algorithms.
The challenges in designing time-local algorithms come from two sources: (1) the algorithm can make decisions based only on the most recent input history, and (2) the algorithm is unaware of its current configuration. Note that the latter challenge is not present in memoryless online algorithms [19]. To tackle these challenges, we highlight a useful technique: tracking distinguished subsequences of the input as they recede in the visible horizon. Implementing consistent tracking is simpler in the clocked setting; we study an example where we track requests issued at certain points in time, initially chosen uniformly at random. Tracking is significantly more challenging to implement in the non-clocked setting: without knowing the temporal positions of requests, requests from the same node are indistinguishable. We overcome this limitation by tracking distinguishable subsequences of requests instead of single requests, as detailed in Section 8.3.
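As a toy illustration of the tracking idea (our own sketch; the stream, pattern, and function names are ours, not the paper's construction), a function of the visible horizon alone can "remember" a distinguished pattern only while the pattern is still contained in the horizon, which gives a time-local algorithm a short-lived memory of T − len(pattern) extra steps:

```python
T = 6  # length of the visible horizon

def latest_window_start(horizon, pattern):
    """Return the most recent start index of `pattern` inside the visible
    horizon, or None; this is the only 'memory' available to an unclocked
    time-local algorithm."""
    k = len(pattern)
    for s in range(len(horizon) - k, -1, -1):
        if horizon[s : s + k] == pattern:
            return s
    return None

# Once the pattern has fully entered the horizon, it keeps being found
# (sliding further into the past) for T - len(pattern) more steps,
# after which it expires and the tracked information is lost.
stream = [0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0]
pattern = [1, 1, 1]
seen = []
for t in range(T, len(stream) + 1):
    horizon = stream[t - T : t]  # the last T requests at time t
    seen.append(latest_window_start(horizon, pattern) is not None)
# seen -> [True, True, True, False, False, False]
```

This mirrors the role of the "relevant windows" used by the deterministic algorithm of Section 8.3: the output can stay fixed exactly as long as the distinguished subsequence remains visible.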
Lower bounds in the time-local setting.
We can reason about a time-local algorithm's performance on different sequences that share the same subsequences of T requests: the algorithm is a function of the last T requests, hence its output on one such input is identical to its output on the other. A common lower bound technique for classic online algorithms involves reasoning about distributions over inputs via Yao's principle. In certain situations, a simpler argument about a time-local algorithm suffices: we may argue about the performance of the algorithm on various relevant inputs independently, and draw conclusions about its performance on every input.

In this section, we present a lower bound for time-local algorithms for online file migration that shows an inevitable degradation of the competitive ratio when the visible horizon is limited. The following lower bound assumes that the length of the visible horizon is given. Hardness for simpler settings is implied, in particular for non-clocked time-local algorithms. In the subsequent Sections 8.2 and 8.3, we present randomized and deterministic algorithms that asymptotically match this lower bound.

▶ Theorem 8.1.
Fix any randomized T-time-local clocked algorithm A for online file migration on networks with at least 2 nodes. Assume that the file size is α, and α ≥ T. If A is c-competitive against an oblivious offline adversary, then c ≥ 2α/T.

Proof.
To state A's properties, we consider two infinite sequences of requests, 0^∗ and 1^∗; later, we reason about A's performance on finite sequences. We say that a deterministic algorithm is resisting if for each t there exists a time t′ > t such that the algorithm either outputs 1 at t′ when faced with 0^∗, or outputs 0 at t′ when faced with 1^∗. A time-local algorithm may be resisting if it has access to a global clock.

Recall that A is a distribution over deterministic clocked algorithms. First, assume that A has a resisting strategy in its support. Consider two families of input sequences, I_0 := {0^L : L ∈ N} and I_1 := {1^L : L ∈ N}. Note that A may incur an arbitrarily large cost on requests from either of these families, say I_0, due to an arbitrary number of 0-requests served in configuration 1. The cost of an optimal offline solution on such sequences is constant: an offline algorithm moves the file at the beginning to the only node that requests the file. We conclude that the competitive ratio of A can be arbitrarily large on inputs from I_0 or I_1.

For the rest of this proof, we assume that A does not have any resisting strategy in its support. A crucial observation is that for a fixed deterministic time-local algorithm, its output on 0^∗ (resp. 1^∗) at any time τ determines its output on every other sequence that contains a run of T requests to 0 (resp. 1) ending at time τ. As A does not have a resisting strategy in its support, there exists a time τ_det such that after τ_det, all strategies in A's support always output b when faced with T consecutive requests to b, for b ∈ {0, 1}.

Consider an input σ = (1^T 0^T)^L for some L to be determined. Let σ′ be the subsequence of σ starting from the first 0-request that comes after τ_det. Fix any optimal offline algorithm opt for σ. For σ \ σ′, we simply claim A(σ \ σ′) ≥ opt(σ \ σ′), where A(·) and opt(·) denote the costs of the algorithm A and of an optimal offline algorithm, respectively.
To analyze σ′, we split it into subsegments (1^T 0^T) that we refer to as phases. As no strategy in A's support is resisting, and the requests of σ′ come after τ_det, A's behavior on σ′ is determined: it must output b when faced with a b-uniform sequence of length T, for b ∈ {0, 1}. Hence, in each phase A incurs cost 2α. On the other hand, in each phase opt incurs cost at least T: recall that T ≤ α, thus either opt migrated during the phase and paid α ≥ T already, or it did not migrate and paid for either the 0-requests or the 1-requests. Summing up the above observations and assuming σ′ has length 2T · L′, we obtain

A(σ)/opt(σ) = (A(σ \ σ′) + A(σ′))/(opt(σ \ σ′) + opt(σ′)) ≥ (opt(σ \ σ′) + 2α · L′)/(opt(σ \ σ′) + T · L′).

By choosing a long enough sequence σ (and consequently a large enough L′), the competitive ratio can be made arbitrarily close to 2α/T. ◀

Note that the result presented in this section implies the lower bound for online file migration on general networks (not necessarily consisting of two vertices).
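The phase accounting in the proof can be checked numerically. The following sketch is our own illustration (all names and the majority-of-window rule are our assumptions, not the paper's construction): a simple non-resisting stand-in that outputs the majority of the last T requests is replayed on the adversarial input (1^T 0^T)^L, and its cost is compared with an offline optimum computed by dynamic programming:

```python
T, ALPHA, L = 4, 10, 50

# Adversarial input from the proof: L phases of T ones followed by T zeros.
sigma = ([1] * T + [0] * T) * L

def majority(window):
    # Ties resolve to node 0; after T consecutive b-requests the output is b,
    # so this rule is not "resisting" in the sense of the proof.
    return 1 if 2 * sum(window) > len(window) else 0

def alg_cost(reqs):
    loc, cost = 0, 0
    for i, r in enumerate(reqs):
        cost += loc != r                      # unit mismatch at the current location
        new = majority(reqs[max(0, i - T + 1) : i + 1])
        cost += ALPHA * (new != loc)          # migration cost alpha on a flip
        loc = new
    return cost

def opt_cost(reqs):
    # Offline optimum via dynamic programming over the two file locations.
    best = [0, ALPHA]                         # the file starts at node 0
    for r in reqs:
        served = [best[s] + (s != r) for s in (0, 1)]
        best = [min(served[s] + ALPHA * (s != t) for s in (0, 1)) for t in (0, 1)]
    return min(best)

# Per phase the rule flips twice (cost 2*ALPHA) while OPT can stay put and
# pay only T mismatches, so the ratio is at least 2*ALPHA/T.
ratio = alg_cost(sigma) / opt_cost(sigma)
assert ratio >= 2 * ALPHA / T
```

With α = 10 and T = 4, the optimum simply keeps the file at node 0 (cost T per phase), while the majority rule pays two migrations plus a few mismatches per phase, so the observed ratio comfortably exceeds 2α/T = 5.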
In this section, we discuss a randomized time-local algorithm for online file migration in general networks. The algorithm is an adaptation of an elegant randomized algorithm by Westbrook [44], expressed as a function of the local time horizon and the global clock. We discuss the competitive ratio tradeoffs related to the limited view of past requests, and we show that these tradeoffs are asymptotically optimal (cf. Section 8.1). We assume that the set of feasible answers of any algorithm coincides with the set of nodes of the network. For a discussion of possible extensions of this setting, see Section 8.4.
Algorithm Mixed Resetting.
Fix any network N, file size α, and size T of the visible horizon. We describe a set of deterministic strategies, each parameterized by an integer k ∈ [1, T]. The Mixed Resetting algorithm chooses one of these strategies uniformly at random at the beginning.

We now describe the deterministic strategy for a fixed k. In the classic setting with stateful algorithms, the strategy may be described as follows. It maintains a counter, initially set to k. Each time it encounters a request, the counter decreases. When the counter drops to 0, the strategy moves the file to the node from which the current request comes, and resets the counter to T. While the counter is positive, the strategy does not change the file placement.

This deterministic strategy can be implemented in the clocked time-local setting with the last T requests visible. Let τ be the current value of the clock, and let p, q be integers such that τ = T · p + q and q < T. The strategy's output at τ is x_{T−q}, the node that requested the file at index T − q in the current visible T-horizon.

▶ Theorem 8.2.
The competitive ratio of Mixed Resetting against the oblivious offlineadversary is max { αT , T +12 α } . ▶ Corollary 8.3.
The competitive ratio of Mixed Resetting against the oblivious offline adversary is ϕ ≈ 1.62 for T = α + √(α² − α + 1) − 1, where ϕ is the Golden Ratio.

An elegant proof of this theorem was given by Westbrook [44], using a potential function argument. This yields the best currently known algorithm in general networks against the oblivious adversary; the result is not tight, and the best known lower bound against the oblivious adversary is 2 + 1/(2α) [20]. For larger T, the competitive ratio would increase; in that case we simply truncate the visible horizon to its optimal length. Note that the ratio of Mixed Resetting is O(α/T), and it asymptotically matches the lower bound of Ω(α/T) from Theorem 8.1. The result in this section is more general than Theorem 8.1: the network may be arbitrary (more than two vertices), and the competitive ratio of the time-local algorithm remains unchanged and matches the lower bound even in arbitrary networks.

In this section, we introduce a constant-competitive time-local algorithm, the
Sliding Window Algorithm (alg for short), for the online file migration problem restricted to two nodes, identified by 0 and 1. The algorithm takes the last T requests as input, and it outputs a value in {0, 1}, the (new) location of the file. For each request 0 or 1 (the node requesting the file), alg pays a unit cost if its last output does not match the request (i.e., if the file is not located at the requesting node); in this case, we say alg incurs a mismatch. After serving the request, alg may choose to migrate the file to the other node (by switching its output) at cost α, and in this case we say alg flips its output.

The algorithm scans the visible horizon looking for a distinguished subsequence of requests, called a relevant window. It decides its output based on the existence of a relevant window, and on an invariant property of the relevant window (if one is detected). After a window enters the visible horizon, alg maintains its latest output as long as (i) the window is contained in the visible horizon (while sliding), and (ii) it is not succeeded by a more recent relevant window. alg may flip its output once either (i) or (ii) no longer holds. Intuitively, a relevant window serves as a short-lived memory, enabling alg to maintain the same output for as long as the window is contained in the visible horizon.

Sliding Window Algorithm.
We define a b-window for b ∈ {0, 1} as a subsequence of requests in which the number of b-requests is at least twice the number of ¬b-requests, where ¬b = 1 − b. alg outputs 1 only if the most recent b-window in the visible horizon is a 1-window. Therefore, it outputs 1 as long as the visible horizon contains a 1-window that is not succeeded by a (more recent) 0-window. The 1-window slides further into the past as new requests arrive, until it is no longer contained in the visible horizon. At this time, alg flips back to 0 (the default output) unless there is a more recent 1-window in the visible horizon.

We describe the sliding window algorithm formally as follows. Given any T ≥ 6, let λ := min{⌈T/6⌉, α} ≥ 1, and denote the b-window of length 3λ that enters the visible horizon at time t by W_t := σ(t − 3λ, t]. At any time t, the algorithm alg takes the visible horizon (the past T requests) as input and outputs either 0 or 1 according to the following rules.

Rule 1. alg outputs b ∈ {0, 1} if the most recent window in the visible horizon is a b-window.
Rule 2. alg outputs 0 if the visible horizon contains no b-window for b ∈ {0, 1}.

Note that whenever alg flips to 0, it is either because the visible horizon contains a 0-window (Rule 1), or because there is no b-window in the visible horizon (Rule 2). Our analysis is based on the observation that alg neither flips too frequently, incurring excessive reconfiguration cost, nor too conservatively, incurring excessive mismatches. To this end, we focus our attention on the individual subsequences between two consecutive flips to 1. We show that each of these subsequences contains sufficiently many 0- and 1-requests, so that an optimal offline algorithm incurs a cost within a factor O(α/T) of alg's cost.

Preliminaries.
We introduce auxiliary definitions and notation that we use in our analysis. alg starts serving requests by outputting 0 and flips for the first time at t_first, which is a flip to 1. Let (l_i, r_i) denote the i-th pair of time indexes after t_first such that alg flips to 0 at l_i and to 1 at r_i, and let m denote the number of these pairs. For the case m = 0, we let r_0 := t_first. After the last pair, alg may perform a last flip to 0 at a time denoted by t_last. We denote the set of times at which alg flips its output by F := {t_first, l_1, r_1, ..., l_m, r_m, t_last}. We partition the i-th phase into two parts as P_i = L_i R_i, where L_i := σ(r_{i−1}, l_i] is the left part and R_i := σ(l_i, r_i] is the right part. A (sub)segment of σ between times i and j > i is a contiguous subsequence of σ specified by σ(i, j]. We denote the concatenation of two consecutive segments S_1 and S_2 by S_1 S_2. We say a segment S is short if |S| < 3λ; otherwise |S| ≥ 3λ and it is long. For b ∈ {0, 1}, we denote the number of b-requests in a segment S by n_b(S). We compare the cost of our algorithm to the cost of an optimal offline algorithm, denoted by opt. The total cost incurred by alg and opt while serving a segment S is denoted by alg(S) and opt(S), respectively. We denote the cost of mismatches incurred by alg in a segment S by mis(S). Therefore, alg(W_t) = mis(W_t) + α ≤ 2λ + α for t ∈ F.

Charging Scheme.
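The offline optimum opt used throughout the analysis can be computed exactly by a small dynamic program over the two possible file locations. This helper is our own illustration (names and the serve-then-migrate convention match the model described above; the file starts at node 0):

```python
def opt_cost(requests, alpha, start=0):
    """Optimal offline cost for two-node file migration via DP.

    An algorithm first serves the request (unit cost on a mismatch) and
    may then migrate the file at cost alpha.
    """
    INF = float("inf")
    cost = {start: 0, 1 - start: INF}   # best cost per current location
    for r in requests:
        nxt = {0: INF, 1: INF}
        for loc, c in cost.items():
            if c == INF:
                continue
            served = c + (loc != r)                           # serve
            nxt[loc] = min(nxt[loc], served)                  # keep file
            nxt[1 - loc] = min(nxt[1 - loc], served + alpha)  # migrate
        cost = nxt
    return min(cost.values())
```

For instance, for the sequence 1, 1, 1 with α = 1, the optimum serves the first request remotely, migrates, and serves the rest for free, for a total cost of 2.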
We partition the input σ into phases, separated by flips to 1. Precisely, σ = P_first P_1 ... P_m P_last, where P_first := σ(0, t_first] is the subsequence until the first flip, P_last := σ(r_m, |σ|] is the subsequence from the last flip to the last request, and each P_i := σ(r_{i−1}, r_i] for 1 ≤ i ≤ m. In a series of lemmas, we analyze the total cost to opt and eventually the competitive ratio for each part separately. Then, we aggregate all individual ratios into one competitive ratio in Theorem 8.12. For any phase P_i, Lemma 8.4 lower-bounds the number of 0-requests in W_{l_i} and shows that the window is contained in P_i, given that the flip to 0 occurs by Rule 2.

▶ Lemma 8.4.
For any phase P_i, if the flip to 0 at l_i occurs by Rule 2, then P_{i−1} ∩ W_{l_i} = ∅ and n_0(L_i) > λ.

Proof.
By definition, alg flips to 1 at r_{i−1} < l_i in the phase P_{i−1}. Since T ≥ 6λ (by definition), the segment W_{r_{i−1}} is contained in the visible horizon at each time r_{i−1}, ..., r_{i−1} + 3λ. Thus, alg outputs 1 at every time step r_{i−1}, ..., r_{i−1} + 3λ. Therefore, the flip to 0 by Rule 2 occurs earliest at r_{i−1} + 3λ + 1, that is, l_i ≥ r_{i−1} + 3λ + 1. Hence, |L_i| = l_i − r_{i−1} ≥ 3λ + 1 = |W_{l_i}| + 1. That is, W_{l_i} is contained in L_i, which implies P_{i−1} ∩ W_{l_i} = ∅. Moreover, we have n_1(W_{l_i}) ≤ 2λ − 1, as otherwise alg would output 1 at l_i, contradicting our assumption. Thus, n_0(L_i) ≥ n_0(W_{l_i}) = |W_{l_i}| − n_1(W_{l_i}) ≥ 3λ − (2λ − 1) = λ + 1 > λ. ◀

Lemma 8.4 implies that if the flip to 0 in P_i occurs by Rule 2, then both windows W_{l_i} and W_{r_i} are contained in P_i, i.e., they do not overlap P_{i−1}.

▶ Corollary 8.5.
For any phase P_i, if the flip to 0 at l_i occurs by Rule 2, then both windows W_{l_i} and W_{r_i} are contained in P_i.

The next lemma bounds the number of 0-requests in L_i and the number of 1-requests in R_i, given that the flip at l_i occurs by Rule 1.

▶ Lemma 8.6.
For any phase P_i, if the flip at l_i occurs by Rule 1, then n_0(L_i) ≥ λ and n_1(R_i) ≥ λ.

Proof.
A flip to 1 always occurs by Rule 1, and therefore W_{r_i} contains exactly 2λ many 1-requests. Assume for contradiction that x := n_1(W_{r_i} \ W_{l_i}) < λ. Then the remaining 2λ − x many 1-requests must be in W_{l_i}, which implies n_1(W_{l_i}) ≥ 2λ − x > λ. Since the flip to 0 is assumed to occur by Rule 1, W_{l_i} must contain exactly 2λ many 0-requests. However, the number of 0-requests in W_{l_i} is at most 3λ − n_1(W_{l_i}) ≤ 3λ − (2λ − x) < 2λ, which implies that either there is no flip to 0 at l_i, or the flip to 0 occurs by Rule 2, contradicting our assumption. Therefore, n_1(R_i) ≥ n_1(W_{r_i} \ W_{l_i}) ≥ λ. The flip to 0 at l_i follows the flip to 1 in the previous phase P_{i−1}, which occurs by Rule 1. Then, following a similar argument as used for the flip at r_i (involving W_{r_{i−1}}), we conclude n_0(L_i) ≥ n_0(W_{l_i} \ P_{i−1}) ≥ λ. ◀

A segment S is a block segment if |S| = 3λ. The following lemma lower-bounds costs to opt for any block segment in which alg never flips, or flips only at the end of the block, i.e., immediately after serving the last request in the block.

▶ Lemma 8.7.
For any block segment S in which alg pays the mismatch cost mis(S), the following holds: if alg does not flip in S, then opt(S) ≥ min{n_0(S), n_1(S), α} ≥ mis(S)/2; if alg flips only at the end of S, then opt(S) ≥ λ.

Proof.
Assume alg outputs b ∈ {0, 1} in S. Then it must hold that mis(S) = n_{¬b}(S) ≤ 2λ − 1, and hence n_b(S) ≥ λ + 1 ≥ n_{¬b}(S)/2 = mis(S)/2. opt pays mismatches to b-requests or to ¬b-requests, or it performs a flip and possibly serves some of them for free. Thus, regardless of opt's choices in S, opt(S) ≥ min{n_b(S), n_{¬b}(S), α} ≥ min{mis(S)/2, mis(S), α}, and the claim follows since mis(S) ≤ 2λ − 1 and mis(S)/2 < λ ≤ α.

Assume that alg flips to 0 at the end of S. Then S contains at most 2λ many 0-requests, as otherwise the flip to 0 would occur earlier in S. Moreover, S contains at most 2λ − 1 many 1-requests, as otherwise alg would output 1 at the end of S (by Rule 1). Therefore, S contains at least λ many of each of 0- and 1-requests, that is, n_b(S), n_{¬b}(S) ≥ λ. Regardless of opt's actions, we have opt(S) ≥ min{n_b(S), n_{¬b}(S), α} ≥ min{λ, α} = λ. The case where alg flips to 1 at the end of the segment is analogous (up to swapping 0's and 1's). ◀

Consider any segment U that can be partitioned into a set of blocks B. If alg does not flip in U, then applying Lemma 8.7 to each block separately yields opt(U) = Σ_{B∈B} opt(B) ≥ Σ_{B∈B} mis(B)/2 = mis(U)/2.

▶ Corollary 8.8.
For any segment U such that |U| is a multiple of 3λ, if alg does not flip in U, then opt(U) ≥ mis(U)/2.

The following lemma lower-bounds costs to opt for a segment in which opt does not flip, and alg flips immediately after serving the segment, i.e., at the end of the segment.

▶ Lemma 8.9.
Consider any phase P_i and a segment S ∈ {L_i, R_i} s.t. |S| ≥ 3λ, where alg outputs b ∈ {0, 1} for the entire S and flips to ¬b at t ∈ {l_i, r_i}. Consider the partitioning S = U V W_t, where |U| is a multiple of 3λ and |V| < 3λ. If opt does not flip in V W_t, then one of the two cases holds: (i) opt serves V W_t in state b and opt(V W_t) = mis(V W_t); (ii) opt serves V W_t in state ¬b and opt(V W_t) ≥ λ.

Proof.
We show the claim for the case b = 1. The argument for b = 0 is analogous, subject to swapping 0's and 1's. If opt serves the entire S in state b = 1, then both opt and alg pay mismatches to ¬b's in the entire S, and opt(S) = mis(S), which concludes Lemma 8.9(i). Otherwise, opt serves the entire S in state ¬b = 0 and possibly pays no cost for V. Therefore, opt(S) ≥ opt(W_t) ≥ λ, where the last inequality follows from Lemma 8.7, concluding Lemma 8.9(ii). ◀

The following lemma lower-bounds opt's cost for a segment in which alg does not flip until the end of the segment.

▶ Lemma 8.10.
Consider any phase P_i and a segment S ∈ {L_i, R_i} s.t. |S| ≥ 3λ, where alg flips to b ∈ {0, 1} at t ∈ {l_i, r_i} after it outputs ¬b for the entire S. Consider the partitioning S = U V W_t, where |U| is a multiple of 3λ and |V| < 3λ. If opt flips in V W_t, then one of the two cases applies: (i) opt flips to b in V W_t and opt(V W_t) ≥ min{mis(V), λ} + α; (ii) opt flips to ¬b in V W_t and opt(V W_t) ≥ max{mis(V W_t)/2 − λ, 0} + α.

Proof.
We analyze the case where alg flips to b = 0 at t. The case b = 1 follows in a similar way, subject to swapping 0's and 1's. If opt flips more than once in V W_t, then opt(V W_t) ≥ 2α. Since min{mis(V), λ} ≤ λ ≤ α, Lemma 8.10(i) follows from opt(V W_t) ≥ 2α ≥ min{mis(V), λ} + α. Since mis(V W_t) < 4λ, we have mis(V W_t)/2 − λ < λ ≤ α. Thus, if opt flips more than once in V W_t, then opt(V W_t) ≥ 2α ≥ max{mis(V W_t)/2 − λ, 0} + α, and Lemma 8.10(ii) holds. Hence, in the remainder, we assume opt flips only once in V W_t. If opt flips in V, then it does not flip in W_t and, by Lemma 8.7, opt(W_t) ≥ λ. Therefore, opt(V W_t) = opt(V) + opt(W_t) ≥ α + λ. Next, we provide lower bounds for the cost of opt in V W_t by distinguishing the two cases of opt's flip, to 0 and to 1.

OPT flips to b = 0 in V W_t. If opt flips in V, then opt(V W_t) ≥ λ + α ≥ min{mis(V), λ} + α. Otherwise, opt serves V in state 1, opt(V) = n_0(V) = mis(V), and it flips to 0 in W_t. Therefore, opt(V W_t) = opt(V) + opt(W_t) ≥ mis(V) + α ≥ min{mis(V), λ} + α, which concludes Lemma 8.10(i).

OPT flips to ¬b = 1 in V W_t. If opt flips in V, then opt(V W_t) ≥ λ + α ≥ max{mis(V W_t)/2 − λ, 0} + α. Otherwise, opt serves the entire V in state 0. We lower-bound the cost of mismatches in W_t after opt flips to 1, using the following observation: there are at least λ + 1 many 1-requests between the last 0-request in V and alg's flip to 0 at t. Assume this is not the case and n_1(W_t) ≤ λ. Then n_0(W_t) = 3λ − n_1(W_t) ≥ 2λ and n_0(V) + n_0(W_t) ≥ 2λ + 1, that is, alg flips to 0 earlier than t in W_t, contradicting our assumption. Therefore, at least λ + 1 many 1-requests in V W_t occur after the last 0-request in V.
Let σ_p = 1 be the (λ + 1)-th such 1-request in V W_t. Next, we lower-bound the number of mismatches incurred by opt after it flips to 1. Either opt flips to 1 after σ_p, that is, after paying at least λ + 1 mismatches to 1-requests in V W_t, or it flips to 1 before serving σ_p. In the latter case, opt pays mismatches to the remaining 0-requests (occurring after σ_p) in W_t. Let x be the number of 0-requests in V W_t after σ_p. Then the number of 0-requests in V W_t before σ_p is n_0(V W_t) − x < 2λ, as otherwise the flip to 0 would occur earlier than p < t, contradicting our assumption. Summing up all the considered cases, opt pays at least x > n_0(V W_t) − 2λ = mis(V W_t) − 2λ mismatches to 0-requests after it flips to 1, and opt(V W_t) ≥ max{mis(V W_t) − 2λ, 0} + α ≥ max{mis(V W_t)/2 − λ, 0} + α, which concludes Lemma 8.10(ii). ◀

Figure 4 alg flips to 0 at the end of the segment L and flips to 1 at the end of the segment R. In 4a, both segments L and R are long. As a result, both windows W_l and W_r are contained in the phase. In 4b and 4c, one part is short and the other is long. The window W_l overlaps the phase P_{i−1} in 4b, and it overlaps W_r (in thick blue) in 4c.

▶ Lemma 8.11.
For any phase P_i, we have alg(P_i)/opt(P_i) ≤ 4 + 2α/λ.

Proof.
Recall that in the phase P_i, alg flips to 0 at l := l_i and later to 1 at r := r_i. We bound the costs to opt and alg in L := L_i and R := R_i separately, which is followed by the competitive ratio for P_i = LR. Thus, alg(P_i) = alg(L) + alg(R) and opt(P_i) = opt(L) + opt(R). If L is long, that is |L| ≥ 3λ, we consider the partitioning L = U_l V_l W_l, where |U_l| is a multiple of 3λ and |V_l| < 3λ. Similarly, if R is long, R = U_r V_r W_r, where |U_r| is a multiple of 3λ and |V_r| < 3λ. To obtain our upper bounds, we use the fact that Σ_j a_j / Σ_j b_j ≤ max_j a_j/b_j for a_j, b_j > 0. We distinguish four cases depending on the lengths |L| and |R|.

Both parts are short.
Since |L| < 3λ = |W_l|, L is a subsegment of W_l, which implies P_{i−1} ∩ W_l ≠ ∅. The latter, together with Lemma 8.4, implies that the flip to 0 at l occurs by Rule 1. Then, Lemma 8.6 guarantees that L contains at least λ many 0-requests and R contains at least λ many 1-requests. Therefore, regardless of whether opt performs a flip in P_i or not, it pays opt(P_i) ≥ min{λ, α} = λ. Using mis(W_l), mis(W_r) ≤ 2λ, we obtain

alg(P_i)/opt(P_i) = (alg(L) + alg(R))/opt(P_i) ≤ (mis(W_l) + mis(W_r) + 2α)/opt(P_i) ≤ (4λ + 2α)/λ = 4 + 2α/λ. (1)

Both parts are long.
See Figure 4a for an illustration. Regardless of opt's actions in L, we have opt(U_l) ≥ mis(U_l)/2 from Corollary 8.8 and opt(V_l W_l) ≥ opt(W_l) ≥ λ from Lemma 8.7. Therefore, opt(L) = opt(U_l) + opt(V_l W_l) ≥ mis(U_l)/2 + λ. Similarly for R, we have opt(R) ≥ mis(U_r)/2 + λ. Thus, opt(P_i) = opt(L) + opt(R) ≥ mis(U_l)/2 + mis(U_r)/2 + 2λ. For alg's mismatch cost, we have mis(V_l W_l) = mis(V_l) + mis(W_l) ≤ (2λ − 1) + 2λ < 4λ, and similarly mis(V_r W_r) < 4λ. Then, alg(P_i) = alg(L) + alg(R) = mis(U_l) + mis(V_l W_l) + α + mis(U_r) + mis(V_r W_r) + α ≤ mis(U_l) + mis(U_r) + 8λ + 2α, and

alg(P_i)/opt(P_i) ≤ (mis(U_l) + mis(U_r) + 8λ + 2α)/(mis(U_l)/2 + mis(U_r)/2 + 2λ) ≤ (8λ + 2α)/(2λ) = 4 + α/λ. (2)

Only the right part is long.
Then, L is a subsegment of W_l (see Figure 4b) and therefore alg(L) = mis(W_l) + α ≤ 2λ + α. For R, we distinguish several cases.

Case 1.1. opt enters V_r W_r in state 0 and serves it in state 0. In this case, Lemma 8.9(i) applies, which together with Corollary 8.8 yields opt(R) = opt(U_r) + mis(V_r) + mis(W_r) ≥ mis(U_r)/2 + mis(V_r) + λ. Thus, we have alg(P_i) = alg(L) + alg(R) = (2λ + α) + mis(U_r) + mis(V_r) + mis(W_r) + α ≤ mis(U_r) + mis(V_r) + 4λ + 2α, and thereby

alg(P_i)/opt(P_i) ≤ (mis(U_r) + mis(V_r) + 4λ + 2α)/(mis(U_r)/2 + mis(V_r) + λ) ≤ (4λ + 2α)/λ = 4 + 2α/λ. (3)

Case 1.2. opt enters V_r W_r in state 0 and flips to 1 in V_r W_r. In this case, Lemma 8.10(i) applies, which together with Corollary 8.8 yields opt(R) = opt(U_r) + opt(V_r W_r) ≥ mis(U_r)/2 + min{mis(V_r), λ} + α. By distinguishing the two cases mis(V_r) < λ and mis(V_r) ≥ λ, and using alg(P_i) = alg(L) + alg(R) ≤ (2λ + α) + mis(U_r) + mis(V_r) + 2λ + α, we obtain

alg(P_i)/opt(P_i) ≤ (mis(U_r) + mis(V_r) + 4λ + 2α)/(mis(U_r)/2 + min{mis(V_r), λ} + α) ≤ 4 + 2α/λ. (4)

Case 1.3. opt enters V_r W_r in state 1. If opt serves it in state 1, then from Lemma 8.7 we have opt(V_r W_r) ≥ opt(W_r) ≥ λ. Else, it flips to 0 and opt(V_r W_r) ≥ α ≥ λ. Therefore, in either case, opt(V_r W_r) ≥ λ. Since P_{i−1} ∩ W_l ≠ ∅, Lemma 8.4 implies that the flip to 0 at l occurs by Rule 1. Then, from Lemma 8.6, we have n_0(L) = n_0(W_l \ P_{i−1}) ≥ λ. Next, we lower-bound opt's cost in P_i before entering V_r W_r (i.e., in L U_r). Either opt serves the entire L in state 1 and opt(L) ≥ n_0(L) ≥ λ, or it flips in L at cost α ≥ λ. From Corollary 8.8, we have opt(U_r) ≥ mis(U_r)/
2, which yields opt(P_i) ≥ opt(L) + opt(U_r) + opt(V_r W_r) ≥ 2λ + mis(U_r)/2. Using alg(P_i) = alg(L) + alg(R) ≤ (2λ + α) + mis(U_r) + mis(V_r W_r) + α ≤ mis(U_r) + 6λ + 2α, we obtain

alg(P_i)/opt(P_i) ≤ (mis(U_r) + 6λ + 2α)/(mis(U_r)/2 + 2λ) ≤ (6λ + 2α)/(2λ) = 3 + α/λ. (5)

Only the left part is long.
See Figure 4c for an illustration. We distinguish cases of opt's state when it enters V_l W_l. Note that alg's flip to 0 at l possibly occurs by Rule 2.

Case 2.1. opt enters V_l W_l in state 1 and serves it in state 1. This case is symmetric to Case 1.1, and the upper bound (3) holds analogously, after swapping the usage of R and L, as well as of 0's and 1's.

Case 2.2. opt enters V_l W_l in state 1 and flips to 0 later in this segment. This case is symmetric to Case 1.2, and the upper bound (4) holds analogously, after swapping the usage of R and L, as well as of 0's and 1's.

Case 2.3. opt enters V_l W_l in state 0. Recall that if the flip to 0 is by Rule 2, then Lemma 8.6 does not apply and possibly n_1(R) = n_1(W_r \ W_l) < λ. However, since the flip to 1 at r is by Rule 1, we have n_1(W_r) = 2λ, and since W_r is a subsegment of W_l R, we have n_1(W_l R) ≥ n_1(W_r) ≥ 2λ. Since mis(V_l W_l) < 4λ, we have mis(V_l W_l)/2 − λ < λ and hence

max{mis(V_l W_l)/2 − λ, 0} + λ < 2λ ≤ n_1(W_l R).

Either opt serves the entire segment V_l W_l R in state 0 and opt(V_l W_l R) ≥ n_1(W_l R) ≥ 2λ ≥ max{mis(V_l W_l)/2 − λ, 0} + λ, or it flips to 1 in this segment. If opt flips to 1 in R, then by Lemma 8.7, opt(V_l W_l R) ≥ opt(W_l) + opt(R) ≥ λ + α ≥ 2λ ≥ max{mis(V_l W_l)/2 − λ, 0} + λ. Else, it flips in V_l W_l; by Lemma 8.10(ii), opt(V_l W_l) ≥ max{mis(V_l W_l)/2 − λ, 0} + λ. Thus, in any case where opt enters V_l W_l in state 0, we have opt(V_l W_l R) ≥ max{mis(V_l W_l)/2 − λ, 0} + λ. By applying Corollary 8.8 to U_l, we obtain

opt(P_i) ≥ opt(U_l) + opt(V_l W_l R) ≥ mis(U_l)/2 + max{mis(V_l W_l)/2 − λ, 0} + λ. (6)

If mis(V_l W_l) < 2λ, then (6) reduces to opt(P_i) ≥ mis(U_l)/2 + λ. Using alg(P_i) ≤ mis(U_l) + mis(V_l W_l) + α + (2λ + α) ≤ mis(U_l) + 4λ + 2α, we obtain

alg(P_i)/opt(P_i) ≤ (mis(U_l) + 4λ + 2α)/(mis(U_l)/2 + λ) ≤ (4λ + 2α)/λ = 4 + 2α/λ.
(7)

Else, mis(V_l W_l) ≥ 2λ holds and (6) reduces to opt(V_l W_l) ≥ (mis(V_l W_l)/2 − λ) + λ. Let z := mis(V_l W_l)/2 − λ. Then mis(V_l W_l) = 2z + 2λ, alg(P_i) ≤ mis(U_l) + mis(V_l W_l) + 2λ + 2α = mis(U_l) + (2z + 2λ) + 2λ + 2α, and

alg(P_i)/opt(P_i) ≤ (mis(U_l) + 2z + 4λ + 2α)/(mis(U_l)/2 + z + λ) ≤ (4λ + 2α)/λ = 4 + 2α/λ. (8)

From all upper bounds (1)–(8), we conclude alg(P_i)/opt(P_i) ≤ 4 + 2α/λ. ◀

▶ Theorem 8.12.
For any input sequence σ, any T ≥ 6 and 1 ≤ λ ≤ α, we have alg(σ) ≤ (4 + 2α/λ) opt(σ) + 6α.

Proof.
Assume alg performs at least one flip in σ. Recall that in this case the input sequence σ is partitioned as σ = P_first P_1 ... P_m P_last, where P_first is the subsequence until the first flip to 1, P_last is the subsequence between the last flip to 1 and the end of the sequence, and each P_i is the subsequence between two consecutive flips to 1. From Lemma 8.11, we have alg(P_i) ≤ (4 + 2α/λ) opt(P_i). In the remainder, we upper bound the ratio separately for P_first and P_last, as well as for the case where alg never flips.

We begin with the first part of the input, P_first = σ(0, t_first]. Recall that alg starts serving σ by outputting 0 until it flips to 1 at t_first for the first time. We distinguish two cases for P_first.

P_first is short. In this case, alg(P_first) ≤ 2λ + α. opt begins in state 0 and either pays mis(W_{t_first}) = 2λ mismatches to 1-requests in W_{t_first} or it performs a flip to 1. In any case of opt's actions in P_first, by distinguishing the two cases α < λ and α ≥ λ, we obtain

alg(P_first)/opt(P_first) ≤ (mis(W_{t_first}) + α)/min{λ, α} ≤ (2λ + α)/min{λ, α} ≤ max{3, 2 + α/λ}. (9)

P_first is long. Consider the partitioning P_first = U V W, where |U| is a multiple of 3λ, |V| < 3λ and W = W_{t_first}. If opt serves the entire V W in one state (either 0 or 1), then opt(V W) ≥ opt(W) ≥ λ (from Lemma 8.7). Otherwise, opt flips in V W and opt(V W) ≥ α ≥ λ. After applying Corollary 8.8 to U and using alg(P_first) = mis(U) + mis(V W) + α ≤ mis(U) + 4λ + α, we obtain

alg(P_first)/opt(P_first) ≤ (mis(U) + 4λ + α)/(mis(U)/2 + λ) ≤ (4λ + α)/λ = 4 + α/λ. (10)

Lastly, we bound costs for P_last = σ(r_m, |σ|] as follows. If |P_last| < 3λ, then alg pays up to 3λ mismatches and possibly performs a last flip (to 0) at t_last ≤ |σ|. Using λ ≤ α, we obtain alg(P_last) ≤ |P_last| + α ≤ 3λ + α ≤ 4α. Else, |P_last| ≥ 3λ. Consider the partitioning P_last = U′ V′, where |U′| is a multiple of 3λ and |V′| < 3λ.
alg possibly flips to 0 one last time at t_last, in a segment S that is either the segment V′ or a block of U′. In either case, mis(S) ≤ |S| ≤ 3λ and alg(S) = mis(S) + α ≤ 3λ + α ≤ 4α. Using mis(V′) < 2λ ≤ 2α and Corollary 8.8, we obtain

alg(P_last) = alg(U′ \ S) + alg(S) + mis(V′) ≤ alg(U′ \ S) + 6α. (11)

By an argument similar to that of P_last, the bound (11) holds also for the case where alg never flips in σ, as alg does not incur any flipping cost.

Combining our bounds.
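As a sanity check of the guarantee alg(σ) ≤ (4 + 2α/λ) opt(σ) + 6α, one can verify it exhaustively for all small inputs. The sketch below restates our reconstructions of alg and opt so that it is self-contained; all helper names and the choice λ = min{⌈T/6⌉, α} are our assumptions:

```python
from itertools import product
from math import ceil

def alg_cost(requests, T, alpha):
    """Cost of the sliding window algorithm (our reconstruction)."""
    lam = min(ceil(T / 6), alpha)
    w = 3 * lam

    def output(horizon):
        # Rule 1: most recent b-window decides; Rule 2: default to 0.
        for end in range(len(horizon), w - 1, -1):
            ones = sum(horizon[end - w:end])
            if ones >= 2 * lam:
                return 1
            if w - ones >= 2 * lam:
                return 0
        return 0

    out, cost, horizon = 0, 0, []
    for r in requests:
        cost += (out != r)              # mismatch cost
        horizon = (horizon + [r])[-T:]
        new_out = output(horizon)
        if new_out != out:
            cost += alpha               # flip cost
            out = new_out
    return cost

def opt_cost(requests, alpha):
    """Optimal offline cost by dynamic programming (file starts at 0)."""
    INF = float("inf")
    cost = [0, INF]
    for r in requests:
        nxt = [INF, INF]
        for loc in (0, 1):
            if cost[loc] == INF:
                continue
            served = cost[loc] + (loc != r)
            nxt[loc] = min(nxt[loc], served)               # keep the file
            nxt[1 - loc] = min(nxt[1 - loc], served + alpha)  # migrate
        cost = nxt
    return min(cost)

# Exhaustively check alg(sigma) <= (4 + 2*alpha/lam) * opt(sigma) + 6*alpha.
T, alpha = 6, 1                     # lam = 1, so the bound is 6*opt + 6
for n in range(1, 11):
    for sigma in product((0, 1), repeat=n):
        assert alg_cost(sigma, T, alpha) <= 6 * opt_cost(sigma, alpha) + 6
```

The check passes over all binary sequences of length up to 10; it is of course no substitute for the proof, but it guards the reconstructed constants against off-by-one errors.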
From the upper bounds (9), (10), (11), and by applying Lemma 8.11 to each phase P_i, we conclude alg(σ) ≤ (4 + 2α/λ) opt(σ) + 6α, where the additive term 6α is a consequence of (11). ◀

Since λ = min{⌈T/6⌉, α}, we obtain the following ratios for small and large values of T separately.

▶ Corollary 8.13.
The time-local algorithm alg is c-competitive, where c = 6 for T ≥ 6α, c = 4 + 12α/T for 6 ≤ T ≤ 6α, and c = 4 + 2α for 1 ≤ T ≤ 6.

As an alternative to request-answer games, some online problems are more convenient to formulate as task systems [14]. A task system consists of a set of configurations, transition costs between configurations, and the processing cost of each type of request in every possible configuration. In this case, the output is a sequence of configurations chosen by the algorithm, and the cost of a solution is the sum of all transition and processing costs incurred. An online problem given by a task system does not in general have a unique encoding as a request-answer game, and vice versa. In this work we operate in the request-answer game framework, and unless otherwise specified, we typically assume that algorithms output the current configuration, that is, the set Y of output values coincides with (some encoding of) the configurations. The lower bound discussed in Section 8.1 assumes a particular encoding of online file migration as a request-answer game. Namely, the answer set coincides with the set of nodes in the network, and an answer is equivalent to the algorithm's configuration (the placement of the file). Although this encoding is natural, it causes certain problems: when faced with a visible horizon in which requests are scattered over various nodes, without a clear majority, the algorithm must still uniquely determine the file location. Dealing with such situations involves, e.g., distinguishing a default configuration in case there is no clear majority in the visible horizon, to avoid excessive file migration. If we consider a different set of answers, the competitive ratio of the problem may improve. This is in contrast to the classic setting, where the competitive ratio is indifferent to the problem encoding. Consider an additional answer "do not move the file", denoted SKIP, that instructs the file to stay in its current location.
Note that this answer is configuration-dependent, and we may need to track an arbitrarily long sequence of answers to determine the actual file location. For this reason, it is impossible to encode the online problem as a local problem (cf. Section 3.1). With this answer set, the lower bound from Section 8.1 no longer holds. In the remainder of this section, we adapt a classic randomized algorithm by Westbrook [44] to the time-local setting, and we show that it is 3-competitive against the adaptive online adversary even with T = 1, i.e., access to the last request is sufficient. Let the Behavioral Coin Flip algorithm be defined as follows. Upon receiving a request from any node, we move the file to this node with probability 1/(2α), and with probability 1 − 1/(2α) we keep the file in its previous location.

▶ Theorem 8.14.
The Behavioral Coin Flip algorithm with T = 1 is 3-competitive against the adaptive online adversary for online file migration encoded with the answer set including SKIP. Furthermore, no algorithm can obtain a competitive ratio below 3 against the adaptive online adversary.

An elegant proof of the first part of the theorem was given by Westbrook [44]. The second part follows by adapting the lower bound of 3 for deterministic algorithms [6] to the adaptive online setting, and uses an important technique of averaging the cost of 3 offline algorithms.
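A minimal sketch of the Behavioral Coin Flip algorithm (the helper interface is ours; the migration probability 1/(2α) follows the definition above):

```python
import random

def behavioral_coin_flip(requests, alpha, rng=random.random, start=0):
    """Behavioral Coin Flip with T = 1: on a request from a remote node,
    migrate the file there with probability 1/(2*alpha); otherwise the
    answer is SKIP and the file stays where it is."""
    loc, cost = start, 0.0
    for r in requests:
        if r != loc:
            cost += 1.0                     # serve the request remotely
            if rng() < 1.0 / (2 * alpha):
                loc = r                     # migrate
                cost += alpha
            # otherwise: SKIP, keep the current location
    return cost, loc
```

Note that for α = 1/2 the migration probability is 1, so the algorithm deterministically follows every remote request; this degenerate case is handy for testing the cost accounting.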
The lower bounds for classic online algorithms imply lower bounds for time-local algorithms. We first review classic results and algorithms for online file migration, and then provide insights into the cases α < 1 and α ≥ 1. We study a variant of online file migration in networks consisting of 2 nodes [9]. For this problem, a 3-competitive deterministic algorithm can be obtained using a work function algorithm for metrical task systems [14]. The result is tight: a lower bound of 3 holds even for 2-node networks [11], and it uses the technique of averaging costs of multiple offline algorithms, introduced in [32]. In the randomized setting, the threshold work function algorithm obtains a competitive ratio that approaches (2e − 1)/(e − 1) ≈ 2.
581 as the length of the input sequence grows [30]. More generally, it is known that online file migration in arbitrary networks admits a 4-competitive deterministic algorithm [10]. The best known lower bound for deterministic algorithms is 3 + Ω(1) and requires 4 nodes [34]. Randomized (1 + ϕ)-competitive and 3-competitive algorithms exist against the oblivious offline adversary and the adaptive online adversary, respectively [44], where ϕ ≈ 1.618 is the golden ratio. To the best of our knowledge, no analysis for the case α < 1 exists. Next, we present a lower bound that holds for the classic variant, and hence for the time-local setting as well.

▶ Theorem 8.15.
Consider any deterministic online algorithm A for online file migration with file size α. If A is c-competitive, then c ≥ 1 + 1/α for α ∈ (0, 1/2], and c ≥ min{2 + 2α, 3/(2α) + 1} for α ∈ (1/2, 1).

Proof.
Consider an input sequence σ_L for any L ∈ N, constructed in the following way. We start by issuing 1-requests until A migrates the file to node 1. Then, we proceed by issuing 0-requests until A migrates the file to node 0. We repeat these steps L times. Note that A must eventually perform each migration, as otherwise it is not competitive (an optimal offline algorithm pays at most 2α · L for σ_L). In the remainder of the proof, we assume that A eventually performs each migration, and consequently σ_L is finite. We partition σ_L into phases P_1, ..., P_L in the following way. The first phase begins with the first request, and each phase ends when A migrates the file to 0. We analyze the ratio of A to opt on each phase separately. For any i ≤ L, consider the i-th phase P := P_i. Let x be the number of 1-requests and y be the number of 0-requests in P. Then x, y ≥ 1 and x + y ≥
2. Recall that A first serves a request and then decides whether to migrate the file or not. Consequently, it incurs cost x + y in each phase for serving requests remotely, performs two migrations, and its total cost is x + y + 2α ≥ 2 + 2α. Let opt be any optimal offline solution. Note that opt never pays more than 2α in any phase: it can always migrate the file to 1 prior to serving all 1-requests for free, and then to 0 prior to serving all 0-requests for free. Thus, for any α >
0, we have alg(σ_L)/opt(σ_L) ≥ min_i alg(P_i)/opt(P_i) ≥ (2 + 2α)/(2α) = 1/α + 1. Next, we provide a stronger bound when 1/2 < α <
1, by distinguishing two cases.
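The adversarial construction of σ_L can be sketched as follows. The callback interface and the "two strikes" test algorithm are hypothetical illustrations (not from the text), used only to exercise the generator:

```python
def adversary_sequence(make_alg, alpha, phases, cap=1000):
    """Build sigma_L against a deterministic online algorithm: issue
    1-requests until it migrates to node 1, then 0-requests until it
    migrates back to node 0; repeat `phases` times.

    `make_alg()` returns a stateful callable mapping a request to the
    file location after serving it. `cap` guards against algorithms
    that never migrate (which cannot be competitive).
    """
    A = make_alg()
    sigma, cost, loc = [], 0.0, 0
    for _ in range(phases):
        for target in (1, 0):
            for _ in range(cap):
                sigma.append(target)
                cost += (loc != target)    # remote access cost
                new_loc = A(target)
                if new_loc != loc:
                    cost += alpha          # migration cost
                    loc = new_loc
                if loc == target:
                    break                  # half-phase complete
    return sigma, cost

def make_two_strikes():
    """Toy deterministic algorithm: migrate after two consecutive
    remote requests (a hypothetical example)."""
    state = {"loc": 0, "streak": 0}
    def step(r):
        if r != state["loc"]:
            state["streak"] += 1
            if state["streak"] == 2:
                state["loc"], state["streak"] = r, 0
        else:
            state["streak"] = 0
        return state["loc"]
    return step
```

Against this toy algorithm with α = 1, each phase costs 2 mismatches plus a migration in each direction, matching the proof's per-phase lower bound of x + y + 2α ≥ 2 + 2α.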
Case 1. opt has the file at node 0 when it enters the phase. If x = 1, then opt does not benefit from migrating the file, as otherwise it would incur cost for the 0-requests in addition to α. Therefore, it pays 1 for serving the (single) 1-request remotely, and the ratio is (2 + 2α)/
1. Else, x ≥ 2 > 2α, and alg pays x + y + 2α ≥ 3 + 2α. Since opt never pays more than 2α in any phase, we have alg(P)/opt(P) ≥ (3 + 2α)/(2α) = 3/(2α) + 1.

Case 2. opt has the file at node 1 when it enters the phase. opt serves all 1-requests in the phase for free. If y = 1, then either opt serves the 0-request remotely without migrating the file, paying 1, or it migrates the file and pays α <
1. In either case, it pays at most 1 and alg ( P ) / opt ( P ) ≥ (2+ 2 α ) /
1. Else, y ≥
2, and opt migrates the file to node 0 before serving the 0-requests, as otherwise it would incur cost y ≥ 2 > 2α, more than the cost of migrating the file twice. Since x + y ≥
3, we have alg(P)/opt(P) ≥ (3 + 2α)/α = 3/α + 2. Hence, in all cases for α ∈ (1/2, 1), alg(P)/opt(P) ≥ min{2 + 2α, 3/(2α) + 1}, and consequently for all inputs σ_L for any L ∈ N we have alg(σ_L)/opt(σ_L) ≥ min_i alg(P_i)/opt(P_i) ≥ min{2 + 2α, 3/(2α) + 1}. By combining the results for α ∈ (0, 1/
2] and α ∈ (1/2, 1), the theorem follows. ◀

A lower bound of 3 is presented in [11] for α ≥
1, and we note that it holds also for α < 1.

▶ Corollary 8.16.
No deterministic classic online algorithm for online file migration can achieve a competitive ratio less than max{3, 1 + 1/α} for α > 0, or less than min{2 + 2α, 3/(2α) + 1} for α ∈ (0.5, 1).

In this work, we initiated the study of time-local online algorithms and their connections with distributed graph algorithms. We used a special case of the online file migration problem as a running example. We saw that even this simple problem already exhibits a wide range of behaviors in the time-local setting, and we identified new questions that need further study, also for classic online algorithms; among them is the analysis of the problem for the range 1/2 < α < 1. The LOCAL model and its slightly weaker variant, the numbered
LOCAL model introduced here, also merit further study: the capabilities of these models in distributed optimization are not yet fully understood.
References Dana Angluin. Local and global properties in networks of processors. In
Proc. 12th Annual ACM Symposium on Theory of Computing (STOC 1980), 1980. doi:10.1145/800141.804655. Hagit Attiya, Marc Snir, and Manfred K. Warmuth. Computing on an anonymous ring.
Journal of the ACM , 35(4):845–875, 1988. doi:10.1145/48014.48247 . Yossi Azar, Andrei Z. Broder, and Anna R. Karlin. On-line load balancing.
Theoretical Computer Science, 130(1):73–84, 1994. doi:10.1016/0304-3975(94)90153-8. Alkida Balliu, Sebastian Brandt, Yi-Jun Chang, Dennis Olivetti, Mikaël Rabie, and Jukka Suomela. The distributed complexity of locally checkable problems on paths is decidable. In
Proc. 2019 ACM Symposium on Principles of Distributed Computing (PODC 2019), pages 262–271, 2019. doi:10.1145/3293611.3331606. Alkida Balliu, Sebastian Brandt, Yuval Efron, Juho Hirvonen, Yannic Maus, Dennis Olivetti, and Jukka Suomela. Classification of distributed binary labeling problems. In
Proc. 34th International Symposium on Distributed Computing (DISC 2020), pages 17:1–17:17, 2020. doi:10.4230/LIPIcs.DISC.2020.17. Yair Bartal, Amos Fiat, and Yuval Rabani. Competitive algorithms for distributed data management.
Journal of Computer and System Sciences, 51(3):341–358, 1995. doi:10.1006/jcss.1995.1073. Sanjoy Baruah, Gilad Koren, Decao Mao, Bhubaneswar Mishra, Arvind Raghunathan, Louis Rosier, Dennis Shasha, and Fuxing Wang. On the competitiveness of on-line real-time task scheduling.
Real Time Systems, 4(2):125–144, 1992. doi:10.1007/BF00365406. Shai Ben-David, Allan Borodin, Richard M. Karp, Gábor Tardos, and Avi Wigderson. On the power of randomization in on-line algorithms.
Algorithmica , 11(1):2–14, 1994. doi:10.1007/BF01294260 . Marcin Bienkowski. Migrating and replicating data in networks.
Computer Science - Research and Development, 27(3):169–179, 2012. Marcin Bienkowski, Jaroslaw Byrka, and Marcin Mucha. Dynamic beats fixed: On phase-based algorithms for file migration.
ACM Transactions on Algorithms, 15(4):46:1–46:21, 2019. doi:10.1145/3340296. David L. Black and Daniel D. Sleator. Competitive algorithms for replication and migration problems. Technical Report CMU-CS-89-201, Carnegie Mellon University, 1989. Paolo Boldi and Sebastiano Vigna. An effective characterization of computability in anonymous networks. In
Proc. 15th International Symposium on Distributed Computing (DISC 2001), pages 33–47, 2001. doi:10.1007/3-540-45414-4_3. Allan Borodin and Ran El-Yaniv.
Online Computation and Competitive Analysis. Cambridge University Press, 1998. Allan Borodin, Nathan Linial, and Michael E. Saks. An optimal on-line algorithm for metrical task system.
Journal of the ACM, 39(4):745–763, 1992. doi:10.1145/146585.146588. Sebastian Brandt, Juho Hirvonen, Janne H. Korhonen, Tuomo Lempiäinen, Patric R. J. Östergård, Christopher Purcell, Joel Rybicki, Jukka Suomela, and Przemysław Uznański. LCL problems on grids. In
Proc. 2017 ACM Symposium on Principles of Distributed Computing (PODC2017) , pages 101–110, 2017. doi:10.1145/3087801.3087833 . Yi-Jun Chang and Seth Pettie. A time hierarchy theorem for the local model.
SIAM Journalon Computing , 48(1):33–69, 2019. Yi-Jun Chang, Jan Studen`y, and Jukka Suomela. Distributed graph problems through anautomata-theoretic lens, 2020. To appear in SIROCCO 2021. URL: https://arxiv.org/abs/2002.07659 . Krishnendu Chatterjee, Andreas Pavlogiannis, Alexander Kößler, and Ulrich Schmid. Auto-mated competitive analysis of real-time scheduling with graph games.
Real Time Systems ,54(1):166–207, 2018. doi:10.1007/s11241-017-9293-4 . Marek Chrobak and Lawrence L. Larmore. The server problem and on-line games. In
On-LineAlgorithms, Proceedings of a DIMACS Workshop, New Brunswick, New Jersey, USA, February11-13, 1991 , volume 7 of
DIMACS Series in Discrete Mathematics and Theoretical ComputerScience , pages 11–64. DIMACS/AMS, 1991. doi:10.1090/dimacs/007/02 . Marek Chrobak, Lawrence L. Larmore, Nick Reingold, and Jeffery Westbrook. Page migrationalgorithms using work functions.
Journal of Algorithms , 24(1):124–157, 1997. doi:10.1006/jagm.1996.0853 . Richard Cole and Uzi Vishkin. Deterministic coin tossing with applications to optimal parallellist ranking.
Information and Control , 70(1):32–53, 1986. doi:10.1016/S0019-9958(86)80023-7 . Don Coppersmith, Peter Doyle, Prabhakar Raghavan, and Marc Snir. Random walks onweighted graphs and applications to on-line algorithms.
Journal of the ACM , 40(3):421–453,1993. doi:10.1145/174130.174131 . Andrzej Czygrinow, Michał Hańćkowiak, and Wojciech Wawrzyniak. Fast distributed ap-proximations in planar graphs. In
Proc. 22nd International Symposium on Distributed
Computing (DISC 2008) , pages 78–92, 2008. doi:10.1007/978-3-540-87779-0_6 . doi:10.1007/978-3-540-87779-0_6 . Edsger W. Dijkstra. Self-stabilizing systems in spite of distributed control.
Communicationsof the ACM , 17(11):643–644, 1974. doi:10.1145/361179.361202 . Shlomi Dolev.
Self-Stabilization . MIT Press, 2000. Klaus-Tycho Foerster, Juho Hirvonen, Jukka Suomela, and Stefan Schmid. On the powerof preprocessing in decentralized network optimization. In
Proc. 28th IEEE Conference onComputer Communications (INFOCOM 2019) , 2019. doi:10.1109/INFOCOM.2019.8737382 . Mika Göös, Juho Hirvonen, and Jukka Suomela. Lower bounds for local approximation.
Journal of the ACM , 60(5), 2013. doi:10.1145/2528405 . Juho Hirvonen, Joel Rybicki, Stefan Schmid, and Jukka Suomela. Large cuts with localalgorithms on triangle-free graphs.
The Electronic Journal of Combinatorics , 24(4):P4.21,2017. doi:10.37236/6862 . Takashi Horiyama, Kazuo Iwama, and Jun Kawahara. Finite-state online algorithms and theirautomated competitive analysis. In
Proc. 17th International Conference on Algorithms andComputation (ISAAC 2006) , pages 71—80, 2006. doi:10.1007/11940128_9 . Sandy Irani and Steve Seiden. Randomized algorithms for metrical task systems.
TheoreticalComputer Science , 194(1):163–182, 1998. doi:10.1016/S0304-3975(97)00006-6 . Kazuo Iwama and Shiro Taketomi. Removable online knapsack problems. In
Proc. 29thInternational Colloquium on Automata, Languages, and Programming (ICALP 2002) , pages293–305, 2002. doi:10.1007/3-540-45465-9_26 . Anna R. Karlin, Mark S. Manasse, Larry Rudolph, and Daniel D. Sleator. Competitive snoopycaching.
Algorithmica , 3:77–119, 1988. doi:10.1007/BF01762111 . Nathan Linial. Locality in distributed graph algorithms.
SIAM Journal on Computing ,21(1):193–201, 1992. doi:10.1137/0221015 . Akira Matsubayashi. A 3+Ω(1) lower bound for page migration.
Algorithmica , 82(9):2535–2563,2020. doi:10.1007/s00453-020-00696-5 . Moni Naor. A lower bound on probabilistic algorithms for distributive ring coloring.
SIAMJournal on Discrete Mathematics , 4(3):409–412, 1991. doi:10.1137/0404036 . Moni Naor and Larry Stockmeyer. What can be computed locally?
SIAM Journal onComputing , 24(6):1259–1277, 1995. A. Pavlogiannis, N. Schaumberger, U. Schmid, and K. Chatterjee. Precedence-aware automatedcompetitive analysis of real-time scheduling.
IEEE Transactions on Computer-Aided Design ofIntegrated Circuits and Systems , 39(11):3981–3992, 2020. doi:10.1109/TCAD.2020.3012803 . Marshall C. Pease, Robert E. Shostak, and Leslie Lamport. Reaching agreement in thepresence of faults.
Journal of the ACM , 27(2):228–234, 1980. doi:10.1145/322186.322188 . David Peleg.
Distributed Computing: A Locality-Sensitive Approach . Society for Industrialand Applied Mathematics, 2000. doi:10.1137/1.9780898719772 . Joel Rybicki and Jukka Suomela. Exact bounds for distributed graph colouring. In
Proc.22nd International Colloquium on Structural Information and Communication Complexity(SIROCCO 2015) , pages 46–60, 2015. doi:10.1007/978-3-319-25258-2_4 . Stefan Schmid and Jukka Suomela. Exploiting locality in distributed SDN control. In
Proc.2nd ACM SIGCOMM Workshop on Hot Topics in Software Defined Networking (HotSDN2013) , pages 121–126, 2013. doi:10.1145/2491185.2491198 . Daniel D. Sleator and Robert E. Tarjan. Amortized efficiency of list update and paging rules.
Communications of the ACM , 28(2):202–208, 1985. doi:10.1145/2786.2793 . Jukka Suomela. Survey of local algorithms.
ACM Computing Surveys , 45(2), 2013. doi:10.1145/2431211.2431223 . Jeffery Westbrook. Randomized algorithms for multiprocessor page migration.
SIAM Journalon Computing , 23(5):951–966, 1994. doi:10.1137/S0097539791199796 . . Pacut, M. Parham, J. Rybicki, S. Schmid, J. Suomela, and A. Tereshchenko 47 Masafumi Yamashita and Tsunehiko Kameda. Computing on anonymous networks: partI—characterizing the solvable cases.
IEEE Transactions on Parallel and Distributed Systems ,7(1):69–89, 1996. doi:10.1109/71.481599doi:10.1109/71.481599