Cover Time in Edge-Uniform Stochastically-Evolving Graphs
Ioannis Lamprou, Russell Martin, Paul Spirakis
{Ioannis.Lamprou, Russell.Martin, P.Spirakis}@liverpool.ac.uk
Department of Computer Science, University of Liverpool, UK
July 19, 2018
Abstract
We define a general model of stochastically-evolving graphs, namely the Edge-Uniform Stochastically-Evolving Graphs. In this model, each possible edge of an underlying general static graph evolves independently, being either alive or dead at each discrete time step of evolution, following a (Markovian) stochastic rule. The stochastic rule is identical for each possible edge and may depend on the past k ≥ 0 states of the edge. We consider two kinds of random walks for a single agent taking place in such a dynamic graph: (i) The Random Walk with a Delay (RWD), where at each step the agent chooses (uniformly at random) an incident possible edge, i.e., an incident edge in the underlying static graph, and then waits until the edge becomes alive to traverse it. (ii) The more natural Random Walk on what is Available (RWA), where the agent only looks at alive incident edges at each time step and traverses one of them uniformly at random. Our study is on bounding the cover time, i.e., the expected time until each node is visited at least once by the agent. For RWD, we provide a first upper bound for the cases k = 0, 1 by correlating RWD with a simple random walk on a static graph. Moreover, we present a modified electrical network theory capturing the k = 0 case. For RWA, we derive some first bounds for the case k = 0, by reducing RWA to an RWD-equivalent walk with a modified delay. Further, we also provide a framework, which is shown to compute the exact value of the cover time for a general family of stochastically-evolving graphs in exponential time. Finally, we conduct experiments on the cover time of RWA in Edge-Uniform graphs and compare the experimental findings with our theoretical bounds.
1 Introduction

In the modern era of the Internet, modifications in a network topology can occur extremely frequently and in a disorderly way. Communication links may fail from time to time, while connections amongst terminals may appear or disappear intermittently. Thus, classical (static) network theory fails to capture such ever-changing processes. In an attempt to fill this void, different research communities have given rise to a variety of theories on dynamic networks. In the context of algorithms and distributed computing, such networks are usually referred to as temporal graphs [20]. A temporal graph is represented by a (possibly infinite) sequence of subgraphs of the same static graph. That is, the graph is evolving over a series of (discrete) time steps under a set of deterministic or stochastic rules of evolution. Such a rule can be edge- or graph-specific and may take as input graph instances observed in previous time steps.

∗ This work was supported by the University of Liverpool, EEE/CS School, NeST Initiative.

In this paper, we focus on stochastically-evolving temporal graphs. We define a model of evolution, where there exists a single stochastic rule, which is applied independently to each edge. Furthermore, our model is general in the sense that the underlying static graph is allowed to be a general connected graph, i.e., with no further constraints on its topology, and the stochastic rule can include any finite number of past observations.

Assume now that a single mobile agent is placed on an arbitrary node of a temporal graph evolving under the aforementioned model. Next, the agent performs a simple random walk; at each time step, after the graph instance is fixed according to the model, the agent chooses uniformly at random a node amongst the neighbors of its current node and visits it. The cover time of such a walk is defined as the expected number of time steps until the agent has visited each node at least once.
Herein, we prove some first bounds on the cover time for a simple random walk as defined above, mostly via the use of Markovian theory.

Random walks constitute a very important primitive in terms of distributed computing. Examples include their use in information dissemination [1] and random network structure [4]; also, see the short survey in [8]. In this work, we consider a single random walk as a fundamental building block for other, more distributed, scenarios to follow.
1.1 Related Work

A paper very relevant to ours is the one of Clementi, Macci, Monti, Pasquale and Silvestri [10], where they consider the flooding time in Edge-Markovian dynamic graphs. In such graphs, each edge independently follows a one-step Markovian rule and their model appears as a special case of ours (it matches our case k = 1). Further work under this Edge-Markovian paradigm includes [5, 11]. Another work related to our paper is the one of Avin, Koucký and Lotker [3], who define the notion of a Markovian Evolving Graph, i.e., a temporal graph evolving over a set of graphs G_1, G_2, . . ., where the process transits from G_i to G_j with probability p_ij, and consider random walk cover times. Note that their approach becomes computationally intractable if applied to our case; each of the possible edges evolves independently, thence causing the state space to be of size 2^m, where m is the number of possible edges in our model.

Clementi, Monti, Pasquale and Silvestri [12] study the broadcast problem, when at each time step the graph is selected according to the well-known G_{n,p} model. Also, Yamauchi, Izumi and Kamei [25] study the rendezvous problem for two agents on a ring, when each edge of the ring independently appears at every time step with some fixed probability p.

Moving to a more general scope, research in temporal networks is of interdisciplinary interest, since they are able to capture a wide variety of systems in physics, biology, social interactions and technology. For a view of the big picture, see the review in [19]. There exist several papers considering, mostly continuous-time, random walks on different models of temporal networks: In [23], they consider a walker navigating randomly on some specific empirical networks. Rocha and Masuda [22] study a lazy version of a random walk, where the walker remains at its current node according to some sojourn probability. In [16], they study the behavior of a continuous-time random walk on a stationary and ergodic time-varying dynamic graph.
Lastly, random walks with arbitrary waiting times are studied in [13], while random walks on stochastic temporal networks are surveyed in [18].

In the analysis to follow, we employ several seminal results around the theory of random walks and Markov chains. For random walks, we base our analysis on the seminal work in [1] and the electrical network theory presented in [9, 14]. For results on Markov chains, we cite textbooks [17, 21].

1.2 Our Results

We define a general model of stochastically-evolving graphs, where each possible edge evolves independently, but all of them evolve following the same stochastic rule. Furthermore, the stochastic rule may take into account the last k states of a given edge. The motivation for such a model lies in several practical examples from networking, where the existence of an edge in the recent past means it is likely to exist in the near future, e.g., for telephone or Internet links. In some other cases, existence may mean that an edge has "served its purpose" and is now unlikely to appear in the near future, e.g., due to a high maintenance cost. The model is a discrete-time one, following previous work in the computer science literature. Moreover, as a first start and for mathematical convenience, it is formalized as a synchronous system, where all possible edges evolve concurrently in distinct rounds (each round corresponding to a discrete time step).

Special cases of our model have appeared in previous literature, e.g., in [12, 25] for k = 0 and in the line of work starting from [10] for k = 1; however, they only consider special graph topologies (like the ring and the clique). On the other hand, the model we define is general in the sense that no assumptions, aside from connectivity, are made on the topology of the underlying graph and any amount of history is allowed into the stochastic rule.
Thence, we believe it can be valued as a basis for more general results to follow, capturing search or communication tasks in such dynamic graphs.

We hereby provide the first known bounds relative to the cover time of a simple random walk taking place in such stochastically-evolving graphs for k = 0. To do so, we make use of a simple, yet fairly useful, modified random walk, namely the Random Walk with a Delay (RWD), where at each time step the agent chooses uniformly at random from the incident edges of the static underlying graph and then waits for the chosen edge to become alive in order to traverse it. Despite the fact that this strategy may not sound naturally-motivated enough, it can act as a handy tool when studying other, more natural, random walk models, as in the case of this paper. Indeed, we study the natural random walk on such graphs, namely the Random Walk on What's Available (RWA), where at each time step the agent only considers the currently alive incident edges and chooses to traverse one out of them uniformly at random.

For the case k = 0, that is, when each edge appears at each round with a fixed probability p regardless of history, we prove that the cover time for RWD is upper bounded by C_G/p, where C_G is the cover time of a simple random walk on the (static) underlying graph G. The result can be obtained both by a careful mapping of the RWD walk to its corresponding simple random walk on the static graph and by generalizing the standard electrical network theory literature in [9, 14]. Later, we proceed to prove that the cover time for RWA is between C_G/(1 − (1 − p)^∆) and C_G/(1 − (1 − p)^δ), where δ, respectively ∆, is the minimum, respectively maximum, degree of the underlying graph. The main idea here is to reduce RWA to an RWD walk, where at each step the traversal delay probability is lower, respectively upper, bounded by (1 − (1 − p)^δ), respectively (1 − (1 − p)^∆).

For k = 1, the stochastic rule takes into account the previous, one time step ago, state of the edge. If an edge was not present, then it becomes alive with probability p, whereas if it was alive, then it dies with probability q. For RWD, we show a C_G/ξ_min upper bound by considering the minimum probability guarantee of existence at each round, i.e., ξ_min = min{p, 1 − q}. Similarly, we show a C_G/ξ_max lower bound, where ξ_max = max{p, 1 − q}.

Consequently, we demonstrate an exact, exponential-time approach to determine the precise cover time value for a general setting of stochastically-evolving graphs, including also the edge-independent model considered in this paper.

Finally, we conduct a series of experiments on calculating the cover time of RWA (the k = 0 case) on various underlying graphs. We compare our experimental results with the achieved theoretical bounds.

1.3 Outline

In Section 2, we provide preliminary definitions and results regarding important concepts and tools that we use in later sections. Then, in Section 3, we define our model of stochastically-evolving graphs in a more rigorous fashion. Afterwards, in Sections 4 and 5, we provide the analysis of our cover time bounds when, for determining the current state of an edge, we take into account its last 0 and 1 states, respectively. In Section 6, we demonstrate an exact approach for determining the cover time for general stochastically-evolving graphs. Then, in Section 7, we present some experimental results on the RWA cover time under zero-step history and compare them to the corresponding theoretical bounds of Section 4. Finally, in Section 8, we cite some concluding remarks.
2 Preliminaries

Let us hereby define a few standard notions related to a simple random walk performed by a single agent on a simple connected graph G = (V, E). By d(v), we denote the degree, i.e., the number of neighbors, of a node v ∈ V. A simple random walk is a Markov chain where, for v, u ∈ V, we set p_{vu} = 1/d(v), if (v, u) ∈ E, and p_{vu} = 0, otherwise. That is, an agent performing the walk chooses the next node to visit uniformly at random amongst the set of neighbors of its current node. Given two nodes v, u, the expected time for a random walk starting from v to arrive at u is called the hitting time from v to u and is denoted by H_{vu}. The cover time of a random walk is the expected time until the agent has visited each node of the graph at least once. Let P stand for the stochastic matrix describing the transition probabilities for a random walk (or, in general, a discrete-time Markov chain), where p_{ij} denotes the probability of transition from node i to node j, p_{ij} ≥ 0 for all i, j and Σ_j p_{ij} = 1 for all i. Then, the matrix P^t consists of the transition probabilities to move from one node to another after t time steps, and we denote the corresponding entries as p^{(t)}_{ij}. Asymptotically, lim_{t→∞} P^t is referred to as the limiting distribution of P. A stationary distribution for P is a row vector π such that πP = π and Σ_i π_i = 1. That is, π is not altered after an application of P. If every state can be reached from any other in a finite number of steps, i.e., P is irreducible, and the transition probabilities do not exhibit periodic behavior with respect to time, i.e., gcd{t : p^{(t)}_{ij} > 0} = 1, then the stationary distribution is unique and it matches the limiting distribution (Fundamental Theorem of Markov chains). The mixing time is the expected number of time steps until a Markov chain approaches its stationary distribution.

In order to derive lower bounds for RWA, we use the following graph family, commonly known as lollipop graphs, capturing the maximum cover time for a simple random walk, e.g., see [7, 15].
Definition 1. A lollipop graph L_n^k consists of a clique on k nodes and a path on n − k nodes, connected with a cut-edge, i.e., an edge whose deletion makes the graph disconnected.

3 The Edge-Uniform Evolution Model

Let us define a general model of a dynamically evolving graph. Let G = (V, E) stand for a simple, connected graph, from now on referred to as the underlying graph of our model. The number of nodes is given by n = |V|, while the number of edges is denoted by m = |E|. For a node v ∈ V, let N(v) = {u : (v, u) ∈ E} stand for the open neighborhood of v and d(v) = |N(v)| for the (static) degree of v. Note that we make no assumptions regarding the topology of G, besides connectedness. We refer to the edges of G as the possible edges of our model. We consider evolution over a sequence of discrete time steps (namely 0, 1, 2, . . .) and denote by G = (G_0, G_1, G_2, . . .) the infinite sequence of graphs G_t = (V_t, E_t), where V_t = V and E_t ⊆ E. That is, G_t is the graph appearing at time step t, and each edge e ∈ E is either alive (if e ∈ E_t) or dead (if e ∉ E_t) at time step t.

Let R stand for a stochastic rule dictating the probability that a given possible edge is alive at any time step. We apply R at each time step and at each edge independently to determine the set of currently alive edges, i.e., the rule is uniform with regard to the edges. In other words, let e_t stand for a random variable where e_t = 1, if e is alive at time step t, or e_t = 0, otherwise. Then, R determines the value of Pr(e_t = 1 | H_t), where H_t is also determined by R and denotes the history length, i.e., the values of e_{t−1}, e_{t−2}, . . . considered when deciding for the existence of an edge at time step t. For instance, H_t = ∅ means no history is taken into account, while H_t = {e_{t−1}} means the previous state of e is taken into account when deciding for its current state.
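For concreteness, sampling one graph instance under a zero-step-history rule (each possible edge alive independently with probability p, the case analyzed later) can be sketched as follows; this is our own minimal illustration, and the function and variable names are ours, not the paper's:

```python
import random

def sample_instance(possible_edges, p, rng):
    """Zero-step-history rule R: each possible edge of the underlying
    graph is alive at the current step independently with probability p."""
    return {e for e in possible_edges if rng.random() < p}

# Underlying graph: a 4-cycle. G_t is a random subset of its possible edges.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
rng = random.Random(42)
G_t = sample_instance(edges, 0.5, rng)
```

Repeated calls yield the sequence G_0, G_1, G_2, . . .; a history-dependent rule would additionally thread the previous states of each edge through the sampler.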
Overall, the aforementioned Edge-Uniform Evolution model (shortly EUE) is defined by the parameters G, R and some initial input instance G_0. In the following sections, we consider some special cases for R and provide some first bounds for the cover time of G under this model. Each time step of evolution consists of two stages: in the first stage, the graph G_t is fixed for time step t following R, while in the second stage, the agent moves to a node in N_t[v] = {v} ∪ {u ∈ V : (v, u) ∈ E_t}. Notice that, since G is connected, the cover time under EUE is finite, since R models edge-specific delays.

4 Zero-Step History

We hereby analyze the cover time of G under EUE in the special case when no history is taken into consideration for computing the probability that a given edge is alive at the current time step. Intuitively, each edge appears with a fixed probability p at every time step, independently of the others. More formally, for all e ∈ E and time steps t, Pr(e_t = 1) = p ∈ (0, 1].

A first approach toward covering G with a single agent is the following: The agent is randomly walking G as if all edges were present and, when an edge is not present, it just waits for it to appear in a following time step. More formally, suppose the agent arrives at a node v ∈ V with (static) degree d(v) at the second stage of time step t. Then, after the graph is fixed for time step t + 1, the agent selects a neighbor of v, say u ∈ N(v), uniformly at random, i.e., with probability 1/d(v). If (v, u) ∈ E_{t+1}, then the agent moves to u and repeats the above procedure. Otherwise, it remains on v until the first time step t′ > t + 1 such that (v, u) ∈ E_{t′} and then moves to u. This way, p acts as a delay probability, since the agent follows the same random walk it would on a static graph, but with an expected delay of 1/p time steps at each node. Notice that, in order for such a strategy to be feasible, each node must maintain knowledge about its neighbors in the underlying graph; not just the currently alive ones.
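Since each chosen edge is alive at each step independently with probability p, the waiting stage makes every traversal delay Geometric(p). A minimal simulation of the strategy just described (our own sketch; the adjacency-dict representation and names are assumptions, not the paper's):

```python
import random

def rwd_cover_steps(adj, p, start=0, rng=None):
    """Simulate one RWD run on the underlying graph `adj`
    (node -> list of static neighbors); return the number of time steps
    until every node has been visited at least once. Each chosen edge
    takes Geometric(p) steps to become alive."""
    rng = rng or random.Random()
    v, visited, steps = start, {start}, 0
    while len(visited) < len(adj):
        u = rng.choice(adj[v])   # uniform choice over *static* neighbors
        while True:              # wait for the edge (v, u) to appear
            steps += 1
            if rng.random() < p:
                break
        v = u
        visited.add(v)
    return steps

# Example underlying graph: a 4-cycle.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
```

With p = 1 the simulation degenerates to a simple random walk on the static graph, which is exactly the coupling used in the analysis below.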
From now on, we refer to this strategy for the agent as the Random Walk with a Delay (shortly RWD).

Now, let us upper bound the cover time of RWD by exploiting its strong correlation to a simple random walk on the underlying graph G via Wald's Equation (Theorem 1). Below, let C_G stand for the cover time of a simple random walk on the static graph G.

Theorem 1 ([24]). Let X_1, X_2, . . . , X_N be a sequence of real-valued, independent and identically distributed random variables, where N is a nonnegative integer random variable independent of the sequence (in other words, a stopping time for the sequence). If each X_i and N have finite expectations, then it holds that E[X_1 + X_2 + . . . + X_N] = E[N] · E[X_1].

Theorem 2. For any connected underlying graph G evolving under the zero-step history EUE, the cover time for RWD is C_G/p.

Proof. Consider a simple random walk, shortly SRW, and an RWD (under the EUE model) taking place on a given connected graph G. Given that RWD decides on the next node to visit uniformly at random based on the underlying graph, that is, in exactly the same way SRW does, we use a coupling argument to enforce RWD and SRW to follow the exact same trajectory, i.e., sequence of visited nodes.

Then, let the trajectory end when each node in G has been visited at least once and denote by T the total number of node transitions made by the agent. Such a trajectory under SRW will cover all nodes in expectedly E[T] = C_G time steps. On the other hand, in the RWD case, for each transition we have to take into account the delay experienced until the chosen edge becomes available. Let D_i ≥ 1 (for 1 ≤ i ≤ T) stand for the actual delay corresponding to node transition i in the trajectory. Then, the expected number of time steps till the trajectory is realized is given by E[D_1 + . . . + D_T]. Since the random variables D_i are independent and identically distributed by the edge-uniformity of our model, T is a stopping time for them, and all of them have finite expectations, by Theorem 1 we get E[D_1 + . . . + D_T] = E[T] · E[D_1] = C_G · (1/p).

For an explicit general bound on RWD, it suffices to use C_G ≤ 2m(n − 1), proved in [1].
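The 1/p scaling can also be checked numerically on a tiny example: conditioning on the first transition of RWD gives H_{u,v} = 1/p + (1/d(u)) Σ_{w∈N(u)} H_{w,v}, with H_{v,v} = 0, a linear system we can solve directly. The sketch below is our own pure-Python illustration for tiny graphs (names are ours); on the path 0–1–2 with p = 1/2, the RWD hitting times to node 2 come out as exactly the static ones (4 and 3) divided by p.

```python
def rwd_hitting_times(adj, target, p):
    """Solve H[u] = 1/p + (1/d(u)) * sum_{w in N(u)} H[w], H[target] = 0,
    by Gaussian elimination (pure Python; intended for tiny graphs only)."""
    nodes = [u for u in adj if u != target]
    idx = {u: i for i, u in enumerate(nodes)}
    k = len(nodes)
    # Build the linear system A x = b.
    A = [[0.0] * k for _ in range(k)]
    b = [1.0 / p] * k
    for u in nodes:
        i = idx[u]
        A[i][i] = 1.0
        for w in adj[u]:
            if w != target:
                A[i][idx[w]] -= 1.0 / len(adj[u])
    # Gaussian elimination with partial pivoting.
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    # Back-substitution.
    x = [0.0] * k
    for r in range(k - 1, -1, -1):
        s = b[r] - sum(A[r][c] * x[c] for c in range(r + 1, k))
        x[r] = s / A[r][r]
    return {u: x[idx[u]] for u in nodes}
```

Setting p = 1 recovers the static hitting times, in line with the coupling argument of Theorem 2.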
A Modified Electrical Network. Another way to analyze the above procedure is to make use of a modified version of the standard literature approach of electrical networks and random walks [9, 14]. This point of view gives us expressions for the hitting time between any two nodes of the underlying graph. That is, we hereby (in Lemmata 3, 4 and Theorem 5) provide a generalization of the results given in [9, 14], thus correlating the hitting and commute times of RWD to an electrical-network analog and reaching a conclusion for the cover time similar to the one of Theorem 2.

In particular, given the underlying graph G, we design an electrical network, N(G), with the same edges as G, but where each edge has a resistance of r = 1/p ohms. Let H_{u,v} stand for the hitting time from node u to node v in G, i.e., the expected number of time steps until the agent reaches v after starting from u and following RWD. Furthermore, let φ_{u,v} declare the electrical potential difference between nodes u and v in N(G) when, for each w ∈ V, we inject d(w) amperes of current into w and withdraw 2m amperes of current from a single node v. We now upper-bound the cover time of G under RWD by correlating H_{u,v} to φ_{u,v}.

Lemma 3. For all u, v ∈ V, H_{u,v} = φ_{u,v} holds.

Proof. Let us denote by C_{uw} the current flowing between two neighboring nodes u and w. Then, d(u) = Σ_{w∈N(u)} C_{uw}, since at each node the total inward current must match the total outward current (Kirchhoff's first law). Moving forward, C_{uw} = φ_{u,w}/r = φ_{u,w}/(1/p) = p · φ_{u,w} by Ohm's law. Finally, φ_{u,w} = φ_{u,v} − φ_{w,v}, since the sum of electrical potential differences forming a path is equal to the total electrical potential difference of the path (Kirchhoff's second law). Overall, we can rewrite d(u) = Σ_{w∈N(u)} p(φ_{u,v} − φ_{w,v}) = d(u) · p · φ_{u,v} − p Σ_{w∈N(u)} φ_{w,v}. Rearranging gives

φ_{u,v} = 1/p + (1/d(u)) Σ_{w∈N(u)} φ_{w,v}.

Regarding the hitting time from u to v, we rewrite it based on the first step:

H_{u,v} = 1/p + (1/d(u)) Σ_{w∈N(u)} H_{w,v},

where the first addend stands for the expected delay until the chosen edge becomes alive under RWD, and the second addend stands for the expected time for the rest of the walk. Wrapping it up, since both formulas above hold for each u ∈ V \ {v}, therefore inducing two identical linear systems of n equations and n variables, it follows that there exists a unique solution to both of them and H_{u,v} = φ_{u,v}.

In the lemma below, let R_{u,v} stand for the effective resistance between u and v, i.e., the electrical potential difference induced when flowing a current of one ampere from u to v.

Lemma 4.
For all u, v ∈ V, H_{u,v} + H_{v,u} = 2mR_{u,v} holds.

Proof. Similarly to the definition of φ_{u,v} above, one can define φ_{v,u} as the electrical potential difference between v and u when d(w) amperes of current are injected into each node w and 2m of them are withdrawn from node u. Next, note that changing all currents' signs leads to a new network where, for the electrical potential difference, namely φ′, it holds that φ′_{u,v} = φ_{v,u}. We can now apply the Superposition Theorem (see Section 13.3 in [6]) and linearly superpose the two networks implied by φ_{u,v} and φ′_{u,v}, creating a new one where 2m amperes are injected into u, 2m amperes are withdrawn from v, and no current is injected or withdrawn at any other node. Let φ′′_{u,v} stand for the electrical potential difference between u and v in this last network. By the superposition argument, we get φ′′_{u,v} = φ_{u,v} + φ′_{u,v} = φ_{u,v} + φ_{v,u}, while from Ohm's law we get φ′′_{u,v} = 2m · R_{u,v}. The proof concludes by combining these two observations and applying Lemma 3.

Theorem 5. For any connected underlying graph G evolving under the zero-step history EUE, the cover time for RWD is at most 2m(n − 1)/p.

Proof. Consider a spanning tree T of G. An agent, starting from any node, can visit all nodes by performing an Eulerian tour on the edges of T (crossing each edge twice). This is a feasible way to cover G, and thus the expected time for an agent to finish the above task provides an upper bound on the cover time. The expected time to cover each edge twice is given by Σ_{(u,v)∈E_T} (H_{u,v} + H_{v,u}), where E_T is the edge-set of T with |E_T| = n − 1. By Lemma 4, this is equal to 2m Σ_{(u,v)∈E_T} R_{u,v} = 2m Σ_{(u,v)∈E_T} 1/p = 2m(n − 1)/p.

Random Walk with a Delay does provide a nice connection to electrical network theory. However, depending on p, there could be long periods of time where the agent is simply standing still on the same node. Since the walk is random anyway, waiting for an edge to appear may not sound very wise. Hence, we now analyze the strategy of a Random Walk on What's Available (shortly
RWA). That is, suppose the agent has just arrived at a node v after the second stage at time step t and then E_{t+1} is fixed after the first stage at time step t + 1. Now, the agent picks uniformly at random only amongst the alive incident edges at time step t + 1. Let d_{t+1}(v) stand for the degree of node v in G_{t+1}. If d_{t+1}(v) = 0, then the agent does not move at time step t + 1. Otherwise, if d_{t+1}(v) > 0, it picks each alive incident edge with probability 1/d_{t+1}(v). The agent then follows the selected edge to complete the second stage of time step t + 1 and repeats the strategy. In a nutshell, the agent keeps moving randomly on available edges and only remains on the same node if no edge is alive at the current time step. Below, let δ = min_{v∈V} d(v) and ∆ = max_{v∈V} d(v).

Theorem 6.
For any connected underlying graph G with min-degree δ and max-degree ∆ evolving under the zero-step history EUE, the cover time for RWA is at least C_G/(1 − (1 − p)^∆) and at most C_G/(1 − (1 − p)^δ).

Proof. Suppose the agent follows RWA and has reached node u ∈ V after time step t. Then, G_{t+1} becomes fixed and the agent selects uniformly at random an alive incident edge to traverse. Let M_{uv} (where v ∈ {w ∈ V : (u, w) ∈ E}) stand for a random variable taking value 1 if the agent moves to node v and 0 otherwise. For k = 1, 2, . . . , d(u) = d, let A_k stand for the event that d_{t+1}(u) = k. Therefore, Pr(A_k) = C(d, k) p^k (1 − p)^{d−k}, where C(d, k) denotes the binomial coefficient, is exactly the probability that k out of the d edges exist, since each edge exists independently with probability p. Now, let us consider the probability Pr(M_{uv} = 1 | A_k): the probability that v will be reached given that k neighbors are present. This is exactly the product of the probability that v is indeed in the alive k-tuple (say p_1) and the probability that v is then chosen uniformly at random (say p_2) from the k-tuple. We have p_1 = C(d−1, k−1)/C(d, k) = k/d, since the model is edge-uniform and we can fix v and choose any of the C(d−1, k−1) k-tuples with v in them out of the C(d, k) total ones. On the other hand, p_2 = 1/k by uniformity. Overall, we get Pr(M_{uv} = 1 | A_k) = p_1 · p_2 = 1/d. We can now apply the law of total probability to calculate

Pr(M_{uv} = 1) = Σ_{k=1}^{d} Pr(M_{uv} = 1 | A_k) Pr(A_k) = (1/d) Σ_{k=1}^{d} C(d, k) p^k (1 − p)^{d−k} = (1/d)(1 − (1 − p)^d).

To conclude, let us reduce RWA to RWD. Indeed, in RWD the equivalent transition probability is Pr(M_{uv} = 1) = (1/d) · p, accounting both for the uniform choice and the delay p. Therefore, the RWA probability can be viewed as (1/d) · p′, where p′ = (1 − (1 − p)^d). To achieve edge-uniformity, we set p′ = (1 − (1 − p)^δ), which lower-bounds the per-step success probability of each edge, and finally we can apply the same RWD analysis by substituting p by p′. Similarly, we can set the upper bound p′′ = (1 − (1 − p)^∆) on the per-step success probability to lower-bound the cover time. Applying Theorem 2 completes the proof.

The value of δ used to lower-bound the transition probability may be a harsh estimate for general graphs. However, it becomes quite more accurate in the special case of a d-regular underlying graph, where δ = ∆ = d. To conclude this section, we provide a worst-case lower bound on the cover time based on similar techniques as above.

Lemma 7.
There exists an underlying graph G evolving under the zero-step history EUE such that the RWA cover time is at least Ω(mn/(1 − (1 − p)^∆)).

Proof. We consider the lollipop graph L_n^{n/2}, which is known to attain a cover time of Ω(mn) for a simple random walk [7, 15]. Applying the lower bound from Theorem 6 completes the proof.

5 One-Step History

We now turn our attention to the case where the current state of an edge affects its next state. That is, we take into account a history of length one when computing the probability of existence for each edge independently. A Markovian model for this case was introduced in [10]; see Table 1. The left side of the table accounts for the current state of an edge, while the top for the next one. The respective table box provides us with the probability of transition from one state to the other. Intuitively, another way to refer to this model is as the
Birth-Death model: a dead edge becomes alive with probability p, while an alive edge dies with probability q.

Table 1: Birth-Death chain for a single edge [10]

            dead     alive
  dead      1 − p    p
  alive     q        1 − q

In the following, we consider a graph G evolving under the EUE model where each possible edge independently follows the aforementioned stochastic rule of evolution; we refer to such graphs as (p, q)-graphs.

Let us hereby derive some first bounds for the cover time of
RWD via a min-max approach. The idea here is to make use of the "being alive" probabilities to prove lower and upper bounds for the cover time, parameterized by ξ_min = min{p, 1 − q} and ξ_max = max{p, 1 − q}.

Let us consider an RWD walk on a general connected graph G evolving under EUE with a zero-step history rule dictating Pr(e_t = 1) = ξ_min for any edge e and time step t. We refer to this walk as the Upper Walk with a Delay, shortly
UWD. Respectively, we consider an
RWD walk when the stochastic rule of evolution is given by Pr(e_t = 1) = ξ_max. We refer to this specific walk as the Lower Walk with a Delay, shortly
LWD. Below, we make use of
UWD and
LWD in order to bound the cover time of
RWD in general (p, q)-graphs.
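For intuition on ξ_min and ξ_max, one can simulate a single edge under the Birth-Death rule: its long-run fraction of alive steps is p/(p + q) (the stationary distribution of the two-state chain of Table 1), which always lies between ξ_min and ξ_max. A seeded simulation sketch (our own function and variable names):

```python
import random

def birth_death_alive_fraction(p, q, steps, rng):
    """Simulate one possible edge under the Birth-Death rule (Table 1),
    starting dead, and return the fraction of steps it spent alive."""
    alive = False
    alive_steps = 0
    for _ in range(steps):
        if alive:
            alive = (rng.random() >= q)  # an alive edge dies w.p. q
        else:
            alive = (rng.random() < p)   # a dead edge is born w.p. p
        alive_steps += alive             # bool counts as 0/1
    return alive_steps / steps
```

For example, with p = 0.3 and q = 0.2 the empirical fraction concentrates around 0.3/0.5 = 0.6, inside the interval [ξ_min, ξ_max] = [0.3, 0.8].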
Theorem 8.
For any connected underlying graph G and the Birth-Death rule, the cover time of RWD is at least C_G/ξ_max and at most C_G/ξ_min.

Proof. Regarding
UWD, one can design a corresponding electrical network where each edge has a resistance of 1/ξ_min, capturing the expected delay till any possible edge becomes alive. Applying Theorem 2 gives an upper bound for the UWD cover time.

Let C′ stand for the UWD cover time and C stand for the cover time of RWD under the Birth-Death rule. It now suffices to show C ≤ C′ to conclude.

In Birth-Death, the expected delay before each edge traversal is either 1/p, in case the possible edge is dead, or 1/(1 − q), in case the possible edge is alive. In both cases, the expected delay is upper-bounded by the 1/ξ_min delay of UWD, and therefore C ≤ C′ follows, since any trajectory under RWD will take at most as much time as the same trajectory under
UWD.

In a similar manner, the cover time of
LWD lower bounds the cover time of
RWD and, by applying Theorem 2, we derive a lower bound of C_G/ξ_max.

6 An Exact Approach

So far, we have established upper and lower bounds for the cover time of edge-uniform stochastically-evolving graphs. Our bounds are based on combining extended results from simple random walk theory and careful delay estimations. In this section, we describe an approach to determine the exact cover time for temporal graphs evolving under any stochastic model. Then, we apply this approach to the already seen zero-step history and one-step history cases of
RWA.

The key component of our approach is a Markov chain capturing both phases of evolution: the graph dynamics and the walk trajectory. In that case, calculating the cover time reduces to calculating the hitting time to a particular subset of Markov states. Although computationally intractable for large graphs, such an approach provides the exact cover time value and is hence practical for smaller graphs.

Suppose we are given an underlying graph G = (V, E) and a set of stochastic rules R capturing the evolution dynamics of G. That is, R can be seen as a collection of probabilities of transition from one graph instance to another. We denote by k the (longest) history length taken into account by the stochastic rules. Like before, let n = |V| stand for the number of nodes and m = |E| for the number of possible edges of G. We define a Markov chain M with states of the form (H, v, V_c), where

• H = (H_1, H_2, . . . , H_k) is a k-tuple of temporal graph instances, that is, for each i = 1, 2, . . . , k, H_i is the graph instance present i − 1 time steps in the past (within the history H),

• v ∈ V(G) is the current position of the agent,

• V_c ⊆ V(G) is the set of already covered nodes, i.e., the set of nodes which have been visited at least once by the agent.

As described earlier for our edge-uniform model, we assume evolution happens in two phases. First, the new graph instance is determined according to the rule-set R. Second, the new agent position is determined based on a random walk on what's available. In this respect, consider a state S = (H, v, V_c) and another state S′ = (H′, v′, V′_c) of the described Markov chain M. Let Pr[S → S′] denote the transition probability from S to S′. We seek to express this probability as a product of the probabilities for the two phases of evolution.
The latter is possible since, in our model, the random walk strategy is independent of the graph evolution.

For the graph dynamics, let Pr[H →_R H′] stand for the probability to move from a history-tuple H to another history-tuple H′ under the rules of evolution in R. Note that, for i = 1, 2, . . . , k − 1, it must hold H′_{i+1} = H_i in order to properly maintain history; otherwise the probability becomes zero. On the other hand, for valid transitions, the probability reduces to Pr[H′_1 | (H_1, H_2, . . . , H_k)], which is exactly the probability that H′_1 becomes the new instance given the history H = (H_1, H_2, . . . , H_k) of past instances (and any such probability is either given directly or implied by R).

For the second phase, i.e., the random walk on what's available, we denote by Pr[v →_{H_j} v′] the probability of moving from v to v′ on some graph instance H_j. Since the random walk strategy is only based on the current instance, we can derive a general expression for this probability, which is independent of the graph dynamics R. Below, let N_{H_j}(v) stand for the set of neighbors of v in graph instance H_j. If {v, v′} ∉ E(G), that is, if there is no possible edge between v and v′, then for any temporal graph instance H_j, it holds Pr[v →_{H_j} v′] = 0. The probability is also zero for all graph instances H_j where the possible edge is not alive, i.e., {v, v′} ∉ E(H_j). In contrast, if {v, v′} ∈ E(H_j), then Pr[v →_{H_j} v′] = |N_{H_j}(v)|^{−1}, since the agent chooses a destination uniformly at random out of the currently alive ones. Finally, if v = v′, then the agent remains still, with probability 1, only if there exist no alive incident edges. We summarize the above facts in the following equation:

Pr[v →_{H_j} v′] =
  1,                    if N_{H_j}(v) = ∅ and v′ = v
  |N_{H_j}(v)|^{−1},    if v′ ∈ N_{H_j}(v)
  0,                    otherwise                         (1)

Overall, we combine the two phases in M and introduce the following transition probabilities.

• If |V_c| < n:

Pr[(H, v, V_c) → (H′, v′, V′_c)] =
  Pr[H →_R H′] · Pr[v →_{H′_1} v′],   if v′ ∈ V_c and V′_c = V_c
  Pr[H →_R H′] · Pr[v →_{H′_1} v′],   if v′ ∉ V_c and V′_c = V_c ∪ {v′}
  0,                                   otherwise

• If |V_c| = n:

Pr[(H, v, V_c) → (H′, v′, V′_c)] = 1 if H = H′, v = v′ and V_c = V′_c, and 0 otherwise.

For |V_c| < n, notice that only two cases may have a non-zero probability with respect to the growth of V_c. If the newly visited node v′ is already covered, then V′_c must be identical to V_c, since no new nodes are covered during this transition. Further, if a new node v′ is not yet covered, then V′_c is updated to include it as well as all the covered nodes in V_c.

For |V_c| = n, the idea is that, once such a state has been reached, and so all nodes are covered, there is no need for further exploration. Therefore, such a state can be made absorbing. In this respect, let us denote the set of these states as Γ = {(H, v, V_c) ∈ M : |V_c| = n}.

Definition 2.
Let ECT(G, R) be the problem of determining the exact value of the cover time for an RWA on a graph G stochastically evolving under rule-set R.

Theorem 9. Assume all probabilities of the form Pr[H →_R H′] used in M are exact reals and known a priori. Then, for any underlying graph G and stochastic rule-set R, it holds that ECT(G, R) ∈ EXPTIME.

Proof.
For each temporal graph instance H_i, in the worst case, there exist 2^m possibilities, since each of the m possible edges is either alive or dead at a graph instance. For the whole history H, the number of possibilities becomes (2^m)^k = 2^{k·m} by taking the product of k such terms. There are n possibilities for the walker's position v. Finally, for each v ∈ V(G), we only allow states such that v ∈ V_c. Therefore, since we fix v, there are up to n − 1 other nodes that may or may not belong to V_c, leading to a total of O(2^{n−1}) possibilities for V_c. Taking everything into account, M has a total of O(2^{k·m + n − 1} · n) states.

Let H_{s,Γ} stand for the hitting time of Γ when starting from a state s ∈ M. Assuming exact real arithmetic, we can compute all such hitting times by solving the following system (Theorem 1.3.5 in [21]):

  H_{s,Γ} = 0,                                          for all s ∈ Γ
  H_{s,Γ} = 1 + Σ_{s′ ∉ Γ} Pr[s → s′] · H_{s′,Γ},       for all s ∉ Γ

Let C stand for the cover time of an RWA on G evolving under R. By definition, the cover time is the expected time till all nodes are covered, regardless of the position of the walker at that time. Consider the set S = {(H, v, {v}) ∈ M : v ∈ V(G)} of start positions for the agent as depicted in M. Then, it follows that C = max_{s ∈ S} H_{s,Γ}, where we take the worst-case hitting time to a state in Γ over any starting position of the agent. In terms of time complexity, computing C requires computing all values H_{s,Γ}, for every s ∈ S. To do so, one must solve the above linear system of size O(2^{k·m + n − 1} · n), which can be done in time exponential in the input parameters n, m and k.

It is noteworthy that this approach is general in the sense that there are no assumptions on the graph evolution rule-set R besides it being stochastic, i.e., describing the probability of transition from each graph instance to another given some history of length k. In this regard, Theorem 9 captures both the case of Markovian Evolving Graphs [3] and the case of Edge-Uniform Graphs considered in this paper.
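To make the construction concrete, here is a minimal Python sketch (ours, not the paper's code) of the computation above for the zero-history (k = 0) edge-uniform case. Since, for k = 0, the next instance is independent of the current one, the instance component of a state can be summed out, leaving states (v, V_c); the hitting-time system is then solved with numpy. The function name and state encoding are our own assumptions:

```python
import itertools
import numpy as np

def exact_cover_time(nodes, edges, p):
    """Exact RWA cover time for the zero-history (k = 0) edge-uniform
    model: every possible edge is alive independently with probability p.
    We keep states (v, V_c) with v in V_c and solve the hitting-time
    linear system to the absorbing set |V_c| = n."""
    m = len(edges)
    # all 2^m instances, each with probability p^alpha * (1-p)^(m-alpha)
    instances = []
    for bits in itertools.product([0, 1], repeat=m):
        alive = [e for e, b in zip(edges, bits) if b]
        instances.append((alive, p ** sum(bits) * (1 - p) ** (m - sum(bits))))
    states = [(v, frozenset(c))
              for r in range(1, len(nodes) + 1)
              for c in itertools.combinations(nodes, r)
              for v in c]
    idx = {s: i for i, s in enumerate(states)}
    full = frozenset(nodes)
    A = np.eye(len(states))
    b = np.zeros(len(states))
    for (v, vc), i in idx.items():
        if vc == full:               # absorbing states of Gamma
            continue
        b[i] = 1.0                   # H_{s,Gamma} = 1 + sum(...)
        for alive, prob in instances:
            nbrs = [u for (x, y) in alive for u in (x, y)
                    if v in (x, y) and u != v]
            moves = nbrs if nbrs else [v]   # isolated agent stays put
            for u in moves:
                A[i, idx[(u, vc | {u})]] -= prob / len(moves)
    h = np.linalg.solve(A, b)
    # cover time: worst-case start state (v, {v}) over all nodes v
    return max(h[idx[(v, frozenset([v]))]] for v in nodes)
```

For the 3-node path with p = 1 this recovers the worst-case static cover time of 5 steps (start at the middle node); for p < 1 the value strictly increases, as the walker occasionally finds no alive incident edge.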
We now proceed and show how the aforementioned general approach applies to the zero-step and one-step history cases of Edge-Uniform Graphs. To do so, we calculate the corresponding graph-dynamics probabilities. The random walk probabilities are given in Equation 1.

RWA on Edge-Uniform Graphs (Zero-Step History).
Based on the general model, we rewrite the transition probabilities for the special case when RWA takes place on an edge-uniform graph without taking into account any memory, i.e., the same case as in Section 4. Notice that, since past instances are not considered in this case, the history-tuple reduces to a single graph instance H. We rewrite the transition probabilities, for the case |V_c| < n, as follows:

Pr[(H, v, V_c) → (H′, v′, V′_c)] =
  Pr[H′ | H] · Pr[v →_{H′} v′],   if v′ ∈ V_c and V′_c = V_c
  Pr[H′ | H] · Pr[v →_{H′} v′],   if v′ ∉ V_c and V′_c = V_c ∪ {v′}
  0,                               otherwise

Let α stand for the number of edges alive in H′. Since there is no dependence on history and each edge appears independently with probability p, we get Pr[H′ | H] = Pr[H′] = p^α · (1 − p)^{m−α}.

RWA on Edge-Uniform Graphs (One-Step History).
We hereby rewrite the transition probabilities for a Markov chain capturing an RWA taking place on an edge-uniform graph where, at each time step, the current graph instance is taken into account to generate the next one. This case is related to the results in Section 5. Due to the history inclusion, the transition probabilities become more involved than those seen for the zero-history case. Again, we consider the non-absorbing states, where |V_c| < n.

Pr[((H_1, H_2), v, V_c) → ((H′_1, H′_2), v′, V′_c)] =
  Pr[(H_1, H_2) → (H′_1, H′_2)] · Pr[v →_{H′_1} v′],   if v′ ∈ V_c and V′_c = V_c
  Pr[(H_1, H_2) → (H′_1, H′_2)] · Pr[v →_{H′_1} v′],   if v′ ∉ V_c and V′_c = V_c ∪ {v′}
  0,                                                    otherwise

If H′_2 ≠ H_1, i.e., if it does not hold that, for each e ∈ E(G), e ∈ E(H′_2) if and only if e ∈ E(H_1), then Pr[(H_1, H_2) → (H′_1, H′_2)] = 0, since otherwise the history is not properly maintained. On the other hand, if H′_2 = H_1, then Pr[(H_1, H_2) → (H′_1, H′_2)] = Pr[(H_1, H_2) → (H′_1, H_1)] = Pr[H′_1 | H_1]. To derive an expression for the latter, we need to consider all edge (mis)matches between H′_1 and H_1, and properly apply the Birth-Death rule (Table 1). Below, we denote by D(H) = E(G) \ E(H) the set of possible edges of G which are dead at instance H. Let c_1 = |D(H_1) ∩ D(H′_1)|, c_2 = |D(H_1) ∩ E(H′_1)|, c_3 = |E(H_1) ∩ D(H′_1)| and c_4 = |E(H_1) ∩ E(H′_1)|. Each of the c_1 edges was dead in H_1 and remained dead in H′_1, with probability 1 − p. Similarly, each of the c_2 edges was dead in H_1 and became alive in H′_1, with probability p. Also, each of the c_3 edges was alive in H_1 and died in H′_1, with probability q. Finally, each of the c_4 edges was alive in H_1 and remained alive in H′_1, with probability 1 − q. Overall, due to the edge-independence of the model, we get Pr[H′_1 | H_1] = (1 − p)^{c_1} · p^{c_2} · q^{c_3} · (1 − q)^{c_4}.

In this section, we discuss some experimental results to complement our previously-established theoretical bounds. We simulate an
RWA taking place in graphs evolving under the zero-step history model. We provide an experimental estimation of the value of the cover time for such a walk. To do so, for each specific graph and value of p considered, we repeat the experiment a large number of times, e.g., at least 1000 times. In the first experiment, we start from a graph instance with no alive edges. At each step, after the graph evolves, the walker picks uniformly at random an incident alive edge to traverse. The process continues till all nodes are visited at least once. Each next experiment commences with the last graph instance of the previous experiment as its first instance.

We construct underlying graphs in the following fashion: given a natural number n, we initially construct a path on n nodes, namely v_1, v_2, . . . , v_n. Afterward, for each two distinct nodes v_i and v_j, we add an edge {v_i, v_j} with probability equal to a randomThreshold parameter. For instance, randomThreshold = 0 means the graph remains a path. On the other hand, for randomThreshold = 1, the graph becomes a clique.

In Tables 2, 3 and 4, we display the average cover time, rounded to the nearest natural number, computed in some indicative experiments for randomThreshold equal to 0.85, 0.5 and 0.15, respectively. Consequently, we provide estimates for a lower and an upper bound on the temporal cover time. In this respect, we experimentally compute a value for the cover time of a simple random walk in the underlying graph, i.e., the static cover time. Then, we plug in this value in place of C_G to apply the bounds given in Theorem 6. Overall, the temporal cover times computed appear to be within their corresponding lower and upper bounds.

Table 2: Experimental Results for Randomly-Produced Graphs (randomThreshold = 0.85)

Size   δ     Δ     p       Static Cover Time   Temporal Cover Time   Lower Bound   Upper Bound
10     6     9     0.9     28                  28                    28            28
10     7     9     0.5     28                  28                    28            28
10     7     9     0.2     27                  31                    31            34
10     7     9     0.1     29                  50                    47            61
10     7     9     0.05    28                  78                    76            93
10     7     8     0.01    28                  356                   83            413
100    74    92    0.9     535                 535                   535           535
100    74    91    0.05    530                 543                   535           543
100    76    92    0.01    536                 912                   888           1003
100    74    92    0.005   541                 1476                  1465          1746
250    197   229   0.99    1551                1551                  1551          1551
250    194   228   0.75    1555                1555                  1555          1555
250    192   225   0.01    1548                1744                  1728          1810
250    201   228   0.005   1538                2326                  2259          2423
250    198   225   0.001   1546                7948                  7670          8603

Table 3: Experimental Results for Randomly-Produced Graphs (randomThreshold = 0.5)

Size   δ     Δ     p       Static Cover Time   Temporal Cover Time   Lower Bound   Upper Bound
10     3     6     0.9     35                  35                    35            35
10     3     7     0.5     33                  35                    34            38
10     5     8     0.2     28                  37                    33            41
10     4     8     0.1     34                  69                    60            100
10     3     8     0.05    32                  118                   96            226
10     3     7     0.01    33                  780                   486           1113
100    39    60    0.9     542                 542                   542           542
100    37    68    0.1     561                 571                   561           572
100    35    63    0.05    556                 589                   579           667
100    38    63    0.01    544                 1349                  1160          1714
100    35    61    0.005   549                 2436                  2085          3413
250    106   144   0.9     1589                1589                  1589          1589
250    105   145   0.025   1581                1646                  1623          1700
250    109   147   0.01    1579                2150                  2046          2372
250    105   150   0.005   1584                3324                  2998          3871

Table 4: Experimental Results for Randomly-Produced Graphs (randomThreshold = 0.15)

Size   δ     Δ     p       Static Cover Time   Temporal Cover Time   Lower Bound   Upper Bound
10     2     5     0.9     38                  38                    38            38
10     1     5     0.5     62                  70                    64            125
10     2     4     0.2     41                  88                    69            113
10     2     5     0.1     48                  176                   117           252
10     1     5     0.05    46                  361                   203           919
10     2     4     0.01    38                  1356                  959           1899
100    9     28    0.9     671                 671                   671           671
100    8     24    0.1     634                 740                   689           1113
100    11    25    0.05    616                 1033                  852           1428
100    9     24    0.01    694                 4152                  3240          8028
100    10    23    0.005   642                 7873                  5894          13127
250    25    57    0.9     1708                1708                  1708          1708
250    27    59    0.1     1700                1739                  1700          1803
250    23    54    0.01    1750                5167                  4179          8480
250    23    54    0.005   1736                9601                  7321          15944
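The experimental procedure described above can be sketched as follows. This is our own simplified reimplementation: the start node is chosen uniformly at random and every trial restarts from scratch, whereas the paper's experiments chain the last instance of one trial into the next; the function name and parameters are our own assumptions.

```python
import random

def simulate_rwa_cover_time(nodes, edges, p, trials=1000, seed=0):
    """Monte Carlo estimate of the RWA cover time in the zero-history
    model: at each step every possible edge is alive independently with
    probability p, then the walker crosses a uniformly random alive
    incident edge (staying put if none is alive)."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        v = rng.choice(nodes)                 # arbitrary start node
        covered = {v}
        steps = 0
        while len(covered) < len(nodes):
            alive = [e for e in edges if rng.random() < p]   # evolve
            nbrs = [u for (x, y) in alive for u in (x, y)
                    if v in (x, y) and u != v]
            if nbrs:                          # walk on what's available
                v = rng.choice(nbrs)
                covered.add(v)
            steps += 1                        # a stalled step still counts
        total += steps
    return total / trials
```

As a sanity check, with p = 1 the estimate converges to the static cover time, and it grows as p decreases, matching the trend visible in the tables.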
We defined the general Edge-Uniform Evolution model for a stochastically-evolving graph, where a single stochastic rule is applied, but to each edge independently, and provided lower and upper bounds for the cover time of two random walks taking place on such a graph (cases k = 0, 1). An interesting direction for future work is to bound the cover time of RWA in the Birth-Death model with one-step history. In this case, the problem becomes quite more complex than the k = 0 case. Depending on the values of p and q, the walk may be heavily biased, positively or negatively, toward possible edges incident to the walker's position which were used in the recent past.
We would like to acknowledge two anonymous reviewers for spotting technical errors in the previously attempted analysis of the one-step history RWA. Also, we acknowledge another anonymous reviewer, who suggested using Theorem 2 as an alternative to electrical network theory and some other useful modifications.
References

[1] R. Aleliunas, R. Karp, R. Lipton, L. Lovász, and C. Rackoff, Random walks, universal traversal sequences and the complexity of maze problems, In 20th IEEE Annual Symposium on Foundations of Computer Science, pp. 218-223, 1979.
[2] D. Aldous and J. A. Fill, Reversible Markov Chains and Random Walks on Graphs, Unfinished Monograph, 2002.
[3] C. Avin, M. Koucký, and Z. Lotker, How to Explore a Fast-Changing World (Cover Time of a Simple Random Walk on Evolving Graphs), In Proceedings of the 35th International Colloquium on Automata, Languages and Programming (ICALP '08), Springer, pp. 121-132, 2008.
[4] J. Bar-Ilan and D. Zernik, Random leaders and random spanning trees, WDAG '89, pp. 1-12, 1989.
[5] H. Baumann, P. Crescenzi, and P. Fraigniaud, Parsimonious flooding in dynamic graphs, In Proceedings of the 28th ACM Symposium on Principles of Distributed Computing (PODC '09), ACM, pp. 260-269, 2009.
[6] J. Bird, Electrical Circuit Theory and Technology, 5th Ed., Routledge, 2013.
[7] G. Brightwell and P. Winkler, Maximum hitting time for random walks on graphs, Random Structures and Algorithms, vol. 1, pp. 263-276, 1990.
[8] M. Bui, T. Bernard, D. Sohier, and A. Bui, Random Walks in Distributed Computing: A Survey, IICS 2004, LNCS 3473, pp. 1-14, Springer, 2006.
[9] A. K. Chandra, P. Raghavan, W. L. Ruzzo, and R. Smolensky, The electrical resistance of a graph captures its commute and cover times, In Proceedings of the Twenty-First Annual ACM Symposium on Theory of Computing (STOC '89), ACM, pp. 574-586, 1989.
[10] A. E. F. Clementi, C. Macci, A. Monti, F. Pasquale, and R. Silvestri, Flooding time in edge-Markovian dynamic graphs, PODC '08, ACM, pp. 213-222, 2008.
[11] A. Clementi, A. Monti, F. Pasquale, and R. Silvestri, Information Spreading in Stationary Markovian Evolving Graphs, IEEE Trans. Parallel Distrib. Syst., 22 (9), pp. 1425-1432, 2011.
[12] A. Clementi, A. Monti, F. Pasquale, and R. Silvestri, Communication in dynamic radio networks, PODC '07, ACM, pp. 205-214, 2007.
[13] J.-C. Delvenne, R. Lambiotte, and L. E. C. Rocha, Diffusion on networked systems is a question of time or structure, Nature Communications, 6, no. 7366, 2015.
[14] P. G. Doyle and J. L. Snell, Random Walks and Electric Networks, 2006.
[15] U. Feige, A tight upper bound on the cover time for random walks on graphs, Random Structures and Algorithms, vol. 6, pp. 51-54, 1995.
[16] D. Figueiredo, P. Nain, B. Ribeiro, E. de Souza e Silva, and D. Towsley, Characterizing Continuous Time Random Walks on Time Varying Graphs, Proceedings of the 12th ACM SIGMETRICS/PERFORMANCE Joint International Conference on Measurement and Modeling of Computer Systems, pp. 307-318, 2012.
[17] M. Habib, C. McDiarmid, J. Ramirez-Alfonsin, and B. Reed, Probabilistic Methods for Algorithmic Discrete Mathematics, Springer, 1998.
[18] T. Hoffmann, M. A. Porter, and R. Lambiotte, Random Walks on Stochastic Temporal Networks, Temporal Networks, Springer-Verlag, pp. 295-313, 2013.
[19] P. Holme and J. Saramäki, Temporal Networks, Phys. Rep., 519, pp. 97-125, 2012.
[20] O. Michail, An Introduction to Temporal Graphs: An Algorithmic Perspective, Internet Mathematics, 12 (4), pp. 239-280, 2016.
[21] J. R. Norris, Markov Chains, Cambridge University Press, 1998.
[22] L. E. C. Rocha and N. Masuda, Random walk centrality for temporal networks, New Journal of Physics, vol. 16, 2014.
[23] M. Starnini, A. Baronchelli, A. Barrat, and R. Pastor-Satorras, Random walks on temporal networks, Phys. Rev. E, 85, 056115, 2012.
[24] A. Wald, Sequential Analysis, John Wiley & Sons, New York, 1947.
[25] Y. Yamauchi, T. Izumi, and S. Kamei, Mobile Agent Rendezvous on a Probabilistic Edge Evolving Ring,