Adding edge dynamics to wireless random-access networks

Matteo Sfragara

February 10, 2021
Abstract
We consider random-access networks with nodes representing servers with queues. The nodes can be either active or inactive: a node deactivates at unit rate, while it activates at a rate that depends on its queue length, provided none of its neighbors is active. In order to model the effects of user mobility in wireless networks, we analyze dynamic interference graphs where the edges are allowed to appear and disappear over time. We focus on bipartite graphs, and study the transition time between the two states where one half of the network is active and the other half is inactive, in the limit as the queues become large. Depending on the speed of the dynamics, we are able to obtain a rough classification of the effects of the dynamics on the transition time.
Keywords:
Random-access networks, activation protocols, bipartite graphs, dynamic graphs, transition time.
MSC2010:
Acknowledgment:
The research in this paper was supported by the Netherlands Organisation for Scientific Research (NWO) through Gravitation-grant NETWORKS-024.002.003. The author thanks professors S. Borst (Eindhoven University of Technology), F. den Hollander (Leiden University) and F.R. Nardi (University of Florence) for the helpful suggestions and the useful discussions.

Mathematical Institute, Leiden University, The Netherlands

1 Introduction
The present paper is a continuation of [3] and [4]. We introduce an edge dynamics on the bipartite interference graph by allowing edges to appear and disappear over time. This represents a natural basic model to capture the effects of user mobility in wireless networks.

In Section 1.1 we motivate our interest in adding edge dynamics to random-access network models. A more extended motivation for studying (queue-based) random-access protocols is provided in [3, Section 1.1], where also relevant references to the literature are included. In Section 1.2 we describe the setting and the mathematical model of interest in this paper by specifying the edge dynamics. In Section 1.3, after a discussion of the results from [4] for arbitrary bipartite graphs, we state the main results on the mean transition time of the network under the effects of the dynamics. In Section 1.4 we anticipate the main idea behind our analysis and give an outline of the remainder of the paper.
User mobility is one of the major features in wireless networks. Different mobility patterns can be distinguished (pedestrian, vehicle, aerial, dynamic medium, robot, and outer space motion) and mathematical models can be developed in order to generalize such patterns and analyze their characteristics. Understanding the effects of user mobility in wireless networks is crucial in order to design efficient protocols and improve the performance of the network. For example, consider radio communication protocols, for which central radio stations are used as base-stations for transmitting radio signals. The radio landscape is partitioned into cells and in each cell a station serves the users in its vicinity. In such cellular networks the users may be either stationary or mobile. User mobility leads to problems of handover: when a user moves from one cell to another, the transmitting signal has to be handed over from one station to another in order to ensure continuity of service and seamless mobility. If not enough capacity is available in the adjacent cell, then the transmission might be interrupted. Imagine that a node transmitting to a particular station moves away from its cell and reaches a cell where another station serves for transmissions. Although initially the node interferes with a specific group of nodes sharing the same initial station, after the node has moved it interferes with the nodes in the new cell sharing the new station. In a similar fashion, imagine a network where nodes represent transmitter-receiver pairs. The signal of a node interferes with the signals of the nodes in its vicinity. Hence the protocol allows only one of the interfering nodes to transmit at a time. When allowing node mobility, we get new groups of nodes interfering with each other depending on their vicinity.
We are therefore dealing with a network whose interference graph changes over time. To the best of our knowledge, random-access models with user mobility in the context of interference graphs have so far not been considered in the literature. All the studies we are aware of that have examined the impact of user mobility in wireless networks are concerned with handover mechanisms (see [10], [12]), so-called opportunistic scheduling algorithms (see [5], [13]), capacity issues in ad hoc and cellular networks (see [6], [8]), and flow-level performance (see [2], [1], [7], [11]).

In this paper we investigate a dynamic version of the random-access protocols in order to try to capture some features of user mobility in wireless networks. A natural paradigm for constructing dynamic interference graphs would be to use geometric graphs, such as unit-disk graphs, with node mobility, where each node follows a random trajectory and experiences interference from all nodes within a certain distance. A feasible state of the interference graph would then be generated by a specific instance of the geometric graph. We follow a different approach and, with an explorative intention, we consider a model where edges are allowed to appear and disappear from the graph according to i.i.d. Poisson clocks placed on each edge. We identify how the mean transition time depends on the speed of the dynamics. Our approach is based on the intuition that a node in V can activate either when its neighbors are simultaneously inactive or when the edges connecting it with its neighbors disappear. Interpolation between these two situations gives rise to different scenarios and interesting behavior. The evolution of the network is captured by a continuous-time Markov process that keeps track of how the state, the queue length and the number of active neighbors change for each node.

We focus on queue-based activation rates, in line with the models and the results in [3] and [4].
This leads to two levels of complexity, driven by the queue dependence of the activation rates and by the edge dynamics. Due to the lack of literature on dynamic random-access protocols in general, in the appendix we briefly consider a simplified version of the model where the activation rates are fixed.

We consider the bipartite graph G = ((U, V), E), where U ∪ V is the set of nodes and E is the set of bonds that connect a node in U to a node in V, and vice versa (bonds are undirected). We recall some definitions and basic facts from [3] and [4].

Definition 1.1 (The model). (1) State of a node.
A node in the network can be either active or inactive. The state of node w at time t is described by a Bernoulli random variable X_w(t) ∈ {0, 1}, defined as

    X_w(t) = { 0, if w is inactive at time t,
               1, if w is active at time t.    (1.1)

The joint activity state at time t is denoted

    X(t) = {X_w(t)}_{w ∈ U ∪ V}    (1.2)

and it is an element of the state space

    𝒳 = { X ∈ {0,1}^{U ∪ V} : X_i X_j = 0 for all (i, j) ∈ E },    (1.3)

where X_i = 0 means that node i is inactive and X_i = 1 that it is active. We denote by 1_U (1_V) the configuration where all nodes in U are active (inactive) and all nodes in V are inactive (active).

(2) Activation and deactivation of a node. An active node w becomes inactive according to a deactivation Poisson clock: when the clock ticks the node deactivates. Conversely, an inactive node w attempts to become active according to an activation Poisson clock, but the attempt is successful only when no neighbours of w are active. The activation rate at node w at time t depends on the queue length at node w at time t. The deactivation rate is 1.

(3) Queue length at a node. Let t ↦ Q^+_w(t) be the input process describing packets arriving at node w according to a Poisson process t ↦ N_w(t) = Poisson(λt) and requiring i.i.d. exponential service times Y_{w,n}, n ∈ ℕ, with rate μ_U for w ∈ U and μ_V for w ∈ V. This is a compound Poisson process with mean ρ_U = λ/μ_U for w ∈ U and ρ_V = λ/μ_V for w ∈ V. Let t ↦ Q^−_w(t) be the output process representing the cumulative amount of work that is processed by the server at node w in the time interval [0, t] at rate c, which equals Q^−_w(t) = cT_w(t) = c ∫_0^t X_w(s) ds. In order to ensure that the queue tends to decrease when a node is active, we assume that ρ_U < c and ρ_V < c.
Define

    Δ_w(t) = Q^+_w(t) − Q^−_w(t) = Σ_{n=0}^{N_w(t)} Y_{w,n} − cT_w(t)    (1.4)

and let s* = s*(t) be the value where sup_{s ∈ [0,t]} [Δ_w(t) − Δ_w(s)] is reached, i.e., the supremum equals [Δ_w(t) − Δ_w(s*−)]. Let Q_w(t) ∈ ℝ_{≥0} denote the queue length at node w at time t. Then

    Q_w(t) = max{ Q_w(0) + Δ_w(t), Δ_w(t) − Δ_w(s*−) },    (1.5)

where Q_w(0) is the initial queue length. The maximum is achieved by the first term when Q_w(0) ≥ −Δ_w(s*−) (the queue length never sojourns at 0), and by the second term when Q_w(0) < −Δ_w(s*−) (the queue length sojourns at 0 at time s*−).

(4) Initial queue length. The initial queue length is assumed to be given by

    Q_w(0) = { γ_U r, w ∈ U,
               γ_V r, w ∈ V,    (1.6)

where γ_U ≥ γ_V > 0, and r is a parameter that tends to infinity.

(5) Assumptions on the activation rates. Let g_U, g_V ∈ 𝒢 with

    𝒢 = { g : ℝ_{≥0} → ℝ_{≥0} : g non-decreasing and continuous, g(0) = 0, lim_{x→∞} g(x) = ∞ }.    (1.7)

The deactivation clocks tick at rate 1, while the activation clocks tick at rate

    r_w(t) = { g_U(Q_w(t)), w ∈ U,
               g_V(Q_w(t)), w ∈ V,    t ≥ 0.    (1.8)

We focus on the particular choice

    g_U(x) = B x^β,  x ∈ [0, ∞),
    g_V(x) = B′ x^{β′},  x ∈ [0, ∞),    (1.9)

with B, B′, β, β′ ∈ (0, ∞). We assume that nodes in V are much more aggressive than nodes in U, namely,

    β′ > β + 1.    (1.10)

This ensures that the transition from 1_U to 1_V can be decomposed into a succession of transitions on complete bipartite subgraphs.

(6) Transition time. Let Q_U = {Q_{U,i}}_{i=1}^{|U|} be the sequence of queues associated with the nodes in U, and Q_V = {Q_{V,j}}_{j=1}^{|V|} the sequence of queues associated with the nodes in V. We denote by T^Q_G the transition time of the graph G conditional on the initial queue lengths Q = (Q_U, Q_V) and we define it as

    T^Q_G = min{ t ≥ 0 : X(t) = 1_V } given X(0) = 1_U.    (1.11)

It represents the time it takes the system to hit configuration 1_V starting from configuration 1_U.

(7) Forks and nucleation times. Given a node v ∈ V, we refer to the fork of v as the complete bipartite subgraph of G containing only node v, its neighbours in U and the edges between them. The time it takes the fork of v to deactivate its nodes in U and activate v is called the nucleation time of the fork of v. We denote this time by T^Q_v, where v represents the activating node and Q represents the state of the queues.

Note that every time we consider the transition time we are conditioning on the initial queue lengths, while every time we consider the nucleation times we are conditioning on the state of the queues and on the activating forks.
Hence, all the expectations should be interpreted as conditional expectations.

We are interested in analyzing the behavior of the network when we allow the interference graph to change over time. Next we introduce a dynamic version of the model, which we are going to study in this paper.

Definition 1.2 (Dynamic interference graphs).
We say that the interference graph is dynamic when the edges appear and disappear according to a continuous-time flip process. Consider the dynamic bipartite interference graph G(·) = (U ⊔ V, E(·)), where U ⊔ V is the set of nodes, with |U| = M and |V| = N, and E(t) is the set of edges that are present between nodes in U and nodes in V at time t. The number of edges |E(·)| changes over time and can vary from a minimum of 0 to a maximum of MN. We set G(0) = G, where G is the initial bipartite graph. We denote by G_MN = (U ⊔ V, E_MN) the complete bipartite graph associated to (U, V) and, for every edge e ∈ E_MN, at time t we define the Bernoulli random variable Y_e(t) as

    Y_e(t) = { 0, if e ∉ E(t),
               1, if e ∈ E(t).    (1.12)

In other words, Y_e(t) = 0 if edge e is not present in the graph at time t, while Y_e(t) = 1 if it is present. The joint edge activity state at time t is denoted by

    Y(t) = {Y_e(t)}_{e ∈ E_MN}    (1.13)

and is an element of the state space

    𝒴 = { Y ∈ {0,1}^{U × V} }.    (1.14)

The degree of node v at time t is denoted by d_v(t).

We model the dynamics of the graph in the following way. If an edge is not present, then it appears according to a Poisson clock with rate λ, independently of the other edges. If an edge is present, then it disappears according to a Poisson clock with rate λ, independently of the other edges. This is equivalent to having a system of i.i.d. Poisson clocks with rate λ on the edges and letting an edge change its state every time its clock ticks. In order to study how the edge dynamics affects the transition time, we consider Poisson clocks with rates λ = λ(r) depending on the parameter r.

Throughout the paper we use the notation ≺, ≻ to describe the asymptotic behavior in the limit r → ∞. More precisely, f(r) ≺ g(r) means that f(r) = o(g(r)) as r → ∞, and f(r) ≻ g(r) means that g(r) = o(f(r)) as r → ∞.
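The single-edge flip process just described is easy to simulate. The following minimal sketch (pure Python; the function name and parameters are illustrative, with `rate` standing in for λ(r)) flips one edge at the ticks of its Poisson clock and estimates the long-run fraction of time the edge is present, which should be close to 1/2 since appearance and disappearance occur at the same rate:

```python
import random

def simulate_edge(rate, horizon, present0=False, seed=0):
    """Flip a single edge at the ticks of a Poisson clock with the given
    rate, starting from state `present0`, over the time window [0, horizon].
    Returns the fraction of time the edge was present."""
    rng = random.Random(seed)
    t, present, time_present = 0.0, present0, 0.0
    while True:
        dt = rng.expovariate(rate)      # time until the next clock tick
        if t + dt >= horizon:
            if present:
                time_present += horizon - t
            return time_present / horizon
        if present:
            time_present += dt
        t += dt
        present = not present           # the edge changes state at every tick

frac = simulate_edge(rate=1.0, horizon=2000.0)
assert 0.4 < frac < 0.6  # in stationarity the edge is present about half the time
```

Over a long horizon the empirical fraction concentrates around 1/2, consistent with the symmetric appearance/disappearance rates of the model.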
Remark 1.3 (Rates on the edges).
We may allow different rates for the edges to change their state. Denote by λ_+(r) and λ_−(r) the rates at which edges appear and disappear, respectively. If these are of the same order, then we are in a situation similar to them being equal to λ(r). If λ_+(r) → ∞ and λ_−(r) ≺ λ_+(r), then, with high probability as r → ∞, in time o(1) the dynamics turns the initial graph into the complete bipartite graph with all the edges present. Analogously, if λ_−(r) → ∞ and λ_−(r) ≻ λ_+(r), then, with high probability as r → ∞, in time o(1) the dynamics turns the initial graph into the empty graph with all the edges absent. Neither of these assumptions leads to an interesting model. When λ_+(r) and λ_−(r) are of different order and do not tend to infinity, we have an intermediate situation where at any time t an edge is either present with high probability as r → ∞ or absent with high probability as r → ∞, but the total amount of time the edge has been absent or present, respectively, up to time t is not always negligible.

Remark 1.4 (Appearing edge).
When an edge disappears from the graph, the states of the nodes do not change. On the other hand, when an edge appears in the graph, it might appear between two active nodes. In this case, we assume that the active node in U deactivates, since the model does not allow two connected nodes to be simultaneously active. We could study alternative models, where the active node in V deactivates or where the deactivating node is chosen uniformly at random (or with certain probabilities). These alternative models slow down the transition and lead to a possible multiple counting of the time it takes for some nodes in V to activate.

In [4] we analyzed the mean transition time and its law on the scale of its mean for arbitrary bipartite interference graphs. The key factor is the introduction of a randomized algorithm that takes as input the graph and gives as output the possible orders of activation of the nodes in V (admissible paths), together with their probabilities. At every step k = 1, ..., |V|, each of the n_k nodes in V_k of minimum degree d̄_k activates with probability 1/n_k, where G_k = (U_k, V_k) is the induced subgraph of G in which the first k − 1 activated nodes of V and their neighbors are removed. The set of all the admissible paths is denoted by 𝒜. For more details about the algorithm we refer to [4, Section 2].

Theorem 1.5 (Mean transition time for arbitrary bipartite graphs [4, Theorem 3.3]).
Consider the bipartite graph G = ((U, V), E) with initial queue lengths Q. Suppose that (1.9)–(1.10) hold. Let A_a be the event that the network follows the admissible path a ∈ 𝒜.

(I) β ∈ (0, 1/(d*−1)): subcritical regime. The transition time satisfies

    E_r[T^Q_G | A_a] = Σ_{1≤k≤N: d̄_k=d*} f_k (γ_U^{β(d*−1)} / (d* B^{−(d*−1)})) r^{β(d*−1)} [1 + o(1)],  r → ∞,    (1.15)

with f_k = 1/n_k.

(II) β = 1/(d*−1): critical regime. The transition time satisfies

    E_r[T^Q_G | A_a] = Σ_{1≤k≤N: d̄_k=d*} f_k (γ_U^{(k)} / (d* B^{−(d*−1)} + (c − ρ_U))) r [1 + o(1)],  r → ∞,    (1.16)

with

    f_k = (d̄_k B^{−(d̄_k−1)} + (c − ρ_U)) / (n_k [d̄_k B^{−(d̄_k−1)} + (c − ρ_U)])    (1.17)

and

    γ_U^{(k)} = γ_U − (c − ρ_U) Σ_{1≤i≤k−1: d̄_i=d*} f′_i,    (1.18)

where for a critical node v_i the coefficient f′_i is defined in a recursive way as

    f′_i = (1 / (n_i [d̄_i B^{−(d̄_i−1)} + (c − ρ_U)])) (γ_U − (c − ρ_U) Σ_{1≤j≤i−1: d̄_j=d*} f′_j) > 0,    (1.19)

    f′_1 = { γ_U / (n_1 d̄_1 B^{−(d̄_1−1)}), if d̄_1 < d*,
             γ_U / (n_1 [d̄_1 B^{−(d̄_1−1)} + (c − ρ_U)]), if d̄_1 = d*.    (1.20)

(III) β ∈ (1/(d*−1), ∞): supercritical regime. The transition time satisfies

    E_r[T^Q_G] = (γ_U / (c − ρ_U)) r [1 + o(1)],  r → ∞.    (1.21)

Theorem 1.5 shows that, depending on the value of β, the transition exhibits a subcritical regime, a critical regime and a supercritical regime. Given a graph, the algorithm uniquely identifies the value

    d* = max_{1≤k≤N} d̄_k,    (1.22)

which determines the leading order of the mean transition time and, together with β, identifies the regime we are in. In the subcritical regime, we are able to compute the mean transition time along each admissible path. By averaging over all the admissible paths, we can then recover the mean transition time of the network and its law.
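The quantity d* in (1.22) can be computed by running the min-degree removal procedure underlying the algorithm of [4] on the initial graph. A sketch (pure Python, illustrative names; ties between minimum-degree nodes are broken deterministically here, so this follows a single admissible path rather than sampling one at random):

```python
def degree_sequence(U, V, edges):
    """One deterministic run of the min-degree removal procedure:
    returns the list (dbar_1, ..., dbar_N) and d* = max_k dbar_k.
    `edges` is a set of (u, v) pairs with u in U and v in V."""
    U, V, edges = set(U), set(V), set(edges)
    dbar = []
    while V:
        deg = {v: sum(1 for (u, w) in edges if w == v and u in U) for v in V}
        v = min(V, key=lambda w: (deg[w], w))   # first node of minimum degree
        dbar.append(deg[v])
        nbrs = {u for (u, w) in edges if w == v and u in U}
        U -= nbrs                               # remove v and its neighbors in U
        V.discard(v)
        edges = {(u, w) for (u, w) in edges if u in U and w in V}
    return dbar, max(dbar)

# complete bipartite graph K_{2,2}: the first node activates with degree 2,
# after which the second node is isolated, so d* = 2
dbar, dstar = degree_sequence({"u1", "u2"}, {"v1", "v2"},
                              {(u, v) for u in ("u1", "u2") for v in ("v1", "v2")})
assert dbar == [2, 0] and dstar == 2
```

Sampling uniformly among the n_k minimizers at every step, instead of picking the first one, would reproduce the randomized algorithm and its admissible paths.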
In the same way, also in the critical regime we can compute the mean transition time along each admissible path and then recover the mean transition time of the network. In the supercritical regime, the set of possible paths does not play a role, since the transition corresponds to nucleating a supercritical fork, hence the mean transition time is the time it takes on average for the queue lengths at nodes in U to hit zero.

We were also able to identify the law of the transition time divided by its mean (in the subcritical and supercritical regimes). The latter is beyond the scope of the present paper, since understanding the effect of the edge dynamics is rather challenging. Our goal is to extend the results of Theorem 1.5 to dynamic bipartite graphs. We distinguish between different types of dynamics and we see how they affect the mean transition time. Below, we state our main result on the mean transition time of networks with a dynamic bipartite interference graph. We denote by T^Q_{G(·)} the transition time of the dynamic graph G(·) conditional on the initial queue lengths Q.

Theorem 1.6 (Mean transition time for dynamic bipartite graphs).
Consider the dynamic bipartite graph G(·) = ((U, V), E(·)) with the edge dynamics governed by λ(r) and initial queue lengths Q.

(FD) If λ(r) → ∞, then the dynamics is fast and, with high probability as r → ∞, the transition time satisfies

    E_u[T^Q_{G(·)}] ≍ λ(r)^{−1} = o(1),  r → ∞.    (1.23)

(RD) If λ(r) = C ∈ (0, ∞), then the dynamics is regular and, with high probability as r → ∞, the transition time satisfies

    E_u[T^Q_{G(·)}] ≍ λ(r)^{−1} = O(1),  r → ∞.    (1.24)

(SD) If λ(r) → 0, then the dynamics is slow and the following cases occur.

(SDc) If λ(r) ≽ r^{−(1∧β(d*−1))}, then the dynamics is competitive and, with high probability as r → ∞, the transition time satisfies

    E_u[T^Q_{G(·)}] ≍ λ(r)^{−1},  r → ∞.    (1.25)

More precisely, let λ(r) = r^{−α} with 0 < α ≤ 1 ∧ β(d*−1), and let T_U(r) be the average time it takes for the queue lengths at nodes in U to hit zero.

(I) β ∈ (0, 1/(d*−1)): subcritical regime. With high probability as r → ∞,

    E_u[T^Q_{G(·)}] ≍ r^α [1 + o(1)],  r → ∞.    (1.26)

(II) β = 1/(d*−1): critical regime. With high probability as r → ∞,

    E_u[T^Q_{G(·)}] ≍ r^α [1 + o(1)],  r → ∞.    (1.27)

In particular, when α = 1, with positive probability,

    E_u[T^Q_{G(·)}] = T_U(r) [1 + o(1)],  r → ∞.    (1.28)

(III) β ∈ (1/(d*−1), ∞): supercritical regime. When 0 < α ≤ 1, with high probability as r → ∞,

    E_u[T^Q_{G(·)}] ≍ r^α [1 + o(1)],  r → ∞.    (1.29)

In particular, when α = 1, with positive probability,

    E_u[T^Q_{G(·)}] = T_U(r) [1 + o(1)],  r → ∞.    (1.30)

When α > 1, with high probability as r → ∞,

    E_u[T^Q_{G(·)}] = T_U(r) [1 + o(1)],  r → ∞.    (1.31)

(SDnc) If λ(r) ≺ r^{−(1∧β(d*−1))}, then the dynamics is non-competitive and, with high probability as r → ∞, the transition time satisfies Theorem 1.5.
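Since λ(r) = r^{−α} in the slow window, the case distinction in Theorem 1.6 reduces to comparing α with the threshold exponent 1 ∧ β(d*−1). A small illustrative helper (hypothetical name, not from [4]) that returns the label of the regime:

```python
def dynamics_regime(alpha, beta, d_star):
    """Classify the edge dynamics lambda(r) = r^(-alpha) as in Theorem 1.6.
    alpha < 0 gives lambda(r) -> infinity (fast), alpha = 0 a constant rate
    (regular); for alpha > 0 the threshold exponent is 1 ∧ beta(d*-1)."""
    if alpha < 0:
        return "fast"        # (FD): transition time of order lambda(r)^-1 = o(1)
    if alpha == 0:
        return "regular"     # (RD): transition time of order O(1)
    threshold = min(1.0, beta * (d_star - 1))
    return "slow-competitive" if alpha <= threshold else "slow-non-competitive"

assert dynamics_regime(-1.0, 0.5, 3) == "fast"
assert dynamics_regime(0.0, 0.5, 3) == "regular"
# beta(d*-1) = 1.0, so alpha = 0.8 is competitive and alpha = 1.2 is not
assert dynamics_regime(0.8, 0.5, 3) == "slow-competitive"
assert dynamics_regime(1.2, 0.5, 3) == "slow-non-competitive"
```

In the non-competitive case the classifier simply signals that Theorem 1.5 applies with the edges frozen at the initial configuration.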
Note that the order of the mean transition time depends on the speed of the dynamics. When the dynamics is fast (FD), the edges quickly appear and disappear, reaching in time o(1) the state where nodes in V have no edges connecting them to U. Since nodes in V are aggressive, they eventually activate in time o(1). When the dynamics is regular (RD), the situation is similar, but it takes time O(1) to reach the state where all the edges are simultaneously absent. When the dynamics is slow (SD), a node in V can also activate through the nucleation of its fork (recall Definition 1.1(7)). In the case of competitive dynamics (SDc), the relation between the speed of the dynamics and the aggressiveness of the nodes in U plays a key role, while in the case of non-competitive dynamics (SDnc), the network behaves as if the edges were fixed at the initial configuration and there were no dynamics. Note that, in the cases of fast, regular and competitive dynamics, the order of the mean transition time is given by the reciprocal of the rate λ(r).

The intuition behind the results is that a node in V can activate for two reasons. It can activate when its neighbors are simultaneously inactive or when there are no edges connecting it to nodes in U. Interpolation between these two situations gives rise to different cases, which mainly depend on the speed of the dynamics. In the case of competitive dynamics, we are able to distinguish between different behaviors for the mean transition time by analyzing the subcritical, critical and supercritical regimes separately. To summarize, with high probability as r → ∞, the order of activation of nodes in V follows one of the paths generated by the algorithm until the edge dynamics of rate λ(r) becomes competitive.
The competition begins on time scale λ(r)^{−1}, the time scale on which all the remaining nodes in V, if there are any, activate and the transition occurs.

In order to give precise asymptotics, including the pre-factor of the mean transition time, we must analyze a more complicated Markov process describing how the states of the nodes, the queue lengths and the states of the edges change over time. This is beyond the scope of the present paper, but in Section 4 we give an overview of the main challenges.

Outline.
The remainder of the paper is organized as follows. In Section 2 we discuss the main effects of the dynamics on the mean transition time and we explain how it can slow down or speed up the activation of each node in V. In Section 3 we prove Theorem 1.6 by discussing the different types of dynamics separately. In Section 4 we describe the graph evolution and discuss what needs to be considered in order to compute the pre-factor of the mean transition time. In Appendix A we consider a model where the activation rates are fixed and not queue-dependent. We adapt results from previous works in order to study how the dynamics affects the transition time.

2 Effects of the dynamics

In this section we analyze the effects that different types of dynamics have on the mean transition time of the network.

2.1 Disconnection time
Recall that the nucleation time T^Q_v of the fork of a node v ∈ V given the state of the queues Q is the time it takes for its neighbors to become simultaneously inactive, so that v can activate as soon as its clock ticks. Due to the dynamics, a node v ∈ V does not necessarily activate through the nucleation of its fork, but it can also activate if at some point there are no edges connecting it to nodes in U. The dynamics, indeed, might sometimes bring the graph to a configuration where the degree of v is temporarily 0, so that v can activate as soon as its clock ticks, in time o(1).

Definition 2.1 (Disconnection time).
Given v ∈ V, we call disconnection time of v the time it takes for v to be disconnected from U, i.e., to have all possible edges connecting it to U simultaneously absent. We denote the disconnection time of v by D^Q_v, where Q indicates the conditioning on the initial queue lengths.

As introduced in Section 1.2, the dynamics affects the network by allowing the edges to appear and disappear according to Poisson clocks with rate λ(r). The alternation between the states of each edge e ∈ E_MN is described by an exponential random variable S_e ∼ Exp(λ(r)) with mean μ(r) = λ(r)^{−1}. Note that, with high probability as r → ∞, S_e takes values of the order of its mean, i.e., S_e ≍ μ(r). Indeed, if we pick x ≺ μ(r), then

    lim_{r→∞} P(S_e ≤ x) = lim_{r→∞} 1 − e^{−λ(r)x} = 0,    (2.1)

and the same holds for x ≻ μ(r). In other words, if an edge is absent at time t, then, with high probability as r → ∞, it will take an amount of time of order μ(r) for the Poisson clock to tick and for the edge to become present. Vice versa, if an edge is present at time t, then it will take an amount of time of order μ(r) for the edge to become absent.

The arbitrary bipartite initial configuration of the graph plays an important role in understanding the transition time. Consider a node v ∈ V of initial degree d_v(0) = d > 0. Since |U| = M, there are M possible edges connecting v to U. We construct a continuous-time Markov chain M where each state k represents the set of configurations of the M edges in which k edges are present and M − k edges are absent. State 0 corresponds to all edges being absent, state 1 corresponds to the M possible configurations with exactly one edge present, and so on (see Figure 1 below).

Figure 1: The Markov chain M describing how the edge dynamics changes the degree of a node in V: states 0, 1, 2, ..., M − 1, M, with downward rates kλ and upward rates (M − k)λ. It is a birth-death process with M transient states and one absorbing state.

We consider state 0 as an absorbing state, since we are interested in computing the hitting times to state 0 starting from any other state. From state M we can only jump to state M − 1, which happens when one of the M present edges disappears, at rate Mλ. From each state 0 < k < M we jump to the neighboring states also at total rate Mλ. Indeed, as soon as the clock of one of the M possible edges ticks, we jump to state k + 1 if the edge was absent and becomes present, while we jump to state k − 1 if the edge was present and disappears. Hence we jump from state k to state k + 1 with probability (M − k)/M, while we jump from state k to state k − 1 with probability k/M.

The transition rate matrix H of the Markov chain M, with rows and columns indexed by the states 0, 1, ..., M, is given by

    H = ( 0        0        0      ···    0       0  )
        ( λ       −Mλ    (M−1)λ    ···    0       0  )
        ( 0       2λ      −Mλ      ···    0       0  )
        ( ⋮        ⋮        ⋮       ⋱     ⋮       ⋮  )
        ( 0        0        0      ···  −Mλ       λ  )
        ( 0        0        0      ···   Mλ     −Mλ )    (2.2)

and can be written as

    H = ( 0    0 )
        ( s_0  S ),    (2.3)

where S is an M × M matrix and s_0 = −S 1_M, with 1_M the M-dimensional column vector with every element equal to 1. Let

    (a_0, a) = (a_0, a_1, ..., a_M)    (2.4)

be the (M + 1)-dimensional row vector describing the probability of starting in each of the M + 1 states. Since d_v(0) = d, we have that the d-th entry of a equals 1 and all the other entries equal 0. Computing the disconnection time of a node with initial degree d is equivalent to computing the hitting time of the Markov chain M to state 0 starting from state d.

Lemma 2.2 (Mean and law of the disconnection time).
Consider a node v ∈ V of initial degree d_v(0) = d > 0, and let the edge dynamics be such that E_u[S_e] = μ(r) for each e ∈ E_MN.

(i) The disconnection time D^Q_v satisfies

    E_u[D^Q_v] = C_d μ(r) [1 + o(1)],  r → ∞,    (2.5)

where (C_1 μ(r), ..., C_M μ(r)) is the solution of the linear system of equations

    x_1 = (1/M)(μ(r)/M) + ((M−1)/M)(μ(r)/M + x_2)
    x_2 = (2/M)(μ(r)/M + x_1) + ((M−2)/M)(μ(r)/M + x_3)
    ⋯
    x_{M−1} = ((M−1)/M)(μ(r)/M + x_{M−2}) + (1/M)(μ(r)/M + x_M)
    x_M = μ(r)/M + x_{M−1}.    (2.6)

(ii) The law of the disconnection time D^Q_v follows a phase-type distribution PH(a, S) and is given by

    lim_{r→∞} P_u(D^Q_v > x) = a exp(Sx) 1_M,  x ∈ (0, ∞),    (2.7)

where a and S are as in (2.4) and (2.3), respectively. In particular, the above probability equals the sum of the entries in the d-th row of the matrix exp(Sx).

Proof. We prove the two statements separately.

(i) Consider the Markov chain M described above. We know that from each state k > 0 we jump to a neighboring state at rate Mλ. The jump occurs exactly when the first of the M possible edges changes its state. This corresponds to the minimum of M i.i.d. exponential random variables, which is known to follow an exponential distribution with mean μ(r)/M. If v has initial degree d, then we start from state d. We denote by x_k the mean hitting time of state 0 starting from state k. The above system of equations allows us to compute the mean disconnection time of v. Since the system of equations is linear in μ(r) and in the variables x_k, its solution is linear in μ(r). Hence the mean disconnection time is of order μ(r).

(ii) The disconnection time of a node v ∈ V of initial degree d > 0 is the hitting time of state 0 of the Markov chain M starting from state d.
The distribution of the hitting time to the unique absorbing state, starting from any of the other finite transient states, is said to be phase-type and is denoted by PH(a, S), with a and S as in (2.4) and (2.3), respectively. The distribution function of D^Q_v is given by

    lim_{r→∞} P_u(D^Q_v ≤ x) = ∫_0^x P(y) dy = 1 − a exp(Sx) 1_M,  x ∈ (0, ∞),    (2.8)

where exp(·) indicates the matrix exponential, and

    P(z) = a exp(Sz) s_0,  z ∈ (0, ∞),    (2.9)

with s_0 as in (2.3). Since the vector a has its d-th entry equal to 1 and all the other entries equal to 0, we have that the product a exp(Sx) 1_M equals the sum of the entries in the d-th row of the matrix exp(Sx).

Note that the results of Lemma 2.2 hold even without letting r → ∞.

Without loss of generality, we may consider interference graphs with no isolated nodes in V, since after time o(1) we would be in such a scenario anyway.

Lemma 2.3 (Isolated nodes).
Nodes in V with initial degree 0 activate in time o(1) as r → ∞.

Proof. Consider the situation where λ(r) ≺ g_V(Q_v(0)), i.e., the dynamics is slower than the rate at which the activation clocks of nodes in V tick. Then a node v ∈ V with initial degree 0 activates as soon as its clock ticks, hence in time o(1). Next, consider the situation where the dynamics is very fast, λ(r) ≻ g_V(Q_v(0)). Then a node v ∈ V with initial degree 0 might be blocked by some active neighbors in U by the time its activation clock ticks for the first time. Recall that |U| = M and note that there are 2^M possible configurations of edges connecting v to U. Each time the activation clock of v ticks, the probability of being in each of the possible configurations tends to the uniform probability 1/2^M as r → ∞. Therefore, after a finite number of attempts, v eventually activates. Since each tick of the activation clock of v takes time o(1), v activates in time o(1). Lastly, consider the situation where λ(r) ≍ g_V(Q_v(0)). If the activation clock of a node v ∈ V with initial degree 0 ticks before any of its potential edges appear, then v activates in time o(1). Otherwise, each subsequent activation attempt will not be successful unless the edge configuration is such that v has no neighbors. In other words, v can activate only when the Markov chain describing how its degree changes over time is in state 0. In this case, v activates with a probability that at time t is given by g_V(Q_v(t))/(g_V(Q_v(t)) + Mλ(r)) > 0 as r → ∞. Since λ(r)^{−1} = o(1), by using similar arguments as in the proof of Lemma 2.2, the time it takes for the Markov chain to return to state 0 when starting from state 0 is o(1). Hence, v has the chance to activate with positive probability every period of time o(1). Therefore, after a finite number of attempts, v eventually activates in time o(1).

We call the activation time of v ∈ V the time it takes for v to activate.
Depending on the dynamics, this can be given either by its nucleation time T_v^Q or by its disconnection time D_v^Q. When the dynamics is fast enough, nodes in V eventually activate because their clocks tick while no edges connect them to nodes in U. On the other hand, when the dynamics is particularly slow, nodes in V are more likely to activate through the nucleation of their forks, and the network tends to behave as if the edges were frozen in the initial configuration. In between these two scenarios the dynamics is more interesting and, depending on its speed, we distinguish between different behaviors. Proposition 2.4 below describes the competition between the nucleation and the dynamics.

Proposition 2.4 (Nucleation vs. dynamics).
Let v ∈ V be the node of minimum degree at time t = 0, with d_v(0) = d > 0.

(i) If λ(r) ≻ r^{−(1 ∧ β(d−1))}, then, with high probability as r → ∞, the activation time of v is given by its disconnection time, i.e.,

lim_{r→∞} P_u(D_v^Q < T_v^Q) = 1. (2.10)

(ii) If λ(r) ≍ r^{−(1 ∧ β(d−1))}, then the activation time of v is given either by its nucleation time with positive probability or by its disconnection time with positive probability.

(iii) If λ(r) ≺ r^{−(1 ∧ β(d−1))}, then, with high probability as r → ∞, the activation time of v is given by its nucleation time, i.e.,

lim_{r→∞} P_u(T_v^Q < D_v^Q) = 1. (2.11)

Proof. Recall that μ(r) = λ(r)^{−1} and that the disconnection time D_v^Q is given by a phase-type random variable with mean of order μ(r). Since phase-type random variables are constructed as convolutions of exponential random variables, we have that, with high probability as r → ∞, D_v^Q takes values of order μ(r). Recall also that, depending on the relation between β and d, the nucleation time T_v^Q is given by an exponential random variable with mean of order r^{β(d−1)}, by a polynomial random variable with mean of order r, or by T_U(r), which is the average time it takes for the queue lengths at nodes in U to hit zero. Hence, with high probability as r → ∞, T_v^Q takes values of order r^{1 ∧ β(d−1)}. It is therefore immediate to distinguish between the three cases.

(i) Since μ(r) ≺ r^{1 ∧ β(d−1)}, with high probability as r → ∞, v activates due to absence of edges.

(ii) Since μ(r) ≍ r^{1 ∧ β(d−1)}, there is a competition between the nucleation time T_v^Q and the phase-type random variable D_v^Q. Depending on their parameters, each of them can occur before the other with positive probability.

(iii) Since μ(r) ≻ r^{1 ∧ β(d−1)}, with high probability as r → ∞, v activates through the nucleation of its fork.

In this section we prove Theorem 1.6 by analyzing the different types of dynamics separately.
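The trichotomy in Proposition 2.4 can be mimicked numerically. In the sketch below (all parameter values are hypothetical), the disconnection time D is represented by a simple phase-type variable, here a sum of two exponentials with total mean μ, and the nucleation time T by an exponential with mean m; the estimate of P(D < T) tends to 1 when μ ≺ m, to 0 when μ ≻ m, and stays strictly between 0 and 1 when μ ≍ m.

```python
import random

def sample_phase_type(stage_rates, rng):
    """Sum of independent exponentials: a simple phase-type sample,
    standing in for the disconnection time D."""
    return sum(rng.expovariate(rate) for rate in stage_rates)

def p_disconnection_first(mu, m, n=20000, seed=1):
    """Monte Carlo estimate of P(D < T) with E[D] = mu and E[T] = m."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        D = sample_phase_type([2.0 / mu, 2.0 / mu], rng)  # two stages, mean mu
        T = rng.expovariate(1.0 / m)                      # exponential, mean m
        hits += D < T
    return hits / n
```

For instance, p_disconnection_first(0.01, 1.0) is close to 1 (case (i)), p_disconnection_first(100.0, 1.0) is close to 0 (case (iii)), and p_disconnection_first(1.0, 1.0) lies strictly between 0 and 1 (case (ii)).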
Consider the fast dynamics (FD) where λ(r) → ∞ as r → ∞.

Proof of Theorem 1.6 (FD). With high probability as r → ∞, for each edge the random intervals between clock ticks are of order λ(r)^{−1} = o(1). By Lemma 2.2, the mean disconnection time of a node in V is of order λ(r)^{−1}. Moreover, by Proposition 2.4, with high probability as r → ∞, each node activates due to absence of edges and not through the nucleation of its fork, and hence it activates in a time of order λ(r)^{−1}. In conclusion, with high probability as r → ∞, the transition time of G(·) with initial queue lengths Q satisfies

E_u[T_{G(·)}^Q] ≍ λ(r)^{−1} = o(1), r → ∞, (3.1)

hence the claim is settled.

Consider the regular dynamics (RD) where λ(r) = C ∈ (0, ∞).

Proof of Theorem 1.6 (RD). With high probability as r → ∞, for each edge the random intervals between clock ticks are of order λ(r)^{−1} = O(1). By Lemma 2.2, the mean disconnection time of a node in V is of order λ(r)^{−1}. Note that nodes in V of initial degree 1 can activate either because their only neighbor deactivates in time O(1) or due to absence of edges, with a mean disconnection time of order λ(r)^{−1}. Moreover, by Proposition 2.4, with high probability as r → ∞, nodes in V of initial degree greater than 1 activate due to absence of edges in a time of order λ(r)^{−1}. In conclusion, with high probability as r → ∞, the transition time of G(·) with initial queue lengths Q satisfies

E_u[T_{G(·)}^Q] ≍ λ(r)^{−1} = O(1), r → ∞, (3.2)

hence the claim is settled.

Consider the slow dynamics where λ(r) → 0 as r → ∞ with λ(r) ≺ r^{−(1 ∧ β(d*−1))}, called the non-competitive dynamics (SDnc). In this case the dynamics is so slow that it has no effect on the transition.

Proof of Theorem 1.6 (SDnc). The mean disconnection time of any node in V is of order larger than r^{1 ∧ β(d*−1)}. Hence each node in V activates through the nucleation of its fork, which is at most of order r^{1 ∧ β(d*−1)}.
The dynamics is very slow, almost frozen, and so it does not affect the nucleation of the forks. Hence, with high probability as r → ∞, the transition time of G(·) with initial queue lengths Q satisfies Theorem 1.5 and the network behaves as if there were no dynamics. Hence the claim is settled.

Consider the slow dynamics (SD) where λ(r) → 0 as r → ∞ with λ(r) = r^{−α} and 0 < α ≤ 1 ∧ β(d*−1), called the competitive dynamics (SDc). In this case the activation of nodes in V can occur both because of the absence of their edges and because of the nucleation of their forks.

Proof of Theorem 1.6 (SDc). Denote by d̂ the largest integer such that β(d̂−1) < α. Recall that A denotes the set of admissible paths and fix a path a ∈ A. Consider the sequence of activating nodes along the path a up to the step in which the degree is larger than d̂. Say that at step k we have d̄_k > d̂. Consider only the first k−1 steps, and denote by A_a(α) the event that the network follows the path a ∈ A until time scale r^α. On time scale r^α the dynamics starts competing with the nucleation, and the order of activation of the remaining nodes described by the algorithm is not preserved anymore. In other words, the order of activation of nodes in V follows the order of activation of the path a only for the first k−1 steps, whose nucleation times are of order at most r^{1 ∧ β(d̂−1)}. Hence, by Proposition 2.4, with high probability as r → ∞, the activation time of these nodes is given by their nucleation time. We apply Proposition 2.4 to each iteration of the graph, each time by considering a node with minimum degree d̄_j for j = 1, …, k−1, whose nucleation time is of order smaller than r^α. We treat the subcritical, critical and supercritical regimes separately.

(I) β ∈ (0, 1/(d*−1)): subcritical regime. We have 0 < α ≤ β(d*−1) <
1. The activation time of the next activating node is of order r^α. It cannot be of smaller order since at step k we have d̄_k > d̂ by construction. It cannot be of higher order either since the disconnection time of any of the remaining nodes is of order r^α. After this activation, there might be nodes whose degree has decreased and whose nucleation time is of smaller order. When we sum the mean activation times of the nodes in V to compute the mean transition time, we see that these nodes will not contribute significantly as r → ∞. All the remaining nodes are likely to activate in any possible order, but none of them will have an activation time of order larger than r^α. To know how many nodes contribute to the transition time with an activation time of order r^α, we need to have more control on how the degrees of the nodes evolve over time. To conclude, the order of activation of nodes in V follows the path a as long as the nucleation times associated with the nodes are of order smaller than r^α. After that, the remaining nodes can activate with positive probability in any order, with an activation time of order at most r^α. Hence, the transition time conditional on the event A_a(α) satisfies

E_u[T_{G(·)}^Q | A_a(α)] ≍ r^α [1 + o(1)], r → ∞, (3.3)

and we get

E_u[T_{G(·)}^Q] ≍ r^α [1 + o(1)], r → ∞. (3.4)

(II) β = 1/(d*−1): critical regime. For 0 < α <
1, the situation is the same as in the subcritical regime described above. For α = 1, the activation time of the next activating node is of order r. After this activation, all the remaining nodes are likely to activate in any possible order, but none of them will have an activation time of order larger than r. The order of activation of nodes in V follows the path a as long as the nucleation times associated with the nodes are of order smaller than r. After that, the remaining nodes can activate with positive probability in any order, with an activation time of order at most r. Hence, the transition time conditional on the event A_a(α) satisfies

E_u[T_{G(·)}^Q | A_a(α)] ≍ r [1 + o(1)], r → ∞, (3.5)

and we get

E_u[T_{G(·)}^Q] ≍ r [1 + o(1)], r → ∞. (3.6)

Note that if any of the nodes has an activation time of order r but larger than T_U(r), then the transition time conditional on the event A_a(α) is the time it takes for the queue lengths at nodes in U to hit zero, which satisfies

E_u[T_{G(·)}^Q | A_a(α)] = T_U(r) [1 + o(1)], r → ∞. (3.7)

Hence,

E_u[T_{G(·)}^Q] = T_U(r) [1 + o(1)], r → ∞. (3.8)

(III) β ∈ (1/(d*−1), ∞): supercritical regime. For 0 < α <
1, the situation is the same as in the subcritical regime described above. For α = 1, the situation is the same as in the critical regime described above. For α >
1, the transition time conditional on the event A_a(α) is the time it takes for the queue lengths at nodes in U to hit zero, which satisfies

E_u[T_{G(·)}^Q | A_a(α)] = T_U(r) [1 + o(1)], r → ∞. (3.9)

Hence,

E_u[T_{G(·)}^Q] = T_U(r) [1 + o(1)], r → ∞. (3.10)

Note that the order of the transition time does not depend on the path along which we compute it. The algorithm generates all possible activation paths of the nodes nucleating before time scale λ(r)^{−1} = r^α. The remaining nodes can activate in any order depending on the dynamics. To compute the pre-factor of the mean transition time along these paths, we need to analyze in detail the Markov process describing the graph evolution, in particular, the degrees of the nodes changing over time. Our methods do not capture this detail and we are only able to state a result for the leading-order term.

In this section we discuss the Markov process describing the graph evolution under the dynamics. Control on this process is the key to obtaining more precise asymptotics for the mean transition time of the network.
Consider a dynamics with rate λ(r) = r^{−α}. We have seen in Proposition 2.4 that each node in V whose nucleation time is of smaller order than r^α activates through the nucleation of its fork. On time scale r^α the dynamics starts competing with the nucleation, and the order of activation of the remaining nodes described by the algorithm is not preserved anymore. Note that the algorithm updates the graph at each iteration in order to keep track of the degrees of the remaining nodes after each activation. When introducing the dynamics on the edges, we need information about the states of both the nodes and the edges in the graph. We no longer assume that the algorithm updates the graph at each iteration; instead, we focus on the number of active neighbors each node has.

Definition 4.1 (Active degree).
We define the active degree of a node as the number of its active neighbors. For u ∈ U, the active degree at time t is given by

d̃_u(t) = |{v ∈ V : uv ∈ E(t), X_v(t) = 1}|. (4.1)

Analogously, for v ∈ V, the active degree at time t is given by

d̃_v(t) = |{u ∈ U : uv ∈ E(t), X_u(t) = 1}|. (4.2)

Note that for a node to activate, its active degree must be 0. It is immediate to see that the active degree of a node cannot exceed its degree, i.e., for any u ∈ U and v ∈ V,

d̃_u(t) ≤ d_u(t) and d̃_v(t) ≤ d_v(t). (4.3)

The main challenge in describing the graph evolution is that any of the remaining nodes could activate next with positive probability. The activation of a node due to absence of edges is captured by the scenario in which its active degree hits 0. The activation of a node through the nucleation of its fork depends on the aggressiveness of the activation rates and on the number of active neighbors. Both types of activation are determined by the degree evolution. Assume, for example, that an edge between two active nodes appears. By our model assumptions (see Remark 1.4), the node in U deactivates, implying that the active degrees of its neighbors in V decrease by 1. If the mean nucleation time of the new fork of one of the neighbors is of order less than or equal to r^α, then this neighbor will be more likely to activate through the nucleation of its fork. The degree evolution induced by the dynamics affects both the disconnection and the nucleation times of the nodes.

The node activity process (X(t), Q(t))_{t ≥ 0} and the edge activity process (Y(t))_{t ≥ 0} form a continuous-time Markov process on

X × R_{≥0}^{M+N} × Y (4.4)

that describes the evolution of the graph under the effect of the dynamics. We refer to this process as the graph evolution process. Note that if we know which nodes are active and which edges are present, then we can recover the degree and the active degree of each node in the graph.
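Definition 4.1 translates directly into code. The sketch below uses a hypothetical state representation (an edge set of frozensets and an activity map; the node labels are illustrative):

```python
def active_degree(node, edges, active):
    """Active degree of `node`: the number of its active neighbors.

    `edges` is a set of frozensets {u, v}; `active` maps a node to True/False.
    """
    return sum(1 for e in edges if node in e
                 for w in e if w != node and active[w])

# Tiny example: U = {u1, u2}, V = {v1}, edges u1-v1 and u2-v1, only u1 active.
edges = {frozenset(("u1", "v1")), frozenset(("u2", "v1"))}
active = {"u1": True, "u2": False, "v1": False}
# active_degree("v1", edges, active) counts only the active neighbor u1
```

As in (4.3), the active degree computed this way can never exceed the degree, i.e., the number of incident edges.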
Hence, understanding the graph evolution process is crucial to describe how the degrees of the nodes change over time and how nodes activate.

Consider a feasible state where some nodes are active and some edges are present. By feasible we mean that the state respects the constraints given by the edges, for which two connected nodes cannot be active simultaneously. Recall that |U| = M, |V| = N and |E_{MN}| = MN. Hence, an arbitrary feasible state at time t has h active nodes in U with h = 0, …, M, k active nodes in V with k = 0, …, N, and l present edges with l = 0, …, MN. Consequently, there are M−h inactive nodes in U, N−k inactive nodes in V, and MN−l absent edges. Note that the initial state u is described by h = M, k = 0 and l = |E(0)|, while the transition occurs as soon as state v is reached, for which k = N.

Clock ticks.
The graph evolution is governed by different Poisson clocks ticking at various rates: the activation clocks, the deactivation clocks and the edge clocks. We analyze how the network evolves each time one of these clocks ticks. Moreover, note that the queue lengths, hence the input process (recall Definition 1.1(3)), also play a role, since the activation rates depend on them.

• The activation clock of a node u ∈ U ticks at rate g_U(Q_u(t)) at time t. The probability of this clock being the first one to tick is given by

g_U(Q_u(t)) / Z, (4.5)

with

Z = Σ_{i=1}^{M−h} g_U(Q_i(t)) + Σ_{j=1}^{N−k} g_V(Q_j(t)) + h + k + MN λ(r), (4.6)

where the two sums run over the inactive nodes in U and V, respectively. The tick has two possible effects on the network. If the neighbors of u are all inactive, then u activates and the active degrees of all its neighbors increase by 1. If there is at least one active neighbor of u, then the activation attempt fails and nothing happens.

• The deactivation clock of a node u ∈ U ticks at rate 1. The probability of this clock being the first one to tick is given by

1 / Z. (4.7)

Node u deactivates and the active degrees of all its neighbors decrease by 1.

• The activation clock of a node v ∈ V ticks at rate g_V(Q_v(t)) at time t. The probability of this clock being the first one to tick is given by

g_V(Q_v(t)) / Z. (4.8)

The tick has two possible effects on the network. If the neighbors of v are all inactive, then v activates and the active degrees of all its neighbors increase by 1. If there is at least one active neighbor of v, then the activation attempt fails and nothing happens.

• The deactivation clock of a node v ∈ V ticks at rate 1. The probability of this clock being the first one to tick is given by

1 / Z. (4.9)

Node v deactivates and the active degrees of all its neighbors decrease by 1.

• The activation clock of an edge e ∈ E_{MN} ticks at rate λ(r). The probability of this clock being the first one to tick is given by

λ(r) / Z.
(4.10)

Depending on which edge appears or disappears and on the nodes involved, the tick has different effects on the network, which are described below.
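The clock competition above is exactly the next-event step of a Gillespie-type simulation: the first clock to tick is the one selected with probability rate/Z, and the waiting time is exponential with rate Z. A minimal sketch (event labels and rate values are hypothetical):

```python
import random

def next_event(rates, rng):
    """Sample which Poisson clock ticks first and after how long.

    `rates` maps an event label to its clock rate; the event is chosen
    with probability rate/Z and the waiting time is Exp(Z), Z = sum of rates.
    """
    Z = sum(rates.values())
    wait = rng.expovariate(Z)
    u = rng.random() * Z
    acc = 0.0
    for event, rate in rates.items():
        acc += rate
        if u < acc:  # since u < Z, this branch is always reached
            return event, wait

rng = random.Random(2)
rates = {"activation u": 2.0, "deactivation u": 1.0, "edge clock": 1.0}
freq = sum(next_event(rates, rng)[0] == "activation u"
           for _ in range(20000)) / 20000
# freq should be close to 2.0 / (2.0 + 1.0 + 1.0) = 0.5
```

Iterating this step, and applying the per-event effects listed above and below, yields a simulation of the graph evolution process.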
Edge appearing and disappearing.
If we know the number of active nodes in U and V, then we can compute the probabilities of each of the following scenarios with simple combinatorial arguments. There are four possible scenarios in which an edge can appear.

(◦◦) When an edge between two inactive nodes appears, their degrees increase by 1.

(◦•) When an edge between an inactive node in U and an active node in V appears, the active degree of the node in U increases by 1 and the degree of the node in V increases by 1.

(•◦) When an edge between an active node in U and an inactive node in V appears, the degree of the node in U increases by 1 and the active degree of the node in V increases by 1.

(••) When an edge between two active nodes appears, the node in U deactivates, its active degree increases by 1, the active degrees of all its neighbors in V decrease by 1 and the degree of the node in V increases by 1.

In a similar fashion, there are three possible scenarios in which an edge can disappear. Recall that there cannot be an edge between two active nodes.

(◦◦) When an edge between two inactive nodes disappears, their degrees decrease by 1.

(◦•) When an edge between an inactive node in U and an active node in V disappears, the active degree of the node in U decreases by 1 and the degree of the node in V decreases by 1.

(•◦) When an edge between an active node in U and an inactive node in V disappears, the degree of the node in U decreases by 1 and the active degree of the node in V decreases by 1.

The transition time is related to the graph evolution process, since the activation times of the nodes in V depend on the activation rates, the speed of the dynamics and the degree evolution. The complicated nature of the process prevents us from deriving an explicit formula for the pre-factor of the mean transition time, which would require better control on the precise asymptotics of each activation.

A Appendix: a model with fixed activation rates
We have seen how the dynamics influences the mean transition time of wireless random-access models where the activation rates depend on the current queue lengths at the nodes. The model is quite challenging and deals with two levels of complexity, namely, the queue-based activation rates and the edge dynamics. Not much is known in the literature for random-access protocols with dynamic interference graphs, even for models with fixed activation rates. In this section we adapt the theory built in [3] and [4] to study the effects of the dynamics on these types of models.

Assume that the activation rates are of the form

r_i(t) = r^β if i ∈ U, and r_i(t) = r^{β′} if i ∈ V, (A.1)

with β, β′ ∈ (0, ∞) and β′ > β + 1. We recall that we are interested in the transition time asymptotics as r → ∞.

We start by adapting the result for complete bipartite graphs from [3, Theorem 1.7] to the model with fixed activation rates. The following theorem is consistent with [9, Example 4.1].

Theorem A.1 (Complete bipartite graphs with fixed activation rates).
Consider the complete bipartite graph G = ((U, V), E) with initial queue lengths Q as in (1.6). Suppose that (A.1) holds.

(I) β ∈ (0, 1/(|U|−1)): subcritical regime. The transition time satisfies

E_u[T_G^Q] = (1/|U|) r^{β(|U|−1)} [1 + o(1)], r → ∞. (A.2)

(II) β = 1/(|U|−1): critical regime. The transition time satisfies

E_u[T_G^Q] = (1/|U|) r [1 + o(1)], r → ∞. (A.3)

(III) β ∈ (1/(|U|−1), ∞): supercritical regime. The transition time satisfies

E_u[T_G^Q] = (γ_U/(c − ρ_U)) r [1 + o(1)], r → ∞. (A.4)

Proof.
It follows from [3, Sections 4.1-4.2]. We compute the critical time scale and the mean transition time using fixed activation rates instead of queue-dependent ones. In both the critical and subcritical regimes, the pre-factor turns out to be 1/|U| and the law is exponential. In the critical regime, we know that the queue lengths decrease significantly after a time of order r. However, this does not affect the transition time, since now the activation rates do not depend on the queue lengths. In the supercritical regime, we still have the same behavior as in the model with queue-dependent activation rates. Indeed, when the queue lengths at nodes in U hit zero, the nodes in U deactivate by assumption and the transition occurs.

Next, we state a result for arbitrary bipartite graphs with fixed activation rates (the analogue of Theorem 1.5). Note that the algorithm still plays a crucial role in determining the mean transition time.

Theorem A.2 (Arbitrary bipartite graphs with fixed activation rates).
Consider the bipartite graph G = ((U, V), E) with initial queue lengths Q as in (1.6). Suppose that (A.1) holds. Let A_a be the event that the network follows the admissible path a ∈ A.

(I) β ∈ (0, 1/(d*−1)): subcritical regime. The transition time satisfies

E_u[T_G^Q | A_a] = Σ_{1 ≤ k ≤ N: d̄_k = d*} (1/(n_k d*)) r^{β(d*−1)} [1 + o(1)], r → ∞. (A.5)

(II) β = 1/(d*−1): critical regime. The transition time satisfies

E_u[T_G^Q | A_a] = Σ_{1 ≤ k ≤ N: d̄_k = d*} (1/(n_k d*)) r [1 + o(1)], r → ∞. (A.6)

The above results hold as long as the pre-factor is below the value γ_U/(c − ρ_U), which corresponds to the time it takes for the queue lengths at nodes in U to hit zero. Otherwise, the supercritical regime applies.

(III) β ∈ (1/(d*−1), ∞): supercritical regime. The transition time satisfies

E_u[T_G^Q] = (γ_U/(c − ρ_U)) r [1 + o(1)], r → ∞. (A.7)

Proof.
The claims follow from Theorem A.1 and from the analysis of the algorithm and the next nucleation times in [4, Sections 2, 4.2]. We derive the mean transition time along the paths generated by the algorithm by computing the next nucleation times at each step. In the subcritical regime, the nucleation times of nodes in V are all exponentially distributed and independent of each other. Indeed, the activation rates are the same, independently of the queue lengths decreasing over time. At each step k, the next nucleation time is the minimum of n_k i.i.d. exponential random variables, and hence its mean exhibits the term 1/n_k in the pre-factor. In the critical regime, the pre-factor of the mean transition time along each path must be below the value γ_U/(c − ρ_U), otherwise the supercritical regime applies and the transition occurs because the queue lengths at nodes in U hit 0. If we assume that γ_U/(c − ρ_U) >
1, then the nucleation of a fork occurs before the queue lengths at nodes in U hit zero. We are able to derive the law of the transition time along each path for both the subcritical and critical regimes. Both are described by convolutions of the exponential laws of the next nucleation times of the activating nodes in V. In the supercritical regime, we have the same behavior as in the model with queue-dependent activation rates.

Finally, we show that the results from Theorem 1.6 also hold when we consider a dynamic bipartite graph with fixed activation rates. We are able to compute the order of the mean transition time, while the pre-factor still depends on the graph evolution described in Section 4.

Theorem A.3 (Dynamic bipartite graphs with fixed activation rates).
Consider the dynamic bipartite graph G(·) = ((U, V), E(·)) with the edge dynamics governed by λ(r) and initial queue lengths Q. Suppose that (A.1) holds. Then the results of Theorem 1.6 hold.

Proof. The claim follows from Theorem A.2 and from the intuition behind Proposition 2.4. The order of the mean transition time in the model with fixed activation rates is the same as in the model with queue-dependent activation rates. The dynamics competes with the nucleations of the nodes in the same way, depending on its speed. The different types of dynamics (fast, regular and slow) lead to the same results as in Theorem 1.6.
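As a back-of-the-envelope companion to Theorem A.3, the competition in Proposition 2.4 suggests that, under a dynamics λ(r) = r^{−α}, a degree-d node activates on the faster of the disconnection scale r^α and the nucleation scale r^{1 ∧ β(d−1)}. The helper below condenses this bookkeeping; it is an illustrative heuristic, not a statement of the theorem, and the parameter values in the examples are hypothetical.

```python
def activation_exponent(alpha, beta, d):
    """Heuristic exponent e such that the activation time of a degree-d
    node is of order r**e: the minimum of the disconnection exponent
    alpha and the nucleation exponent min(1, beta*(d - 1))."""
    nucleation = min(1.0, beta * (d - 1))
    return min(alpha, nucleation)

# Example (hypothetical parameters):
# activation_exponent(0.5, 0.3, 3) -> 0.5  (disconnection wins, case (i))
# activation_exponent(2.0, 0.3, 3) -> 0.6  (nucleation wins, case (iii))
```

The cap at exponent 1 reflects the fact that the transition can never take longer than the time of order r it takes for the queue lengths at nodes in U to hit zero.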
References

[1] T. Bonald, S. C. Borst, A. Proutiere. How mobility impacts the flow-level performance of wireless data systems. In: Proceedings of the IEEE INFOCOM 2004 Conference, volume 3, 1872–1881, 2004.

[2] T. Bonald, S. C. Borst, N. Hegde, M. Jonckheere, A. Proutiere. Flow-level performance and capacity of wireless networks with user mobility. Queueing Systems 63(1-4): 131–164, 2009.

[3] S. Borst, F. den Hollander, F. R. Nardi, M. Sfragara. Transition time asymptotics of queue-based activation protocols in random-access networks. Stochastic Processes and Their Applications 130(12): 7483–7517, 2020.

[4] S. Borst, F. den Hollander, F. R. Nardi, M. Sfragara. Wireless random-access networks with bipartite interference graphs. [arXiv:2001.02841], 2020.

[5] S. C. Borst, N. Hegde, A. Proutiere. Mobility-driven scheduling in wireless networks. In: Proceedings of the IEEE INFOCOM 2009 Conference, 1260–1268, 2009.

[6] S. C. Borst, A. Proutiere, N. Hegde. Capacity of wireless data networks with intra- and inter-cell mobility. In: Proceedings of the IEEE INFOCOM 2006 Conference, 1058–1069, 2006.

[7] S. C. Borst, F. Simatos. A stochastic network with mobile users in heavy traffic. Queueing Systems 74(1): 1–40, 2013.

[8] M. Grossglauser, D. N. C. Tse. Mobility increases the capacity of ad hoc wireless networks. IEEE/ACM Transactions on Networking 10(4): 477–486, 2002.

[9] F. den Hollander, F. R. Nardi, S. Taati. Metastability of hard-core dynamics on bipartite graphs. Electronic Journal of Probability 23(97): 1–65, 2018.

[10] T. S. Rappaport. Wireless Communications: Principles and Practice, second edition. Prentice Hall, 2002.

[11] F. Simatos, A. Simonian.