Robust And Optimal Opportunistic Scheduling For Downlink 2-Flow Network Coding With Varying Channel Quality and Rate Adaptation
Wei-Cheng Kuo, Chih-Chun Wang {wkuo, chihw}@purdue.edu
School of Electrical and Computer Engineering, Purdue University, USA

Abstract—This paper considers the downlink traffic from a base station to two different clients. When assuming infinite backlog, it is known that inter-session network coding (INC) can significantly increase the throughput of each flow. However, the corresponding scheduling solution (when assuming dynamic arrivals instead and requiring bounded delay) is still nascent. For the 2-flow downlink scenario, we propose the first opportunistic INC + scheduling solution that is provably optimal for time-varying channels, i.e., the corresponding stability region matches the optimal Shannon capacity. Specifically, we first introduce a new binary INC operation, which is distinctly different from the traditional wisdom of XORing two overheard packets. We then develop a queue-length-based scheduling scheme, which, with the help of the new INC operation, can robustly and optimally adapt to time-varying channel quality. We then show that the proposed algorithm can be easily extended for rate adaptation and it again robustly achieves the optimal throughput. A byproduct of our results is a scheduling scheme for stochastic processing networks (SPNs) with random departure, which relaxes the assumption of deterministic departure in the existing results. The new SPN scheduler could thus further broaden the applications of SPN scheduling to other real-world scenarios.
I. INTRODUCTION
Since 2000, NC has emerged as a promising technique in communication networks. The seminal work by [1] shows linear intra-session NC achieves the min-cut/max-flow capacity of single-session multi-cast networks. The natural connection of intra-session NC and the maximum flow allows the use of back-pressure (BP) algorithms to stabilize intra-session NC traffic, see [2] and the references therein.

However, when there are multiple coexisting sessions, the benefits of inter-session network coding (INC) are far from fully utilized. The COPE architecture [3] demonstrated that a simple INC scheme can provide 40%–200% throughput improvement when compared to the existing TCP/IP architecture in a testbed environment. Several analytical attempts have been made to characterize the INC capacity (or provably achievable throughput) for various small network topologies [4]–[7]. However, unlike the case of intra-session NC, there is no direct analogy from INC to the commodity flow. As a result, it is much more challenging to derive BP-based scheduling for INC traffic. We use the following example to illustrate this point. Consider a single source s and two destinations d_1 and d_2. Source s would like to send to d_1 the X_i packets, i = 1, 2, ...; and send to d_2 the Y_j packets, j = 1, 2, .... The simplest INC scheme consists of three operations. OP1: Send uncodedly those X_i that have not been heard by any of {d_1, d_2}. OP2: Send uncodedly those Y_j that have not been heard by any of {d_1, d_2}. OP3: Send a linear sum [X_i + Y_j] where X_i has been overheard by d_2 but not by d_1 and Y_j has been overheard by d_1 but not by d_2. For future reference, we denote OP1 to OP3 by NON-CODING-1, NON-CODING-2, and CLASSIC-XOR, respectively.

(Footnote: This work was supported in parts by NSF grants CCF-0845968, CNS-0905331, and CCF-1422997. Part of the results were presented in the 2014 INFOCOM.)

Fig. 1. The virtual networks of two INC schemes: (a) INC using only 3 operations; (b) INC using only 5 operations.

OP1 to OP3 can also be represented by the virtual network (vr-network) in Fig. 1(a). Namely, any newly arrived X_i and Y_j virtual packets (vr-packets) that have not been heard by any of {d_1, d_2} are stored in queues Q^1_∅ and Q^2_∅, respectively. The superscript k ∈ {1, 2} indicates that the queue is for the session-k packets. The subscript ∅ indicates that those packets have not been heard by any of {d_1, d_2}. NON-CODING-1 then takes one X_i vr-packet from Q^1_∅ and sends it uncodedly. If such X_i is heard by d_1, then the vr-packet leaves the vr-network, which is described by the dotted arrow emanating from the NON-CODING-1 block. If X_i is overheard by d_2 but not d_1, then we place it in queue Q^1_{2}, the queue for the overheard session-1 packets. NON-CODING-2 in Fig. 1(a) can be interpreted symmetrically. The CLASSIC-XOR operation takes an X_i from Q^1_{2} and a Y_j from Q^2_{1} and sends [X_i + Y_j]. If d_1 receives [X_i + Y_j], then X_i is removed from Q^1_{2} and leaves the vr-network. If d_2 receives [X_i + Y_j], then Y_j is removed from Q^2_{1} and leaves the vr-network. The transition probability (of the edges) of the vr-network can be computed by analyzing the corresponding random reception events when transmitting the packet physically. We often use "virtual packets" to refer to the packets (jobs) inside the vr-network.
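These queue movements can be sketched in a few lines of Python (our own illustrative code, not from the paper; the names Q1_e, Q1_2, etc. are shorthand for Q^1_∅, Q^1_{2}, and the reception probabilities are hypothetical):

```python
import random

# Toy sketch of the Fig. 1(a) vr-network. Q1_e/Q2_e hold packets heard by
# neither destination; Q1_2/Q2_1 hold packets overheard only by the
# unintended destination. "delivered" collects vr-packets that have left.
random.seed(1)
queues = {"Q1_e": ["X1", "X2"], "Q2_e": ["Y1"], "Q1_2": [], "Q2_1": []}
delivered = []

def receive(p1, p2):
    """Sample which destinations hear the current transmission."""
    return random.random() < p1, random.random() < p2

def non_coding_1(p1=0.5, p2=0.5):
    """OP1: send one un-heard session-1 packet uncodedly."""
    x = queues["Q1_e"].pop(0)
    d1, d2 = receive(p1, p2)
    if d1:
        delivered.append(x)          # heard by d1: leaves the vr-network
    elif d2:
        queues["Q1_2"].append(x)     # overheard only by d2: side information
    else:
        queues["Q1_e"].insert(0, x)  # erased at both: stays un-heard

def classic_xor(p1=0.5, p2=0.5):
    """OP3: send [X_i + Y_j]; needs both overheard queues non-empty."""
    assert queues["Q1_2"] and queues["Q2_1"]
    d1, d2 = receive(p1, p2)
    if d1:
        delivered.append(queues["Q1_2"].pop(0))  # d1 decodes its X_i
    if d2:
        delivered.append(queues["Q2_1"].pop(0))  # d2 decodes its Y_j
```

Note the total number of vr-packets (in queues plus delivered) is conserved, mirroring how vr-packets only move along the edges of the vr-network or leave it.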
Fig. 2. The two components of optimal dynamic INC design.
It is known [8] that with dynamic packet arrivals, any INC scheme that (i) uses only these three operations and (ii) attains bounded decoding delay with arrival rates (R_1, R_2) can always be converted to a scheduling solution that stabilizes the vr-network with arrival rates (R_1, R_2), and vice versa. The INC design problem is thus converted to a scheduling problem on the vr-network.
To distinguish the above INC design for dynamic arrivals (the concept of the stability region) from the INC design assuming infinite backlog and decoding delay (the concept of the Shannon capacity), we term the former the dynamic INC design problem and the latter the block-code INC design problem. The above vr-network representation also allows us to divide the optimal dynamic INC design problem into solving the following two major challenges separately.
Challenge 1: The example in Fig. 1(a) focuses on dynamic INC schemes using only 3 possible operations. Obviously, the more INC operations one can choose from, the larger the degree of design freedom, and the higher the achievable throughput.
The goal is thus to find a (small) finite set of INC operations that can provably maximize the "block-code" achievable throughput.

Challenge 2: Suppose that we have found a set of INC operations that achieves the block-code capacity. However, it does not mean that such a set of INC operations always leads to a dynamic INC design since we still need to consider the delay/stability requirements. Specifically, once the optimal set of INC operations is decided, we can derive the corresponding vr-network.
The goal is then to devise a stabilizing scheduling policy for the vr-network, which leads to an equivalent representation of the optimal dynamic INC solution.
See Fig. 2 for the illustration of these two separate tasks. Both tasks turn out to be highly non-trivial, and the optimal dynamic INC solution [4], [8], [9] has been designed only for the scenario of fixed channel quality. Specifically, [10] answers Challenge 1 and shows that for fixed channel quality, the 3 INC operations in Fig. 1(a) plus 2 additional DEGENERATE-XOR operations, see Fig. 1(b) and Section II-B1, can achieve the block-code INC capacity. One difficulty of resolving Challenge 2 is that an INC operation may involve multiple queues simultaneously, e.g., CLASSIC-XOR can only be scheduled when both Q^1_{2} and Q^2_{1} are non-empty. This is in sharp contrast with the traditional BP solutions in which each queue can act independently. For the vr-network in Fig. 1(b), [4] circumvents this problem by designing a fixed priority rule that gives strict precedence to the CLASSIC-XOR operation. (To be more precise, a critical assumption in [Section II C.1, [11]] is that if two queues can be activated at the same time, then we can also choose to activate exactly one of them if desired. This is unfortunately not the case in the vr-network. E.g., CLASSIC-XOR activates both Q^1_{2} and Q^2_{1} but no coding operation in Fig. 1(a) activates only one of Q^1_{2} and Q^2_{1}.) Alternatively, [8] derives a BP scheduling scheme by noticing that the vr-network in Fig. 1(b) can be decoupled into two vr-subnetworks (one for each data session) so that the queues in each of the vr-subnetworks can be activated independently and the traditional BP results follow.

However, the channel quality varies over time in practical wireless downlink scenarios. Therefore, one should opportunistically choose the most favorable users as receivers, the so-called opportunistic scheduling technique. Nonetheless, [12] recently shows that when allowing opportunistic coding+scheduling for time-varying channels, the 5 operations in Fig. 1(b) no longer achieve the block-code capacity. The existing dynamic INC designs in [4], [8] are thus strictly suboptimal for time-varying channels since they are based on a suboptimal set of INC operations (recall Fig. 2). In this work, we propose a new optimal dynamic INC design for 2-flow downlink traffic with time-varying packet erasure channels. Our detailed contributions are summarized as follows.
Contribution 1:
We introduce a new pair of INC operations such that (i) the underlying concept is distinctly different from the traditional wisdom of XORing two overheard packets; (ii) the overall scheme uses only the ultra-low-complexity binary XOR operation; and (iii) the new set of INC operations is guaranteed to achieve the block-code-based Shannon capacity.
Contribution 2:
The introduction of new INC operations leads to a new vr-network that is different from Fig. 1(b) and for which the existing "vr-network decoupling + BP" approach in [8] no longer holds. To answer Challenge 2 of the optimal dynamic INC design, we generalize the results on Stochastic Processing Networks (SPNs) [13], [14] and successfully apply them to the new vr-network. The end result is an opportunistic, dynamic INC solution that is completely queue-length-based and can robustly adapt to time-varying channels while achieving the largest possible stability region.

Contribution 3:
The proposed solution can also be readily generalized for rate adaptation. Through numerical experiments, we have shown that a simple extension of the proposed scheme can opportunistically and optimally choose the order of modulation and the rate of the error correcting codes used for each packet transmission while achieving the optimal stability region, i.e., equal to the Shannon capacity.
Contribution 4:
A byproduct of our results is a scheduling scheme for SPNs with random departure. The new results relax the previous assumption of deterministic departure, a major limitation of the existing SPN model, by considering stochastic packet departure behavior. The new scheduling solution could thus further broaden the applications of SPN scheduling to other real-world scenarios.

The rest of the paper is organized as follows. Section II discusses the existing results on INC design and on SPN scheduling. Sections III and IV propose a new INC operation and a new SPN scheduling solution, respectively. Section V elaborates how to combine the new INC operation and the new SPN scheduling to derive the optimal dynamic INC solution. Section VI contains the simulation results and Section VII concludes the paper. Most of the proofs are left in the appendices to improve the readability of the main findings.

Fig. 3. The time-varying broadcast packet erasure channel.

II. PROBLEM FORMULATION AND EXISTING RESULTS
In this section, we will introduce the problem formulation and then discuss the latest results in the block-code LNC literature (related to solving Challenge 1) and in the SPN scheduling work (related to solving Challenge 2).
A. Problem Formulation — The Broadcast Erasure Channel
We model the 1-base-station/2-client downlink traffic as a broadcast packet erasure channel. See Fig. 3 for illustration. The detailed model description is as follows. Consider the following slotted transmission system.
Dynamic Arrival:
In the beginning of every time t, there are A_1(t) session-1 packets and A_2(t) session-2 packets arriving at source s. We assume that A_1(t) and A_2(t) are i.i.d. integer-valued random variables with mean (E{A_1(t)}, E{A_2(t)}) = (R_1, R_2) and bounded support. Recall that X_i and Y_j, i, j ∈ N, denote the session-1 and session-2 packets, respectively.

Time-Varying Channel:
We model the time-varying channel quality by a random process cq(t), which, as will be elaborated shortly, decides the reception probability of the broadcast packet erasure channel. In our stability proofs, we assume cq(t) is i.i.d. On the other hand, our numerical experiments show that the proposed scheme achieves the optimal stability region for any ergodic cq(t), say cq(t) being periodic. Let CQ denote the support of cq(t) and we assume |CQ| is finite. For any c ∈ CQ, we use f_c to denote the expected/long-term frequency of cq(t) = c. Without loss of generality, assume f_c > 0 for all c ∈ CQ. Obviously Σ_{c∈CQ} f_c = 1 since the total frequency is 1.

Broadcast Packet Erasure Channel:
For each time slot t, source s can transmit one packet, which will be received by a random subset of destinations {d_1, d_2}. Specifically, there are 4 possible reception statuses {d_1 d_2, d_1 d̄_2, d̄_1 d_2, d̄_1 d̄_2}, e.g., the reception status rcpt = d_1 d̄_2 means that the packet is received by d_1 but not d_2. The reception status probabilities can be described jointly by a vector ~p = (p_{d_1 d_2}, p_{d_1 d̄_2}, p_{d̄_1 d_2}, p_{d̄_1 d̄_2}). For example, ~p = (0, 0.5, 0.5, 0) means that every time we transmit a packet, with 0.5 probability it will be received by d_1 only and with 0.5 probability it will be received by d_2 only. It will never be received by d_1 and d_2 simultaneously. In contrast, if we have ~p = (1, 0, 0, 0), then it means that the packet is always received by d_1 and d_2 simultaneously. Since our model allows an arbitrary joint probability vector ~p, it captures the scenarios in which the erasure events of d_1 and d_2 are dependent, e.g., when the erasures at d_1 and d_2 are caused by a common (random) external interference source.

Opportunistic INC:
Since the reception probability is decided by the channel quality, we write ~p(cq(t)) as a function of cq(t) at time t. In the beginning of time t, we assume that s is aware of the channel quality cq(t) (and thus knows ~p(cq(t))) so that s can opportunistically decide how to encode the packet for the current time slot. See Fig. 3. This opportunistic setting thus models the use of cognitive radio at source s.

ACKnowledgement:
In the end of time t, both d_1 and d_2 will report back to s whether they have received the transmitted packet or not. This models the use of ACK.

B. Existing Results on Block INC Design
References [12], [15] focus on the above setting but consider the infinite-backlog block-code design instead of dynamic arrivals. Two findings of [12], [15] are summarized here.
1) The 5 INC operations in Fig. 1(b) are no longer optimal for time-varying channels:
In Section I, we have detailed 3 INC operations: NON-CODING-1, NON-CODING-2, and CLASSIC-XOR. Two additional INC operations are introduced in [10]: DEGENERATE-XOR-1 and DEGENERATE-XOR-2, as illustrated in Fig. 1(b). Specifically, DEGENERATE-XOR-1 is designed to handle the degenerate case in which Q^1_{2} is non-empty but Q^2_{1} = ∅. Namely, there is at least one X_i packet overheard by d_2 but there is no Y_j packet overheard by d_1. Not having such a Y_j implies that one cannot send [X_i + Y_j] (the CLASSIC-XOR operation). An alternative is thus to send the overheard X_i uncodedly (as if sending [X_i + 0]). We term this operation DEGENERATE-XOR-1. One can see from Fig. 1(b) that DEGENERATE-XOR-1 takes a vr-packet from Q^1_{2} as input. If d_1 receives it, the vr-packet will leave the vr-network. DEGENERATE-XOR-2 is the symmetric version of DEGENERATE-XOR-1.

We use the following example to illustrate the sub-optimality of the above 5 operations. Suppose s has an X_1 packet for d_1 and a Y_1 packet for d_2 and consider a duration of 2 time slots. Also suppose that s knows beforehand that the time-varying channel will have (i) ~p = (0, 0.5, 0.5, 0) for slot 1; and (ii) ~p = (1, 0, 0, 0) for slot 2. The goal is to transmit as many packets in 2 time slots as possible.

Solution 1: INC based on the 5 operations in Fig. 1(b).
In the beginning of time 1, both Q^1_{2} and Q^2_{1} are empty. Therefore, we can only choose either NON-CODING-1 or NON-CODING-2. Since the setting is symmetric, without loss of generality we assume that we choose NON-CODING-1 and thus send X_1 uncodedly. Since ~p = (0, 0.5, 0.5, 0) in slot 1, there are only two cases to consider. Case 1: X_1 is received only by d_1. In this case, we can send Y_1 in the second time slot, which is guaranteed to arrive at d_2 since ~p = (1, 0, 0, 0) in slot 2. The total sum rate is sending 2 packets (X_1 and Y_1) in 2 time slots. Case 2: X_1 is received only by d_2. In this case, Q^1_{2} contains one packet X_1, Q^2_∅ contains one packet Y_1, and all the other queues in Fig. 1(b) are empty. We can thus choose either NON-CODING-2 or DEGENERATE-XOR-1 for slot 2. Regardless of which coding operation we choose, slot 2 will then deliver 1 packet to either d_1 or d_2, depending on the INC operation we choose. Since no packet is delivered in slot 1, the total sum rate is 1 packet in 2 time slots. Since both cases have probability 0.5, the expected sum rate is 2 · 0.5 + 1 · 0.5 = 1.5 packets in 2 time slots.

An optimal solution:
We can achieve strictly better throughput by introducing new INC operations. Specifically, in slot 1, we send the linear sum [X_1 + Y_1] even though neither X_1 nor Y_1 has ever been transmitted, a distinct departure from the existing 5-operation-based solutions. Again consider two cases. Case 1: [X_1 + Y_1] is received only by d_1. In this case, we let s send Y_1 uncodedly in slot 2. Since ~p = (1, 0, 0, 0) in slot 2, the packet Y_1 will be received by both d_1 and d_2. d_2 is happy since it has now received the desired Y_1 packet. d_1 can use Y_1 together with the [X_1 + Y_1] packet received in slot 1 to decode its desired X_1 packet. Therefore, we deliver 2 packets (X_1 and Y_1) in 2 time slots. Case 2: [X_1 + Y_1] is received only by d_2. In this case, we let s send X_1 uncodedly in slot 2. By the symmetric argument of Case 1, we deliver 2 packets (X_1 and Y_1) in 2 time slots. As a result, the sum rate of the new solution is 2 packets in 2 slots, a 33% improvement over the existing solution.

Remark:
This example focuses on a 2-time-slot duration due to the simplicity of the analysis. It is worth noting that the throughput improvement persists even for infinitely many time slots. See the simulation results in Section VI.
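The case analysis above is easy to verify numerically. The following Monte Carlo sketch (our own code, not from the paper; each strategy function simply encodes the per-case packet counts derived in the text) estimates both expected sum rates:

```python
import random

random.seed(0)
TRIALS = 200_000

def slot1_outcome():
    """Slot 1 has ~p = (0, 0.5, 0.5, 0): exactly one destination hears."""
    return "d1" if random.random() < 0.5 else "d2"

def five_op_solution(rx):
    """Solution 1: send X1 uncoded in slot 1; slot 2 is heard by both."""
    # Case 1: d1 heard X1 -> slot 2 delivers Y1 to d2 (2 packets total).
    # Case 2: only d2 overheard X1 -> slot 2 delivers just 1 more packet.
    return 2 if rx == "d1" else 1

def premix_solution(rx):
    """New solution: send [X1 + Y1] in slot 1, then the uncoded packet the
    sole receiver still needs; both destinations decode either way."""
    return 2

avg_old = sum(five_op_solution(slot1_outcome()) for _ in range(TRIALS)) / TRIALS
avg_new = sum(premix_solution(slot1_outcome()) for _ in range(TRIALS)) / TRIALS
```

With enough trials, avg_old concentrates around 1.5 packets per 2 slots while avg_new is exactly 2, matching the 33% improvement.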
2) [12], [15] also derive the block-code capacity region:
We summarize the high-level description of [15]:
Proposition 1: [Propositions 1 and 3, [15]] For the block-code setting, a rate vector (R_1, R_2) can be achieved if and only if the corresponding linear programming (LP) problem is feasible. Given any (R_1, R_2), the LP problem of interest involves ·|CQ| + 7 non-negative variables and |CQ| + 16 (in-)equalities and can be explicitly computed.

Our goal is to design a dynamic INC scheme whose stability region matches the block-code capacity region in Proposition 1.

C. Stochastic Processing Networks (SPNs)
The main tool that we use to stabilize the vr-network is scheduling for stochastic processing networks (SPNs). In the following, we will discuss the basic definitions and existing results on SPNs.
1) The Main Feature of SPNs:
The SPN is a generalization of store-and-forward networks. In an SPN, a packet cannot be transmitted directly from one queue to another queue through links. Instead, it must first be processed by a unit called a "Service Activity" (SA). The SA first collects a certain amount of packets from one or more queues (named the input queues), jointly processes/consumes these packets, generates a new set of packets, and finally redistributes them to another set of queues (named the output queues). The number of consumed packets may be different from the number of generated packets. There is one critical rule for an SPN: an SA can be activated only when all its input queues can provide enough packets for the SA to process. This rule captures directly the INC behavior and thus makes INC a natural application of SPNs. Other applications of SPNs include the video streaming problem [16] and the Map-&-Reduce scheduling problem [17].

(Footnote: ACK is critical in this scheme. I.e., s needs to know whether it is d_1 or d_2 who has received [X_1 + Y_1] in slot 1 before deciding whether to send Y_1 or X_1 in slot 2.)
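The critical SA gating rule can be stated in one line of Python (our own sketch; the queue names and required packet counts below are hypothetical):

```python
def can_activate(required, queue_len):
    """The SPN rule: an SA may fire only if every input queue k can
    supply the required[k] packets the SA would consume."""
    return all(queue_len.get(k, 0) >= need for k, need in required.items())

# Hypothetical CLASSIC-XOR-like SA: one packet from each overheard queue.
xor_inputs = {"Q1_2": 1, "Q2_1": 1}

print(can_activate(xor_inputs, {"Q1_2": 3, "Q2_1": 2}))  # True: both stocked
print(can_activate(xor_inputs, {"Q1_2": 3, "Q2_1": 0}))  # False: one input empty
```

This all-or-nothing coupling of input queues is exactly what breaks the per-queue independence assumed by classical back-pressure scheduling.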
2) SPNs with Deterministic Departure:
All the existing SPN scheduling solutions [13], [14] assume a special class of SPNs, which we call SPNs with deterministic departure. We elaborate the detailed definition in the following. Consider a time-slotted system with i.i.d. channel quality cq(t). An SPN consists of three components: the input activities (IAs), the service activities (SAs), and the queues. We suppose that there are K queues, M IAs, and N SAs in the SPN.
Input Activities:
Each IA represents a session (or a flow) of packets. Specifically, each IA injects a deterministic number of packets to a deterministic set of queues when activated. That is, when an IA m is activated, it injects a deterministic number α_{k,m} of packets to queue k for a group of different k. Let A ∈ R^{K×M} be the "input matrix" with the (k,m)-th entry equal to α_{k,m}, for all m and k. At each time t, a random subset of IAs will be activated. Equivalently, we define a(t) = (a_1(t), a_2(t), ..., a_M(t)) ∈ {0,1}^M as the random "arrival vector" at time t. If a_m(t) = 1, then IA m is activated at time t. We assume that the random vector a(t) is i.i.d. over time with the average rate vector R = E{a(t)}. In our setting, the A matrix is a fixed (deterministic) system parameter and all the randomness of IAs lies in a(t).

Service Activities:
For each service activity SA n, we define the input queues of SA n as the queues which are required to provide specified amounts of packets when SA n is activated. Let I_n denote the collection of the input queues of SA n. Similarly, we define the output queues of SA n as the queues which will possibly receive packets when SA n is activated, and let O_n be the collection of the output queues of SA n. That is, when SA n is activated, it takes packets from queues in I_n, and sends packets to queues in O_n. We assume that cq(t) does not change I_n and O_n. Let β^in_{k,n}(c) be the number of packets from queue k ∈ I_n that will be consumed by SA n if SA n is activated under channel quality cq(t) = c. Specifically, β^in_{k,n}(c) ≥ 0 if queue k is an input queue of SA n (i.e., k ∈ I_n), and we set β^in_{k,n}(c) = 0 otherwise. Similarly, let β^out_{k,n}(c) be the number of packets received by queue k if SA n is activated under channel quality cq(t) = c. Specifically, β^out_{k,n}(c) ≥ 0 if queue k ∈ O_n, and β^out_{k,n}(c) = 0 otherwise. Let B^in(c) ∈ R^{K×N} be the input service matrix under channel quality c with the (k,n)-th entry equal to β^in_{k,n}(c), and let B^out(c) ∈ R^{K×N} be the output service matrix under channel quality c with the (k,n)-th entry equal to β^out_{k,n}(c). For simplicity, we sometimes write B^in and B^out instead of B^in(c) and B^out(c). In the deterministic SPN setting, the matrices B^in(c) and B^out(c) are deterministic. The only random part is the arrival vector a(t) and the channel quality cq(t). At the beginning of each time t, the SPN scheduler is made aware of the current channel quality cq(t) and can choose to "activate" a subset of the SAs. Let x(t) ∈ {0,1}^N be the "service vector" at time t. If the n-th coordinate x_n(t) = 1, then it implies that we choose to activate SA n at time t.
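As a toy instantiation of these definitions (our own illustrative numbers, not from the paper), consider K = 2 queues in tandem, M = 1 IA, N = 2 SAs, and a single channel state:

```python
import numpy as np

# IA1 injects 1 packet into queue 1 (alpha_{1,1} = 1).
A = np.array([[1.0],
              [0.0]])               # K x M input matrix

# SA1 consumes 1 packet from queue 1 and emits 1 packet into queue 2;
# SA2 consumes 1 packet from queue 2 and sends it out of the network.
B_in = np.array([[1.0, 0.0],
                 [0.0, 1.0]])       # K x N, entries beta_in_{k,n}
B_out = np.array([[0.0, 0.0],
                  [1.0, 0.0]])      # K x N, entries beta_out_{k,n}

# Net change of the queue-length vector in one slot where IA1 fires
# and both SAs are activated, x = (1, 1):
x = np.array([1.0, 1.0])
a = np.array([1.0])                 # arrival vector a(t)
delta = A @ a + (B_out - B_in) @ x
```

Here delta is the zero vector: arrivals, SA1, and SA2 exactly balance, the kind of operating point the flow-conservation condition (1) below formalizes.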
Note that for some applications we may need to impose the condition that some of the SAs cannot be scheduled in the same time slot. To model this interference constraint, we require x(t) to be chosen from a pre-defined set of binary vectors X. Define Λ to be the convex hull of X and let Λ° be the interior of Λ.

Acyclicness of the Underlying SPN:
The input/output queues I_n and O_n of the SAs can be used to plot the corresponding SPN. We assume that the SPN is acyclic.

Existing results on the stability region of deterministic SPNs:
Recall that f_c is the relative frequency of cq(t) = c and all the vectors are row vectors. We then have the following definition and proposition.

Definition 1: For deterministic SPNs, an arrival rate vector R is "feasible" if there exist s_c ∈ Λ for all c ∈ CQ such that

A · R^T + Σ_{c∈CQ} f_c · B^out(c) · s_c^T = Σ_{c∈CQ} f_c · B^in(c) · s_c^T,   (1)

where (~v)^T is the transpose of the row vector ~v. A rate vector R is "strictly feasible" if there exist s_c ∈ Λ° for all c ∈ CQ such that (1) holds. Eq. (1) can be viewed as a flow conservation law of the deterministic SPN, for which the left-hand side describes the packets injected into queues 1 to K and the right-hand side corresponds to the packets leaving the queues.

Proposition 2: [A combination of [13], [14]] For deterministic SPNs, only feasible R can possibly be stabilized. Moreover, there exists an SPN scheduler that can stabilize all R that are strictly feasible.

The achievability part for SPNs with deterministic departure (Proposition 2) is proven by the Deficit Max-Weight (DMW) algorithm in [13] and by the Perturb Max-Weight (PMW) algorithm in [14]. In the following, we briefly explain the existing DMW algorithm [13].
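Condition (1) is easy to check numerically. The snippet below (our own toy numbers: a two-queue tandem SPN, one IA feeding queue 1, SA1 moving queue 1 to queue 2, SA2 draining queue 2, with service halved in a "bad" channel state) verifies that R = 0.45 is feasible:

```python
import numpy as np

A = np.array([[1.0], [0.0]])
R = np.array([0.45])                                  # arrival rate

f = {"good": 0.8, "bad": 0.2}                         # frequencies f_c
B_in = {"good": np.eye(2),
        "bad": 0.5 * np.eye(2)}                       # slower when bad
B_out = {"good": np.array([[0.0, 0.0], [1.0, 0.0]]),
         "bad": np.array([[0.0, 0.0], [0.5, 0.0]])}

# Candidate per-state activation frequencies s_c (in the convex hull
# of {0,1}^2, i.e., each SA active half the time in every state).
s = {"good": np.array([0.5, 0.5]), "bad": np.array([0.5, 0.5])}

lhs = A @ R + sum(f[c] * B_out[c] @ s[c] for c in f)  # packets injected
rhs = sum(f[c] * B_in[c] @ s[c] for c in f)           # packets leaving
```

Both sides evaluate to (0.45, 0.45), so (1) holds for this (R, {s_c}) pair.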
3) The Deficit Maximum Weight (DMW) Scheduling:
In the DMW algorithm [13] for SPNs with deterministic departure, each queue k maintains a real-valued counter q_k(t), called the virtual queue length. Initially, q_k(1) is set to 0. For comparison, the actual queue length is denoted by Q_k(t) instead. The key feature of a DMW algorithm is that it makes a back-pressure-based decision based on the virtual queue lengths, not on the actual queue lengths. Specifically, for each time t, we compute the "preferred service vector" by

x*(t) = arg max_{x∈X} d^T(t) · x,   (2)

where d(t) is the back-pressure vector defined as d(t) = (B^in(cq(t)) − B^out(cq(t)))^T q(t), and q(t) is the vector of the virtual queue lengths. After computing the preferred SA vector x*(t), we update q(t) according to the following flow conservation law:

q(t+1) = q(t) + A · a(t) + (B^out(cq(t)) − B^in(cq(t))) · x*(t).   (3)

(Footnote: As we will see later, sometimes we may not be able to execute/schedule the preferred service activities chosen by (2). This is the reason why we only call the x*(t) vector in (2) a preferred choice, instead of a scheduling choice.)

Fig. 4. An SPN with random departure.

Unlike the actual queue lengths Q_k(t), which are always ≥ 0, the virtual queue length q(t) can be smaller than 0 when updated via (3). That is, we do not need to take the projection to positive numbers when computing q(t). It is worth emphasizing that the actual queue length still has to follow the SPN rule. That is, suppose SA n is the preferred service activity according to (2) but for at least one of its input queues, say queue k, the actual queue length Q_k(t) is smaller than β^in_{k,n}(cq(t)), the number of packets that are supposed to leave queue k. According to the SPN model, we cannot schedule the preferred SA n due to the lack of enough packets in queue k.
When this scenario happens, DMW simply skips activating SA n for this particular time slot, the system remains idle, and the actual queue length Q_k(t+1) = Q_k(t). On the other hand, even though the system stays idle, the virtual queue length q(t) is still updated by (3). The above DMW algorithm is used to prove Proposition 2 in [13].
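A single DMW time step can be sketched as follows (our own toy code with one channel state; the activation set X and all matrix entries are assumed for illustration, not taken from [13]):

```python
import numpy as np

# Toy SPN: K = 2 queues, N = 2 SAs, 1 IA feeding queue 1.
A = np.array([[1.0], [0.0]])
B_in = np.array([[1.0, 0.0], [0.0, 1.0]])
B_out = np.array([[0.0, 0.0], [1.0, 0.0]])
X = [np.array(v, dtype=float) for v in ([0, 0], [1, 0], [0, 1], [1, 1])]

q = np.array([3.0, -1.0])   # virtual lengths: may be negative
Q = np.array([2.0, 0.0])    # actual lengths: always >= 0
a = np.array([1.0])         # arrival vector a(t)

# (2): the preferred service vector maximizes the virtual back-pressure
d = (B_in - B_out).T @ q
x_star = max(X, key=lambda x: float(d @ x))

# (3): the virtual queues always follow the flow-conservation update
q = q + A @ a + (B_out - B_in) @ x_star

# The actual queues obey the SPN rule: a preferred SA whose input
# queues lack packets is simply skipped for this slot.
for n in range(len(x_star)):
    if x_star[n] == 1 and np.all(Q >= B_in[:, n]):
        Q = Q - B_in[:, n] + B_out[:, n]
Q = Q + A @ a               # arrivals also enter the actual queues
```

With these numbers the back-pressure d = (4, −1) selects x* = (1, 0), so only SA1 fires; q is updated by (3) regardless of whether the actual move succeeds.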
4) Open Problems for SPNs with Random Departure:
Although the SPN with deterministic departure is relatively well understood, those SPN scheduling results cannot be applied to the INC vr-network. The reason is as follows. When a packet is broadcast by the base station, it can arrive at a random subset of receivers with certain probability distributions. Therefore, the vr-packets move among the vr-queues according to some probability distribution. This is not compatible with the deterministic-departure SPN model, in which when an SA is activated we know deterministically β^in_{k,n}(c) and β^out_{k,n}(c), the service rates when the channel quality is cq(t) = c. We call the SPN model that allows random β^in_{k,n}(c) and β^out_{k,n}(c) the SPN with random departure.

SPNs with random departure pose a unique challenge for the scheduling design. [13] provides the following example illustrating this issue. Fig. 4 describes an SPN with 6 transition edges. We assume IA1 is activated at every time slot and α_{1,1} = β^in_{1,1} = β^in_{2,2} = β^in_{3,2} = 1. Namely, for every time t, α_{1,1} = 1 packet will enter Q_1; in every time slot if we activate SA1, β^in_{1,1} = 1 packet will leave Q_1; if we activate SA2, β^in_{2,2} = 1 packet will leave Q_2 and β^in_{3,2} = 1 packet will leave Q_3. We assume these 4 transitions are deterministic but the two transitions SA1 → Q_2 and SA1 → Q_3 are random. Specifically, we assume that there are two possible values of the pair (β^out_{2,1}, β^out_{3,1}): (β^out_{2,1}, β^out_{3,1}) = (1, 0) with probability 0.5 and (β^out_{2,1}, β^out_{3,1}) = (0, 1) with probability 0.5. That is, whenever SA1 is activated, it takes a packet from Q_1, and with probability 0.5 this packet goes to Q_2. Otherwise, this packet goes to Q_3. The random departure of SA1 implies that the queue length difference |Q_2| − |Q_3| forms a binary random walk. Note that SA2 has no impact on |Q_2| − |Q_3| since it always takes 1 packet from each of the queues. The analysis of the random walk shows that |Q_2| − |Q_3| grows unboundedly at rate √t. Hence there is no scheduling algorithm that can stabilize both |Q_2| and |Q_3| simultaneously even though this example satisfies the flow-conservation law (1) in the sense of expectation.

Fig. 5. The virtual network of the proposed new INC solution.

III. THE PROPOSED NEW INC SOLUTION
In Section II, we discussed the limitations of the existing works on the INC block-code design and on the schedulers for SPNs, separately. In this section, we describe our new low-complexity binary INC scheme that achieves the block-code capacity. In Section IV, we present our new scheduler design for the SPN with random departure. In Section V, we will combine the proposed solutions to form the optimal dynamic INC design, see Fig. 2. For the new block-code design in this section, we first describe the encoding steps and then discuss the decoding steps and buffer management.
A. Encoding
The proposed new INC solution is described as follows. We build upon the existing 5 operations: NON-CODING-1, NON-CODING-2, CLASSIC-XOR, DEGENERATE-XOR-1, and DEGENERATE-XOR-2. See Fig. 1(b) and the discussion in Sections I and II-B1. In addition, we add 2 more operations, termed PREMIXING and REACTIVE-CODING, respectively, and 1 new virtual queue, termed Q_mix. We plot the vr-network of the new scheme in Fig. 5. From Fig. 5, we can clearly see that PREMIXING takes both Q^1_∅ and Q^2_∅ as input and outputs to Q_mix. REACTIVE-CODING takes Q_mix as input and outputs to Q^1_{2} or Q^2_{1}, or simply lets the vr-packet leave the vr-network (described by the dotted arrow). For every time instant, we can choose one of the 7 operations and the goal is to stabilize the vr-network. In the following, we describe in detail how these two INC operations work and how to integrate them with the other 5 operations. Our description contains 4 parts.

Part I:
The two operations NON-CODING-1 and NON-CODING-2 remain the same.
Part II:
We now describe the new operation PREMIXING. We can choose PREMIXING only if both Q^1_∅ and Q^2_∅ are non-empty. Namely, there are X_i packets and Y_j packets that have not been heard by either of d1 and d2. Whenever we schedule PREMIXING, we choose one X_i from Q^1_∅ and one Y_j from Q^2_∅ and send [X_i + Y_j]. If neither d1 nor d2 receives it, both X_i and Y_j remain in their original queues. If at least one of {d1, d2} receives it, we do the following. We remove both X_i and Y_j from their individual queues and insert a tuple (rcpt; X_i, Y_j) into Q_mix. That is, unlike the other queues, for which each entry is a single vr-packet, each entry of Q_mix is a tuple.

The first coordinate of (rcpt; X_i, Y_j) is rcpt, the reception status of [X_i + Y_j]. For example, if [X_i + Y_j] was received by d1 but not by d2, then we set/record rcpt = d1d̄2; if [X_i + Y_j] was received by both d1 and d2, then rcpt = d1d2. The second and third coordinates store the participating packets X_i and Y_j, respectively. The reason why we do not store the linear sum directly is the new REACTIVE-CODING operation.

TABLE I: A summary of the REACTIVE-CODING operation.
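A minimal Python sketch of the PREMIXING step just described (our own naming, not code from the paper; p1 and p2 are assumed per-receiver success probabilities for the current slot):

```python
import random

def premixing(Q1_empty, Q2_empty, Q_mix, p1, p2):
    """One PREMIXING slot: send [X_i + Y_j]; d1 (resp. d2) receives it
    with probability p1 (resp. p2), independently. Returns the reception
    status as a pair of booleans (got1, got2)."""
    assert Q1_empty and Q2_empty  # PREMIXING needs both queues non-empty
    x, y = Q1_empty[0], Q2_empty[0]
    got1, got2 = random.random() < p1, random.random() < p2
    if got1 or got2:
        # At least one receiver heard [x + y]: remove both originals and
        # insert the tuple (rcpt; X_i, Y_j) into Q_mix.
        Q1_empty.popleft()
        Q2_empty.popleft()
        Q_mix.append(((got1, got2), x, y))
    # Otherwise both packets simply stay in their original queues.
    return got1, got2
```

Note that the tuple stores the two participating packets rather than the transmitted sum, exactly so that REACTIVE-CODING can later resend either one of them individually.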
Part III:
We now describe the new operation REACTIVE-CODING. At any time t, we can choose REACTIVE-CODING only if there is at least one tuple (rcpt; X_i, Y_j) in Q_mix. Choose one tuple from Q_mix and denote it by (rcpt*; X*_i, Y*_j). We now describe the encoding part of REACTIVE-CODING. Whenever we schedule REACTIVE-CODING, if rcpt* = d1d̄2, send Y*_j; if rcpt* = d̄1d2, send X*_i; if rcpt* = d1d2, send X*_i. One can see that the coding operation depends on the reception status rcpt* when [X*_i + Y*_j] was first transmitted. This is why it is named REACTIVE-CODING.

The movement of the vr-packets depends on the current reception status at time t, denoted by rcpt(t), and also on the old reception status rcpt* when the sum [X*_i + Y*_j] was originally transmitted. The detailed movement rules are described in Table I. The table is interpreted as follows. For example, when rcpt(t) = d̄1d̄2, i.e., neither d1 nor d2 receives the current transmission, we do nothing, i.e., we keep the tuple inside Q_mix. On the other hand, we remove the tuple from Q_mix whenever rcpt(t) ∈ {d1d̄2, d̄1d2, d1d2}. If rcpt(t) = d1d2, then we remove the tuple but do not insert any vr-packet back into the vr-network, see the second-to-last row of Table I. The tuple essentially leaves the vr-network in this case. If rcpt(t) = d̄1d2 and rcpt* = d1d̄2, then we remove the tuple from Q_mix and insert Y*_j into Q^1_{2}. The rest of the combinations can be read from Table I in the same way. One can verify that the optimal INC example introduced in Section II-B1 is a direct application of the PREMIXING and REACTIVE-CODING operations.

Before we continue describing the slight modification to CLASSIC-XOR, DEGENERATE-XOR-1, and DEGENERATE-XOR-2, we briefly explain why the combination of PREMIXING and REACTIVE-CODING works. To facilitate the discussion, we call the time slot in which we use PREMIXING to transmit [X*_i + Y*_j] "slot 1" and the time slot in which we use REACTIVE-CODING "slot 2," even though the two coding operations may not be scheduled in adjacent time slots. Using this notation, if rcpt* = d1d̄2 and rcpt(t) = d1d2, then d1 receives [X*_i + Y*_j] and Y*_j in slots 1 and 2, respectively, and d2 receives Y*_j in slot 2. In this case, d1 can decode the desired X*_i and d2 directly receives the desired Y*_j. We now consider the perspective of the vr-network. Table I shows that the tuple will be removed from Q_mix and leave the vr-network. Therefore, no queue in the vr-network stores either X*_i or Y*_j. This correctly reflects the fact that both X*_i and Y*_j have been received by their intended destinations.

Another example is when rcpt* = d̄1d2 and rcpt(t) = d1d̄2. In this case, d2 receives [X*_i + Y*_j] in slot 1 and d1 receives X*_i in slot 2. From the vr-network's perspective, the movement rule (see Table I) removes the tuple from Q_mix and inserts an X*_i packet into Q^2_{1}. Since a vr-packet is removed from Q_mix (which counts as both a session-1 and a session-2 queue) and inserted into the session-2 queue Q^2_{1}, the total number of vr-packets in the session-1 queues decreases by 1. This correctly reflects the fact that d1 has received 1 desired packet X*_i in slot 2.

An astute reader may wonder why in this example we can put X*_i, a session-1 packet, into a session-2 queue Q^2_{1}. The reason is that whenever d2 receives X*_i in the future, it can recover its desired Y*_j by subtracting X*_i from the linear sum [X*_i + Y*_j] it received in slot 1 (recall that rcpt* = d̄1d2). Therefore, X*_i is now information-equivalent to Y*_j, a session-2 packet. Moreover, d1 has received X*_i. Therefore, in terms of the information it carries, X*_i is no different from a session-2 packet that has been overheard by d1. As a result, it is fit to put X*_i in Q^2_{1}.

Part IV:
We now describe the slight modification to CLASSIC-XOR, DEGENERATE-XOR-1, and DEGENERATE-XOR-2. A unique feature of the new scheme is that some packets in Q^2_{1} may be X*_i packets inserted by REACTIVE-CODING when rcpt* = d̄1d2 and rcpt(t) = d1d̄2. (Similarly, some Q^1_{2} packets may be Y*_j packets.) However, in our previous discussion, we have shown that such an X*_i in Q^2_{1} is information-equivalent to a Y*_j packet overheard by d1. Therefore, in the CLASSIC-XOR operation, we should not insist on sending [X_i + Y_j] but can also send [P1 + P2] as long as P1 is from Q^1_{2} and P2 is from Q^2_{1}. The same relaxation must be applied to the DEGENERATE-XOR-1 and DEGENERATE-XOR-2 operations. Other than this slight relaxation, the three operations work in the same way as previously described in Sections I and II-B1.

As will be seen in Proposition 5 of Section V, the two new operations PREMIXING and REACTIVE-CODING allow us to achieve the linear block-code capacity for any time-varying channels. We conclude this section by listing in Table II the transition probabilities of half of the edges of the vr-network of Fig. 5. For example, when we schedule PREMIXING, we remove a packet from Q^1_∅ if at least one of {d1, d2} receives it. As a result, the transition probability along the Q^1_∅ → PREMIXING edge is p_{d1∨d2} ≜ p_{d1d̄2} + p_{d̄1d2} + p_{d1d2}. All the other transition probabilities in Table II can be derived similarly. The transition probabilities of the other half of the edges can be derived by symmetry. (Recall that Q_mix is regarded as both a session-1 and a session-2 queue simultaneously.)

TABLE II: A summary of the transition probabilities of the virtual network in Fig. 5, where p_{d1∨d2} ≜ p_{d1d̄2} + p_{d̄1d2} + p_{d1d2} and p_{d1} ≜ p_{d1d̄2} + p_{d1d2}; NC1 stands for NON-CODING-1, CX for CLASSIC-XOR, DX1 for DEGENERATE-XOR-1, PM for PREMIXING, and RC for REACTIVE-CODING.

Edge | Trans. Prob. | Edge | Trans. Prob.
Q^1_∅ → NC1 | p_{d1∨d2} | Q^1_∅ → PM | p_{d1∨d2}
NC1 → Q^1_{2} | p_{d̄1d2} | PM → Q_mix | p_{d1∨d2}
Q^1_{2} → DX1 | p_{d1} | Q_mix → RC | p_{d1∨d2}
Q^1_{2} → CX | p_{d1} | RC → Q^1_{2} | p_{d̄1d2}

B. Decoding and Buffer Management at Receivers
It is worth emphasizing that the vr-network is a conceptual tool used by the source s to decide what to transmit in each time slot. As a result, for encoding purposes s only needs to store in its memory/buffer the packets that currently participate in the vr-network. This automatically implies that as long as the queues in the vr-network are stabilized, the actual memory usage at the source is also stabilized. However, for the 1-to-2 access point network to be stable, one needs to ensure that the memory usage at the two receivers is stabilized as well. In this subsection, we discuss the decoding operations and the memory usage at the receivers.

It is clear that each receiver needs to store some packets for decoding purposes. A commonly used assumption in the Shannon-capacity literature is that the receivers store all the overheard packets in order to decode the possible XORed packets sent by the source. No packets are ever removed from the buffer under such a policy. Obviously, such an infinite-buffer scheme is highly impractical.

In the existing INC scheduling works [3], [4], [8], [9], another commonly used buffer management scheme is the following. For any time t, define i* (resp. j*) as the smallest i (resp. j) such that d1 (resp. d2) has not decoded X_i (resp. Y_j) by the end of time t. Then each receiver can simply remove any X_i and Y_j in its buffer for those i < i* and j < j*. The reason is that those X_i and Y_j are already known by their intended receivers, will not participate in any future transmission, and can thus be removed from the receive buffer without any impact on future decoding.

On the other hand, under such a buffer management scheme, the receivers may use significantly more memory than the source, which we observed in our numerical experiments. The reason is as follows. Suppose d1 has decoded X1, X3, X4, ..., X8, and X9, and suppose d2 has decoded Y1 to Y4 and Y6 to Y10. In this case i* = 2 and j* = 5. The aforementioned scheme will keep all X2 to X10 in the buffer of d2 and all Y5 to Y10 in the buffer of d1. But it turns out that the source is interested in sending only 3 more packets: X2, X10, and Y5. This apparent waste of memory is due to the fact that having 3 more packets to send does not mean that we only need to store X2, X10, and Y5 in the buffers of the receivers. For decoding purposes, we need to store extra "overheard" packets that can facilitate decoding in the future. On the other hand, the above buffer management scheme is too conservative and very inefficient, since it does not trace the actual overhearing status of each packet and uses only the simplest i* and j* pair to decide whether to prune the packets in the buffers of the receivers.

In contrast with the above buffer management scheme used in [3], [4], [8], [9], our vr-network scheme admits the following efficient decoding operations and buffer management solution. In the following, we describe the decoding and buffer management at d1. The operations at d2 can be done symmetrically. Our description consists of two parts. We first describe how to perform decoding at d1 and which packets need to be stored in d1's buffer, while assuming that any packet that has been stored in the buffer will never be expunged. In the second part, we describe how to prune the memory usage without affecting the decoding operations.

Upon d1 receiving a packet: Case 1: If the received packet is generated by NON-CODING-1, then such a packet must be X_i for some i. We thus pass such an X_i to the upper layer. Case 2: If the received packet is generated by NON-CODING-2, then such a packet must be Y_j for some j. We store Y_j in the buffer of d1. Case 3: If the received packet is generated by PREMIXING, then such a packet must be [X_i + Y_j]. We store the linear sum [X_i + Y_j] in the buffer.
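Cases 1 to 3 above amount to a small receive handler at d1. The following sketch uses our own illustrative naming and payload encoding (tuples tagged 'X', 'Y', or 'SUM'), which are assumptions for the illustration, not the paper's code:

```python
def d1_receive(pkt_type, payload, buffer, upper_layer):
    """Cases 1-3 of d1's decoding logic.
    payload is ('X', i), ('Y', j), or ('SUM', i, j)."""
    if pkt_type == "NON-CODING-1":
        # Case 1: an uncoded X_i is d1's own packet -- deliver it.
        upper_layer.append(payload)
    elif pkt_type == "NON-CODING-2":
        # Case 2: an overheard uncoded Y_j -- keep it for future decoding.
        buffer.append(payload)
    elif pkt_type == "PREMIXING":
        # Case 3: a linear sum [X_i + Y_j] -- keep the sum for later.
        buffer.append(payload)
```

Cases 4 to 7 below extend this handler with the decoding steps that combine a stored sum with a newly received packet.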
The aforementionedscheme will keep all X to X in the buffer of d and all Y to Y in the buffer of d . But it turns out that the sourceis interested in only sending 3 more packets X , X , and Y .This apparent waste of memory is due to the fact that having3 more packets to send does not mean that we only need tostore X , X and Y in the buffer of the receivers . For thedecoding purposes, we need to store extra “overheard” packetsthat can facilitate decoding in the future. But on the other hand,the above buffer management scheme is too conservative and very inefficient since it does not trace the actual overhearingstatus of each packet and only use the simplest i ∗ and j ∗ pairto decide whether to prune the packets in the buffers of thereceivers.In contrast with the above buffer management scheme usedin [3], [4], [8], [9], our vr-network scheme admits the fol-lowing efficient decoding operations and buffer managementsolution. In the following, we describe the decoding andbuffer management at d . The operations at d can be donesymmetrically. Our description consists of two parts. We firstdescribe how to perform decoding at d and which packetsneed to be stored in d ’s buffer, while assuming that anypackets that have been stored in the buffer will never beexpunged. In the second part, we describe how to prune thememory usage without affecting the decoding operations. Upon d receiving a packet: Case 1: If the received packetis generated by N ON -C ODING -1, then such a packet must be X i for some i . We thus pass such an X i to the upper layer;Case 2: If the received packet is generated by N ON -C ODING -2, then such a packet must be Y j for some j . We store Y j inthe buffer of d ; Case 3: If the received packet is generatedby P REMIXING , then such a packet must be [ X i + Y j ] . Westore the linear sum [ X i + Y j ] in the buffer. 
Case 4: If thereceived packet is generated by R EACTIVE C ODING , thensuch a packet can be either X ∗ i or Y ∗ j , see Table I for detaileddescriptions of R EACTIVE -C ODING .We have two sub-cases in this scenario. Case 4.1: If thepacket is X ∗ i , we pass such an X ∗ i to the upper layer. Then d examines whether it has stored [ X ∗ i + Y ∗ j ] in its buffer.If so, use X ∗ i to decode Y ∗ j and insert Y ∗ j to the buffer. Ifnot, store a separate copy of X ∗ i in the buffer even thoughone copy of X ∗ i has already been passed to the upper layer.Case 4.2: If the packet is Y ∗ j , then by Table I, it is clearthat d must have received the linear sum [ X ∗ i + Y ∗ j ] in thecorresponding P REMIXING operation in the past. Therefore, [ X ∗ i + Y ∗ j ] must be in the buffer of d already. We can thususe Y ∗ j and [ X ∗ i + Y ∗ j ] to decode the desired X ∗ i . Receiver d then passes the decoded X ∗ i to the upper layer and stores Y ∗ j in its buffer.Case 5: If the received packet is generated by D EGENERATE
XOR-1, then such a packet can be either X i or Y j , where Y j are those packets in Q { } but coming from R EACTIVE C ODING , see Fig. 5. Case 5.1: If the packet is X i , we passsuch an X i to the upper layer. Case 5.2: If the packet is Y j ,then from Table I, it must be corresponding to the intersectionof the row of rcpt = d d and the column of rcpt ∗ = d d .As a result, d must have received the corresponding [ X i + Y j ] in the P REMIXING operation. By Case 3, the linear sum hasbeen stored in the buffer, and d can thus use the received Y j to decode the desired X i . After decoding, X i is passed to theupper layer.Case 6: the received packet is generated by D EGENERATE
XOR-2. Consider two subcases. Case 6.1: the received packetis X i . It is clear from Fig. 5 that such X i must come fromR EACTIVE -C ODING since any packet from Q ∅ to Q { } mustbe a Y j packet. By Table I and the row corresponding to rcpt = d d , any X i ∈ Q { } that came from R EACTIVE -C ODING must correspond to the column of rcpt ∗ = d d . By the second half of Case 4.1, such X i ∈ Q { } must be in the buffer of d .As a result, d can simply ignore any X i packet it receivesfrom D EGENERATE
XOR-2. Case 6.2: the received packet is Y j . By the discussion of Case 2, if the Y j ∈ Q { } came fromN ON -C ODING -2, then it must be in the buffer of d already.As a result, d can simply ignore those Y j packets. If the Y j ∈ Q { } came from R EACTIVE -C ODING , then by Table Iand the row corresponding to rcpt = d d , those Y j ∈ Q { } must correspond to the column of either rcpt ∗ = d d or rcpt ∗ = d d . By the first half of Case 4.1 and by Case 4.2,such Y j ∈ Q { } must be in the buffer of d already. Again, d can simply ignore those Y j packets. From the discussionof Cases 6.1 and 6.2, any packet generated by D EGENERATE
XOR-2 is already known to d , and nothing needs to be donein this case. Case 7: the received packet is generated by C
LASSIC -XOR.Since we have shown in Case 6 that any packet in Q { } isalready known to d , receiver d can simply subtract the Q { } packet from the linear sum received in Case 7. As a result,from d ’s perspective, it is no different than directly receivinga Q { } packet, i.e., Case 5. As a result, d will repeat thedecoding operation and buffer management in the same wayas in Case 5. Periodically pruning the memory:
In the above discussion, we elaborated which packets d1 should store in its buffer and how to use them for decoding, while assuming that no packet is ever removed from the buffer. In the following, we discuss how to remove packets from the buffer of d1.

We first notice that, by the discussion of Cases 1 to 7, the uncoded packets in the buffer of d1, i.e., those of the form of either X_i or Y_j, are used for decoding only in the scenario of Case 7. Namely, they are used to remove the Q^2_{1} packet participating in the linear sum of CLASSIC-XOR. As a result, periodically we let the source s send to d1 the list of all packets in Q^2_{1} of the vr-network. After receiving the list, d1 simply removes from its buffer any uncoded packets X_i and/or Y_j that are no longer in Q^2_{1}.

We then notice that, by the discussion of Cases 1 to 7, a linear sum [X_i + Y_j] in the buffer of d1 is used only in one of the following two scenarios: (i) to decode Y_j in Case 4.1 or to decode X_i in Case 4.2; and (ii) to decode X_i in Case 5.2. As a result, the [X_i + Y_j] in the buffer is "useful" only if one of the following two conditions is satisfied: (a) the corresponding tuple (rcpt; X_i, Y_j) is still in the Q_mix of the vr-network, which corresponds to the scenarios of Cases 4.1 and 4.2; or (b) the participating Y_j is still in the Q^1_{2} of the vr-network. By the above observation, periodically we let the source s send to d1 the lists of all packets in Q^1_{2} and Q_mix of the vr-network. After receiving the lists, d1 simply removes from its buffer any linear sum [X_i + Y_j] that satisfies neither (a) nor (b).

The above pruning mechanism ensures that only the packets useful for future decoding are kept in the buffers of d1 and d2. Furthermore, it also leads to the following lemma.

(Footnote: The discussion of Cases 5 and 6 echoes our arguments at the end of Section III-A (Encoding) that any packet in Q^2_{1} (which can be either X_i or Y_j) is information-equivalent to a session-2 packet that has been overheard by d1. Also, only the packet IDs are sent, not the payload; therefore the overhead of sending the lists is small. Moreover, we only need to send the "incremental changes" of the lists, and d1 can update the lists by itself. In this way, the overhead of sending the lists can be made negligible.)

Lemma 1:
Assume the lists of packets in Q^1_{2}, Q^2_{1}, and Q_mix are sent to d1 after every time slot. Then the number of packets in the buffer of d1 is upper bounded by |Q^1_{2}| + |Q^2_{1}| + |Q_mix|.

Proof: From our discussion, the total number of uncoded packets X_i or Y_j in the buffer of d1 is upper bounded by |Q^2_{1}|. Also, the total number of linear sums [X_i + Y_j] in the buffer of d1 is upper bounded by |Q_mix| plus the number of Y_j packets in Q^1_{2}, which is further bounded by |Q_mix| + |Q^1_{2}|. As a result, the total number of packets in the buffer of d1 is upper bounded by |Q^1_{2}| + |Q^2_{1}| + |Q_mix|. ∎

Lemma 1 implies that as long as the queues in the vr-network are stabilized, the actual memory usage at both the source and the destinations is stabilized simultaneously. Moreover, the combined memory usage of the source and the 2 receivers is upper bounded by |Q^1_∅| + |Q^2_∅| + 3|Q^1_{2}| + 3|Q^2_{1}| + 3|Q_mix| in the vr-network.

Remark:
In addition to efficient decoding and buffer management, we notice that the proposed INC scheme uses only the binary XOR, and each transmitted packet is either an uncoded packet or a linear sum of two packets. Therefore, during transmission we only need to store 1 or 2 packet sequence numbers in the header of the uncoded/coded packet, depending on whether we send an uncoded packet or a linear sum. As a result, the communication overhead of the proposed scheme is very small.

IV. THE PROPOSED SCHEDULING SOLUTION
In this section, we first formalize the model of SPNs with random departure, and then propose a new scheme that achieves the optimal throughput region for SPNs with random departure. We conclude this section with the corresponding stability/throughput analysis.
A. A Simple SPN model with Random Departure
Although our solution applies to general SPNs with random departure, for illustration purposes we describe our scheme by focusing on a simple SPN model with random departure, which we term the (0,1) random SPN. The (0,1) random SPN includes the INC vr-network in Section III as a special example and is thus sufficient for our discussion.

Recall the definitions in Section II-C2 for SPNs with deterministic departure (we use "deterministic SPNs" as shorthand). The differences between the (0,1) random SPN and the deterministic SPN are:

Difference 1: In a deterministic SPN, SA n can be activated only if every input queue k ∈ I_n has at least β^in_{k,n} packets. For comparison, in a (0,1) random SPN, SA n can be activated only if every queue k ∈ I_n has at least 1 packet. For easier future reference, we say SA n is feasible at time t if at time t queue k has at least 1 packet for all k ∈ I_n. Otherwise, we say SA n is infeasible at time t.

(Footnote: One can see that both d1 and d2 need to receive the lists of packets in Q^1_{2}, Q^2_{1}, and Q_mix. Therefore, s can simply broadcast (the changes of) the three lists to both d1 and d2.)

Difference 2: In a deterministic SPN, when SA n is activated under channel quality c, exactly β^in_{k,n}(c) packets leave queue k for all k ∈ I_n. In a (0,1) random SPN, when SA n is activated under channel quality c (assuming SA n is feasible), the number of packets leaving queue k is a binary random variable β^in_{k,n}(c) with mean β̄^in_{k,n}(c) for all k ∈ I_n. Namely, with probability β̄^in_{k,n}(c), 1 packet leaves queue k, and with probability 1 − β̄^in_{k,n}(c) no packet leaves queue k. Since the packet consumption is Bernoulli, in a (0,1) random SPN it is possible that an SA consumes zero packets even after being activated.
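Differences 1 to 3 can be sketched as a single activation routine. The following toy Python function (our own naming, purely illustrative) activates an SA only when every input queue is non-empty and then applies independent Bernoulli consumption and production:

```python
import random

def activate_sa(queues, in_probs, out_probs):
    """Sketch of a (0,1) random SPN service activity.
    queues: dict queue_id -> packet count.
    in_probs/out_probs: dict queue_id -> mean beta for the current
    channel quality c (input and output sides, respectively)."""
    if any(queues[k] < 1 for k in in_probs):
        return False  # SA is infeasible: every input queue needs >= 1 packet
    for k, beta in in_probs.items():
        # Bernoulli(beta) consumption: the SA may consume zero packets
        # overall even though it was activated.
        if random.random() < beta:
            queues[k] -= 1
    for k, beta in out_probs.items():
        # Bernoulli(beta) production into each output queue.
        if random.random() < beta:
            queues[k] = queues.get(k, 0) + 1
    return True
```

This contrasts with a deterministic SPN, where activation would consume exactly β^in_{k,n}(c) packets from each input queue with certainty.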
However, since we do not know beforehand how many packets will be consumed, the (0,1) random SPN requires that all the input queues have at least 1 packet before we can activate an SA, even though, when we actually activate the SA, it sometimes consumes zero packets. For comparison, in a deterministic SPN, an SA n is feasible if all its input queues have at least β^in_{k,n}(cq(t)) packets, and it always consumes exactly β^in_{k,n}(cq(t)) packets from its input queues once activated (see Difference 1).

Difference 3: In a (0,1) random SPN, when SA n is activated under channel quality c (assuming SA n is feasible), the number of packets entering queue k is a binary random variable with mean β̄^out_{k,n}(c) for all k ∈ O_n.

We also make the following 3 technical assumptions for the (0,1) random SPN. Assumption 1: Given any channel quality c ∈ CQ, both the input and output service matrices B^in and B^out are independently distributed over time. Assumption 2: Each vector in the set of possible service vectors X can have at most 1 non-zero coordinate. Namely, we can activate at most one service activity (out of the N SAs in total) at any given time. Assumption 3: For any cq(t), the expectation of β^in_{k,n}(cq(t)) (resp. β^out_{k,n}(cq(t))) with k ∈ I_n (resp. k ∈ O_n) is always strictly within (0, 1]. Namely, we do not consider the limiting case in which the Bernoulli random variables are always 0. Assumption 1 reflects the practical scenarios of interest; Assumptions 2 and 3 are needed for rigorously proving the stability region.

One can easily verify that the three INC vr-networks in Figs. 1(a), 1(b), and 5 are special examples of the (0,1) random SPN and that they satisfy the 3 technical assumptions as well.

B. The Proposed Scheduler For (0,1) Random SPNs
Similar to the DMW algorithm, each queue k maintains a real-valued counter q_k(t), the virtual queue length. Initially, q_k(1) is set to 0. At any time t, the realization of each entry of the input and output service matrices B^in and B^out takes values in {0, 1}, since we are focusing on a (0,1) random SPN. We compute B̄^in(cq(t)) ≜ E(B^in | cq(t)) and B̄^out(cq(t)) ≜ E(B^out | cq(t)), the expected input and output service matrices, respectively, when the channel quality is cq(t). The entries of B̄^in(cq(t)) and B̄^out(cq(t)) are denoted by β̄^in_{k,n}(cq(t)) and β̄^out_{k,n}(cq(t)), respectively. Obviously, by definition, the expected input and output service rates are non-negative.

For each time t, we choose the preferred service vector by the back-pressure decision rule (2), except that the back-pressure vector d(t) is now computed by

  d(t) = (B̄^in(cq(t)) − B̄^out(cq(t)))^T q(t).   (4)

We use the new back-pressure vector d(t) plus (2) to find the preferred SA n*, i.e., all the coordinates of x* are zero except for the n*-th coordinate being one. We then check whether the preferred SA n* is feasible. If so, we officially schedule SA n*. If not, we let the system be idle, i.e., the actually scheduled service vector x(t) = 0 is all-zero. Regardless of whether the preferred SA n* is feasible or not, we update q(t) by

  q(t + 1) = q(t) + A·a(t) + (B̄^out(cq(t)) − B̄^in(cq(t)))·x*(t).   (5)

Note that q(t) can sometimes take negative values, since we do not project q(t) onto the non-negative reals.

In short, we borrow the wisdom of DMW so that we can make scheduling decisions based on the virtual queue lengths q_k(t), which can take negative values. But we then update q_k(t) only by the expected service rates rather than the actual service rates, since we are dealing with a random SPN instead of a deterministic SPN.
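One slot of the rule just described can be sketched in pure Python (our own naming; `Aa` stands for the precomputed arrival term A·a(t), and `feasible` is our simplified stand-in for the feasibility check of Difference 1):

```python
def sch_avg_step(q, Aa, B_in_bar, B_out_bar, feasible):
    """One slot of the SCH_avg sketch.
    q: virtual queue lengths (may go negative);
    B_in_bar/B_out_bar: K-by-N expected service matrices for cq(t);
    feasible: length-N booleans. Implements (4) and (5)."""
    K, N = len(q), len(B_in_bar[0])
    # Back-pressure vector d(t) = (B_in_bar - B_out_bar)^T q(t), eq. (4).
    d = [sum((B_in_bar[k][n] - B_out_bar[k][n]) * q[k] for k in range(K))
         for n in range(N)]
    n_star = max(range(N), key=lambda n: d[n])   # preferred SA
    scheduled = bool(feasible[n_star])            # idle if infeasible
    # Virtual queues always follow the EXPECTED rates, eq. (5),
    # regardless of whether the preferred SA was actually feasible.
    q_next = [q[k] + Aa[k] + (B_out_bar[k][n_star] - B_in_bar[k][n_star])
              for k in range(K)]
    return n_star, scheduled, q_next
```

Note how the update deliberately uses the expected matrices: the actual Bernoulli realizations of the slot never enter the virtual-queue dynamics.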
For notational simplicity, we denote the proposed scheduler for (0,1) random SPNs by SCH_avg.

C. Performance Analysis
The example in Section II-C4 shows that one challenge of the SPN with random departure is that Q_k(t) may grow unboundedly (sublinearly) even when the expected flow-conservation law in (1) is satisfied. In this work, we prove that the sublinearly growing queues in the example of Section II-C4 are actually the worst possible case that can happen. Namely, for SPNs with random departure, we can always find an algorithm such that all queue lengths grow sublinearly whenever the input rates are within the optimal stability region.

Note that from a throughput perspective, sublinear growth means that the throughput penalty incurred by the growing queues is negligible, since the throughput is the average number of packet arrivals per second and only the linear terms matter in the long run. Moreover, for any scheme A that achieves sublinearly growing queues, it is likely (without any rigorous proof) that we can convert it into a bounded-queue scheme by: (i) running scheme A until any of the sublinearly growing queue lengths hits some pre-defined threshold; (ii) stopping scheme A and running a naive scheme B that focuses on "draining" the queues of the network; (iii) while running scheme B, putting any newly arriving packets into a separate buffer; and (iv) after scheme B successfully drains all the queues, running scheme A again while gradually injecting the packets collected in the separate buffer back into the system. The above 4 steps guarantee that the queue lengths are bounded. Heuristically, they also approach the optimal throughput: since the queues grow sublinearly, the penalty of running the draining-stage scheme B should also be negligible when a sufficiently large threshold is chosen in Step (i).

(Footnote: The reason for letting the system idle is to facilitate rigorous stability analysis. In practice, we can arbitrarily choose any other feasible SA at that moment.)
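The heuristic 4-step conversion can be sketched as a toy state machine. The following is only an illustration of steps (i)-(iv), with the whole network collapsed into a single FIFO list (our simplification); it carries no guarantee beyond what the text above claims heuristically:

```python
def bounded_queue_wrapper(step_A, step_B, arrivals, threshold, T):
    """Run scheme A until the queue exceeds `threshold` (step i), then
    drain with scheme B (step ii) while diverting arrivals to a side
    buffer (step iii); once drained, re-inject and resume A (step iv)."""
    queue, side, mode = [], [], "A"
    for t in range(T):
        pkt = arrivals(t)                  # None if no arrival this slot
        if mode == "A":
            if pkt is not None:
                queue.append(pkt)
            step_A(queue)                  # scheme A serves the queue
            if len(queue) > threshold:     # step (i): threshold crossed
                mode = "B"
        else:
            if pkt is not None:
                side.append(pkt)           # step (iii): buffer arrivals
            step_B(queue)                  # step (ii): drain
            if not queue:                  # step (iv): re-inject, resume A
                queue.extend(side)
                side.clear()
                mode = "A"
    return queue, side
```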
The following analysis is based on theconcept of sublinearly growing queue lengths.
Definition 2:
A queue length q(t) grows sublinearly if for any ε > 0 and δ > 0, there exists t0 such that

  Prob(|q(t)| > εt) < δ,  ∀ t > t0.   (6)

Since we assume that the input activities a(t) have bounded support, an equivalent definition of sublinear growth is: q(t) grows sublinearly if for any ρ > 0 there exists t0 such that

  E{|q(t)|} < ρt,  ∀ t > t0.   (7)

An SPN is sublinearly stable if all the queues grow sublinearly.

Remark:
As a result of the above definition, one can observe that the summation of finitely many sublinearly growing queues is still sublinearly growing. The following two propositions characterize the sublinear stability region of any (0,1) random SPN. Proposition 3 specifies an outer bound of the stability region, and Proposition 4 specifies an inner bound.
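Both propositions revolve around the flow-conservation condition (8) below, in which external arrivals plus expected production balance expected consumption at every queue. That balance can be checked numerically; the following pure-Python check uses a made-up 2-queue, 2-SA, 2-channel-state SPN (all numbers are ours, purely illustrative; each s_c lies in the time-sharing region since its coordinates sum to at most 1):

```python
# Toy instance: A·R + sum_c f_c B_out(c) s_c == sum_c f_c B_in(c) s_c.
A = [[1.0], [0.0]]                     # one input activity feeding queue 1
R = [0.2]                              # arrival rate vector
f = [0.5, 0.5]                         # channel-state frequencies
B_in  = [[[0.8, 0.0], [0.0, 0.5]],     # expected input matrices, c = 0, 1
         [[0.4, 0.0], [0.0, 0.5]]]
B_out = [[[0.0, 0.0], [0.8, 0.0]],     # expected output matrices, c = 0, 1
         [[0.0, 0.0], [0.4, 0.0]]]
s = [[0.25, 0.4], [0.5, 0.4]]          # per-state service vectors s_c

def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

lhs = [sum(A[k][m] * R[m] for m in range(len(R)))
       + sum(f[c] * mat_vec(B_out[c], s[c])[k] for c in range(2))
       for k in range(2)]
rhs = [sum(f[c] * mat_vec(B_in[c], s[c])[k] for c in range(2))
       for k in range(2)]
assert all(abs(l - r) < 1e-9 for l, r in zip(lhs, rhs))  # condition holds
```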
Proposition 3:
Consider any (0,1) random SPN. A rate vector R can be sublinearly stabilized only if there exist s_c ∈ Λ for all c ∈ CQ such that

  A·R + Σ_{c∈CQ} f_c · B̄^out(c) · s_c = Σ_{c∈CQ} f_c · B̄^in(c) · s_c.   (8)

Proposition 3 can be derived by conventional flow-conservation arguments as in [13], and the proof is thus omitted.

Proposition 4:
For any SPN that satisfies the three assumptions in Section IV-A and any rate vector R, if there exist s_c ∈ Λ° for all c ∈ CQ such that (8) holds, then the proposed scheme SCH_avg in Section IV-B can sublinearly stabilize the SPN with arrival rate R.

Outline of the proof of Proposition 4:
Let each queue k keep another two real-valued counters q^inter_k(t) and Q^inter_k(t), termed the intermediate virtual queue length and the intermediate actual queue length. Recall that q_k(t) is the virtual queue length and Q_k(t) is the actual queue length. There are thus 4 different queue-length values for each queue k. To prove that Q(t) can be sublinearly stabilized by SCH_avg, we will show that both Q^inter_k(t) and the absolute difference |Q_k(t) − Q^inter_k(t)| can be sublinearly stabilized by SCH_avg for all k. Since the summation of sublinearly growing random processes is still sublinearly growing, Q(t) can be sublinearly stabilized by SCH_avg, and we will thus have proven Proposition 4.

To that end, we first specify the update rules for q^inter_k(t) and Q^inter_k(t). Initially, q^inter_k(1) and Q^inter_k(1) are set to 0 for all k. At the end of each time t, we compute q^inter(t+1) using the preferred schedule x*(t) chosen by SCH_avg:

  q^inter(t + 1) = q^inter(t) + A·a(t) + (B^out(cq(t)) − B^in(cq(t)))·x*(t).   (9)

(Footnote: q^inter_k(t) and Q^inter_k(t) are used only for the proof and are not needed when running the scheduling algorithm.)

If we compare (9) with the computation of q(t) in (5), q^inter(t) is updated based on the realizations of the input and output service matrices, while q(t) is updated based on the expected input and output service matrices. Equivalently, we can rewrite (9) as

  q^inter_k(t + 1) = q^inter_k(t) − μ_out,k(t) + μ_in,k(t),  ∀ k,   (10)

where

  μ_out,k(t) = Σ_{n=1}^{N} β^in_{k,n}(cq(t)) · x*_n(t),   (11)

  μ_in,k(t) = Σ_{m=1}^{M} α_{k,m} · a_m(t) + Σ_{n=1}^{N} β^out_{k,n}(cq(t)) · x*_n(t).   (12)

Here, μ_out,k is the number of packets coming out of queue k, which is decided by the input rates of SA n. Similarly, μ_in,k is the number of packets entering queue k, which is decided by the output rates of SA n. We also update Q^inter(t+1) by

  Q^inter_k(t + 1) = (Q^inter_k(t) − μ_out,k(t))^+ + μ_in,k(t),  ∀ k,   (13)

where (v)^+ = max{0, v}. The difference between q^inter_k(t) and Q^inter_k(t) is that the former can still be strictly negative when updated via (10), while we enforce the latter to be non-negative.

To compare Q^inter_k(t) and Q_k(t), we observe that by (13), Q^inter_k(t) is updated purely by the preferred service vector x*(t), without considering whether the preferred SA n* is feasible or not (see Difference 1 in Section IV-A). That is, when SA n* is infeasible, SA n* cannot be carried out successfully. Therefore, the system remains idle, and the actual queue length satisfies Q_k(t+1) = Q_k(t) for all k = 1 to K, or Q_k(t) increases if there are external arrivals at queue k. In contrast, even though SA n* cannot be carried out successfully, we still update Q^inter_k(t+1) by (11) to (13) for every queue k. As a result, the Q^inter_k(t) values will still change for those k ∈ I_{n*} ∪ O_{n*}.

To evaluate the absolute difference |Q_k(t) − Q^inter_k(t)|, for any time t and any queue k, we first define an event called the null activity of queue k at time t. Since we assume that at any time t only one SA can be scheduled, we use n(t) to denote the preferred SA suggested by the back-pressure scheduler in (2) and (4). At time t, we say the null activity occurs at queue k if (i) k ∈ I_{n(t)} and (ii) Q^inter_k(t) < β^in_{k,n(t)}(cq(t)). That is, the null activity describes the event that the preferred SA should consume packets from queue k (since k ∈ I_{n(t)}) but the intermediate actual queue length Q^inter_k(t) is less than the realization β^in_{k,n(t)}(cq(t)).
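The update rules (10) and (13) differ only in the clipping at zero, which a short Python sketch makes explicit (our own naming; the vectors are per-queue lists for one time slot):

```python
def inter_update(q_inter, Q_inter, mu_out, mu_in):
    """One-slot sketch of the two intermediate updates:
    - eq. (10): the intermediate virtual queue may go negative;
    - eq. (13): the intermediate actual queue is clipped at zero
      before the inflow mu_in is added."""
    q_next = [q - o + i
              for q, o, i in zip(q_inter, mu_out, mu_in)]        # (10)
    Q_next = [max(0.0, Q - o) + i
              for Q, o, i in zip(Q_inter, mu_out, mu_in)]        # (13)
    return q_next, Q_next
```

The gap between the two trajectories is exactly what the null-activity counter N_NA,k(t) tracks, and Lemma 2 below bounds that gap by a weighted sum of the null-activity counts.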
Note that the null activity is defined based on the intermediate actual queue length Q^inter_k(t) and does not distinguish whether the actual queue length Q_k(t) is larger or smaller than 1. Therefore, the null activities are not directly related to the event that SA n is infeasible.

(Footnote: In the original DMW algorithm for deterministic SPNs [13], the quantity "actual queue length" is updated by (13). The "actual queue lengths in [13]" thus refer to a conceptual register value Q^inter_k(t) rather than the number of physical packets in the buffer/queue. In this work, we rectify this inconsistency by renaming "the actual queue lengths in [13]" the "intermediate actual queue lengths Q^inter_k(t).")

Let N_NA,k(t) be the aggregate number of null activities that have occurred at queue k up to time t. Then we can write N_NA,k(t) as

  N_NA,k(t) ≜ Σ_{τ=1}^{t} 1(k ∈ I_{n(τ)}) · 1(Q^inter_k(τ) < β^in_{k,n(τ)}(cq(τ))),

where 1(·) is the indicator function. The following lemma upper bounds the difference between Q_k(t) and Q^inter_k(t) by the aggregate numbers of null activities.

Lemma 2:
For all k = 1, 2, ..., K, there exist K non-negative coefficients γ_1, ..., γ_K such that

  E(|Q_k(t) − Q^inter_k(t)|) ≤ Σ_{k̃=1}^{K} γ_{k̃} · N_NA,k̃(t)   (14)

for all t = 1 to ∞. The proof of Lemma 2 is relegated to Appendix A. In Appendix D, we prove that both Q^inter_k(t) and N_NA,k(t) of a (0,1) random SPN can be sublinearly stabilized by SCH_avg for all k. Therefore, by Lemma 2, Q^inter_k(t) and |Q_k(t) − Q^inter_k(t)| can be sublinearly stabilized, and so can Q_k(t). Proposition 4 is thus proven.

V. THE COMBINED SOLUTION
We are now ready to combine the discussions of Sections III and IV. As discussed in Section III, the 7 operations form a vr-network as described in Fig. 5, and the source and the two receivers perform encoding and decoding, respectively, according to the packet movements in the vr-network. Specifically, there are K = 5 queues, M = 2 IAs, and N = 7 SAs. The 5-by-2 input matrix A contains 2 ones, since the packets arrive at either Q_∅1 or Q_∅2. Given the channel quality cq(t) = c, the expected input and output service matrices B_in(c) and B_out(c) can be derived from Table II.

We use the following concrete example to illustrate our procedure. Suppose that the channel quality cq(t) is Bernoulli with parameter 1/2 (i.e., flipping a perfect coin). Also suppose that when cq(t) = 0, with probability 0.5 (resp. 0.7) destination d1 (resp. destination d2) successfully receives a packet transmitted by source s; and when cq(t) = 1, with probability 1/3 (resp. 2/3) destination d1 (resp. destination d2) successfully receives a packet transmitted by source s. Further assume that all the success events of d1 and d2 are independent. Please also see Appendix E for further details on the matrix construction.

[Footnote: The DMW algorithm for SPNs was first introduced in [13]. However, in that paper the authors name the intermediate actual queue lengths defined in (13) of this paper the actual queue lengths, and prove that Q^inter_k(t) can be stabilized for deterministic SPNs. Proving that Q^inter_k(t) can be stabilized, however, does not necessarily mean that Q_k(t) can be stabilized, as discussed in the paragraphs after (13). The critical step of proving that |Q_k(t) − Q^inter_k(t)| is stabilized is unfortunately missing in [13]. One contribution of this work is to provide in Lemma 2 the first rigorous proof showing that |Q_k(t) − Q^inter_k(t)| can be stabilized as well.]

If we order the 5 queues as ⟨Q_∅1, Q_∅2, Q_{1}, Q_{2}, Q_mix⟩ and the 7 service activities as
[NC1, NC2, DX1, DX2, PM, RC, CX], then the input matrix of the SPN becomes

A^T = ( 1 0 0 0 0 ; 0 1 0 0 0 ),

i.e., the two IAs feed Q_∅1 and Q_∅2, and the service vector is s_c = [x^[c]_NC1, x^[c]_NC2, x^[c]_DX1, x^[c]_DX2, x^[c]_PM, x^[c]_RC, x^[c]_CX]^T. The entries of the 5-by-7 matrices B_in(0), B_in(1), B_out(0), and B_out(1) follow directly from Table II and the above success probabilities. For example, the seventh column of B_in(0) indicates that when cq(t) = 0 and CLASSIC-XOR is activated, with probability 0.5 (resp. 0.7) 1 packet will be consumed from queue Q_{2} (resp. Q_{1}). The third row of B_out(1) indicates that when cq(t) = 1, the third queue will increase by 1 with probability 1/9 if coding choice NON-CODING-1 (resp. REACTIVE-CODING) is activated, since 1/9 is the probability that d1 receives the transmitted packet but d2 does not.

Since there are 7 coding operations (SAs), each vector in X is a 7-dimensional binary vector. Since we are allowed to choose any one of the 7 operations or to transmit nothing, 7 of the 8 vectors in X are the Dirac delta vectors and the remaining one is the all-zero vector. We can now use the proposed DMW scheduler in (2), (4), and (5) to compute the preferred scheduling decision. We activate the preferred decision if it is feasible; if not, the system remains idle.

For general channel parameters (including but not limited to this simple example), after computing the B_in(c) and B_out(c) of the vr-network in Fig. 5 with the help of Table II, we can explicitly compare the sublinear stability region in Propositions 3 and 4 with the Shannon capacity region in [15]. In the end, we have the following proposition. Proposition 5:
The sublinear stability region of the proposed INC-plus-SPN-scheduling scheme matches the block-code capacity of time-varying channels.

The detailed proof of Proposition 5 is provided in Appendix E.
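Two pieces of the construction above are easy to sanity-check in code: the reception-event probabilities from which B_in(c) and B_out(c) are assembled, and the argmax that picks the preferred service vector. A minimal sketch follows (helper names are ours; the B_in/B_out below are random placeholders of the right shapes, not the true Table II entries):

```python
import numpy as np

# Step 1: reception-event probabilities, assuming independent successes with
# (p_d1, p_d2) = (0.5, 0.7) when cq = 0 and (1/3, 2/3) when cq = 1, as in
# the running example.
def reception_events(p1, p2):
    return np.array([(1 - p1) * (1 - p2),   # neither destination hears
                     p1 * (1 - p2),         # d1 only
                     (1 - p1) * p2,         # d2 only
                     p1 * p2])              # both hear

ev = {0: reception_events(0.5, 0.7), 1: reception_events(1 / 3, 2 / 3)}
# e.g. 1 - ev[0][0] is 0.85, the probability an uncoded packet is heard by
# someone, and ev[1][1] is 1/9, matching the B_out(1) example above.

# Step 2: the DMW preferred decision, i.e., the argmax over the 8 candidate
# service vectors of the back-pressure d(t) = (B_in - B_out)^T q(t).
rng = np.random.default_rng(0)
K, N = 5, 7
q = rng.normal(size=K)                     # virtual queue lengths q(t)
B_in = rng.uniform(size=(K, N))            # placeholder expected input matrix
B_out = rng.uniform(size=(K, N))           # placeholder expected output matrix

d = (B_in - B_out).T @ q                   # back-pressure vector
X = np.vstack([np.eye(N), np.zeros(N)])    # 7 delta vectors + all-zero vector
x_star = X[int(np.argmax(X @ d))]          # preferred service vector
print(x_star)
```

Because the all-zero vector (transmit nothing) is always a candidate, the maximized back-pressure value is never negative, which is why the scheduler never prefers a strictly "harmful" operation.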
Remark:
During numerical simulations, we noticed that we can further revise the proposed scheme to noticeably reduce the actual queue lengths Q_k(t), even though we do not have any rigorous proofs/performance guarantees for the revised scheme. That is, when making the scheduling decision by (2), we can compute d(t) by

d(t) = (B_in(cq(t)) − B_out(cq(t)))^T q^inter(t), (15)

where q^inter(t) is the intermediate virtual queue length defined in (10) of Section IV-C. The intuition is that the new back-pressure in (15) allows the scheme to directly control q^inter_k(t), which, when compared to the virtual queue q(t) in (5), is more closely related to the actual queue length Q_k(t).

A. Extensions for Rate Adaptation
We close this section by noting that the proposed solution can be naturally extended to the case of rate adaptation, also known as adaptive coding and modulation. For illustration purposes, we consider the following simple example of an adaptive coding and modulation scheme.

Consider 2 possible error-correcting rates (1/2 and 3/4) and 2 possible modulation schemes (QPSK and 16QAM); jointly there are 4 possible combinations. The lowest-throughput combination is rate-1/2 plus QPSK and the highest-throughput combination is rate-3/4 plus 16QAM. Assume the packet size is fixed. If the highest-throughput combination takes 1 unit of time to finish sending 1 packet, then the lowest-throughput combination takes 3 units of time. For these 4 possible (rate, modulation) combinations, we denote the unit-time to finish transmitting 1 packet by T_1 to T_4, respectively.

For the i-th (rate, modulation) combination, i = 1 to 4, source s can measure the probability that d1 and/or d2 successfully hears the transmission, and denote the corresponding probability vector by ~p^(i). Source s then uses ~p^(i) to compute the B_in,(i)(cq(t)) and B_out,(i)(cq(t)) of the vr-network. It then computes the back-pressure by

d^(i)(t) = (B_in,(i)(cq(t)) − B_out,(i)(cq(t)))^T q(t).

We can now compute the preferred scheduling choice by

arg max_{i ∈ {1,2,3,4}, x ∈ X} d^(i)(t)^T · x / T_i (16)

and update the virtual queue length q(t) by (5). Namely, the back-pressure d^(i)(t)^T · x is scaled inversely proportionally to T_i, the time it takes to finish the transmission of 1 packet. If the preferred SA n* is feasible, then we use the i-th (rate, modulation) combination plus the coding choice n* for the current transmission.
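The joint selection in (16) can be sketched as follows. The matrices and the intermediate T_2, T_3 values below are placeholders of our own (the true ones come from ~p^(i) and Table II); only T_1 = 3 and T_4 = 1 are fixed by the example above.

```python
import numpy as np

rng = np.random.default_rng(1)
K, N = 5, 7
T = [3.0, 2.0, 1.5, 1.0]                 # time to send 1 packet, combos 1..4
q = rng.normal(size=K)                   # virtual queue lengths q(t)
X = np.vstack([np.eye(N), np.zeros(N)])  # 7 delta vectors + all-zero vector

best, best_val = None, -np.inf
for i in range(4):
    # Placeholder per-combination service matrices B_in,(i) and B_out,(i).
    B_in = rng.uniform(size=(K, N))
    B_out = rng.uniform(size=(K, N))
    d = (B_in - B_out).T @ q             # back-pressure for combination i
    vals = (X @ d) / T[i]                # scale by 1/T_i, as in (16)
    j = int(np.argmax(vals))
    if vals[j] > best_val:
        best, best_val = (i, X[j]), vals[j]

i_star, x_star = best                    # chosen combination and service vector
print(i_star, x_star)
```

The division by T_i is what lets a slow-but-reliable combination lose to a fast one when the back-pressure gains per unit time favor the latter.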
If the preferred SA n* is infeasible, then we either choose another (rate, modulation) combination plus coding choice arbitrarily or simply let the system idle.

One can see that the new scheduler (16) automatically balances the packet reception status (the q(t) terms), the overhearing success probabilities of the different (rate, modulation) combinations (the B_in,(i)(cq(t)) and B_out,(i)(cq(t)) terms), and the different amounts of time it takes to finish the transmission of a coded/uncoded packet (the T_i terms). In all the numerical experiments of Section VI, the new scheduler (16) robustly achieves the optimal throughput with adaptive coding and modulation.

[Footnote: There are four types of queue lengths in this work: q(t), q^inter(t), Q^inter(t), and Q(t). They range from the most artificially derived, q(t), to the most realistic metric, the actual queue length Q(t).]

Fig. 6. The backlog of four different schemes (Modified 7-OP INC, Optimal 7-OP INC, Existing 5-OP INC [9], and Back-Pressure Routing) for a time-varying channel with cq(t) uniformly distributed on {1, 2}, and the packet delivery probability being ~p = (0, 0.5, 0.5, 0) if cq(t) = 1 and ~p = (0, 0, 0, 1) if cq(t) = 2.

VI. SIMULATION RESULTS
In this section, we simulate the proposed optimal 7-operation INC + scheduling solution and compare the results with the existing INC solutions and the (back-pressure) pure-routing solutions.

In Fig. 6, we simulate the simple time-varying channel situation first described in Section II-B1. Specifically, the channel quality cq(t) is i.i.d. and, for any t, cq(t) is uniformly distributed on {1, 2}. When cq(t) = 1, the success probabilities are ~p(1) = (0, 0.5, 0.5, 0), and when cq(t) = 2, the success probabilities are ~p(2) = (0, 0, 0, 1), respectively. We consider four different schemes: (i) back-pressure (BP) + pure routing; (ii) BP + INC with 5 operations [9], [18]; (iii) the proposed DMW + INC with 7 operations; and (iv) the modified DMW + INC with 7 operations, which uses q^inter_k(t) to compute the back-pressure, see (15), instead of q_k(t) in (4). We choose perfectly fair (R1, R2) = (θ, θ), gradually increase the θ value, and plot the stability region. For each experiment, i.e., each θ, we run the schemes for a large number of timeslots. The horizontal axis is the sum rate R1 + R2 = 2θ and the vertical axis is the aggregate backlog (averaged over 10 trials) at the end of the simulation. By the results in [15], the sum-rate Shannon capacity is 1 packet/slot, the best possible rate for 5-OP INC is 0.875 packet/slot, and the best pure-routing rate is 0.75 packet/slot; these are plotted as vertical lines in Fig. 6. The simulation results confirm our analysis. The proposed 7-operation dynamic INC has a stability region matching the Shannon block-code capacity, providing a 14.3% throughput improvement over the 5-operation INC and 33.3% over the pure-routing solution.

Also, both our originally proposed solution (using q_k(t)) and the modified solution (using q^inter_k(t)) can approach the stability region, while the modified solution has a smaller backlog. This phenomenon is observed throughout all our experiments.
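The channel model behind Fig. 6 is easy to reproduce. The sketch below (our own code, using the delivery probabilities of the Fig. 6 example, ordered as neither / d1 only / d2 only / both) samples one slot's randomness:

```python
import random
random.seed(0)

# ~p = (P(neither), P(d1 only), P(d2 only), P(both)), as in the Fig. 6 example.
P = {1: (0.0, 0.5, 0.5, 0.0),   # cq(t) = 1: exactly one destination hears
     2: (0.0, 0.0, 0.0, 1.0)}   # cq(t) = 2: both destinations hear

EVENTS = ("neither", "d1_only", "d2_only", "both")

def one_slot():
    cq = random.choice((1, 2))                        # uniform channel quality
    event = random.choices(EVENTS, weights=P[cq])[0]  # joint reception event
    return cq, event

counts = {}
for _ in range(10000):
    cq, ev = one_slot()
    counts[(cq, ev)] = counts.get((cq, ev), 0) + 1
print(counts)
```

Under this model, overhearing happens only in the cq(t) = 1 slots, which is exactly where the INC operations gain over routing.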
Fig. 7. The backlog comparison with cq(t) chosen from {1, 2, 3, 4}, where panels (a) and (b) use the same four delivery-probability vectors ~p(1) to ~p(4) but two different channel-frequency vectors (f1, f2, f3, f4).

As a result, in the following experiments we only report the results of the modified solution, which uses q^inter_k(t) to compute the back-pressure.

Next we simulate the scenario of 4 different channel qualities: CQ = {1, 2, 3, 4}. The varying channel qualities could model situations such as different packet transmission and loss rates due to time-varying interference caused by the primary traffic in a cognitive-radio environment. We assume four possible channel qualities, with corresponding delivery-probability vectors ~p(1) to ~p(4) (the entries being the probabilities that neither, only d1, only d2, or both destinations receive) in both Figs. 7(a) and 7(b). The difference is that in Fig. 7(a) the channel quality cq(t) is i.i.d. with one frequency vector (f1, f2, f3, f4), while in Fig. 7(b) cq(t) is again i.i.d. but with a different frequency vector. Again, we assume perfect fairness
Fig. 8. The backlog of four different schemes for rate adaptation with two possible (error-correcting-code rate, modulation) combinations. The back-pressure-based INC scheme in [9] is used in both the aggressive and the conservative 5-OP INC, where the former always chooses the high-throughput (rate, modulation) combination while the latter always chooses the low-throughput one.

(R1, R2) = (θ, θ) and gradually increase the θ value. The sum-rate Shannon block-code capacity and the pure-routing sum-rate capacity take different values under the two frequency vectors of Figs. 7(a) and 7(b). We simulate our modified 7-OP INC, the priority-based solution in [4], and a standard back-pressure routing scheme [11]. Each point of the curves is the average of 10 trials, and each trial lasts for a large number of slots.

Although the priority-based scheduling solution is provably optimal for fixed channel quality, it is less robust and can sometimes be substantially suboptimal (see Fig. 7(b)) due to the ad-hoc nature of the priority-based policy. For example, as depicted in Figs. 7(a) and 7(b), the pure-routing solution outperforms the 5-operation scheme for one set of frequencies (f1, f2, f3, f4) while the order is reversed for the other set. On the other hand, the proposed 7-operation scheme consistently outperforms all the existing solutions and has a stability region matching the Shannon block-code capacity. We have tried many other combinations of time-varying channels. In all our simulations, the proposed DMW scheme always achieves the block-code capacity in [15] and outperforms routing and all existing solutions [4], [9].

Our solution in Section V-A is the first dynamic INC design that is guaranteed to achieve the optimal linear INC capacity with rate adaptation (adaptive coding and modulation) [15]. Fig. 8 compares its performance with the existing routing-based rate-adaptation scheme and the existing INC schemes, the latter of which are designed without rate adaptation. We assume there are two available (error-correcting-code rate, modulation) combinations to be selected. We assume that the first combination takes 1 second to finish transmitting a single packet and the second combination takes 1/3 second. That is, the transmission rate of the second combination is 3 times that of the first. We further assume that under the low-throughput combination the packet is likely to be overheard by both destinations, while the high-throughput combination has a much lower transmission success probability. We can compute the corresponding Shannon block-code capacity region by modifying the equations in [15]. We then use the proportional-fairness objective function ξ(R1, R2) = log(R1) + log(R2) and find the maximizing rate pair (R1*, R2*) over the Shannon capacity region.

After computing the optimal block-code capacity, we assume the following dynamic packet arrivals. We define (R1, R2) = θ · (R1*, R2*) for any given θ ∈ (0, 1]. For any experiment (i.e., for any given θ), the arrivals of session-i packets form a Poisson random process with rate R_i packets per second for i = 1, 2.
That is, if the low-throughput combination 1 is selected to transmit 1 packet, then during the 1 second it takes to finish, the number of arrivals of session-i packets is a Poisson random variable with mean R_i · 1 packets. Similarly, if the high-throughput combination is selected to transmit 1 packet, then during the 1/3 second it takes to finish, the number of arrivals of session-i packets is a Poisson random variable with mean R_i · 1/3 packets. Each point of the curves of Fig. 8 consists of 10 trials, and each trial lasts for a large number of seconds. We compare the performance of our scheme in Section V-A with (i) pure routing with rate adaptation; (ii) aggressive 5-OP INC, i.e., the scheme in [9] always choosing combination 2; and (iii) conservative 5-OP INC, i.e., the scheme in [9] always choosing combination 1. We also plot the optimal routing-based rate-adaptation rate and the optimal Shannon block-code capacity rate as vertical lines.

We can observe that since our proposed scheme jointly decides, in an optimal way, which (rate, modulation) combination to use and which INC operation to encode the packet with, see (16), the stability region of our scheme matches the block-code Shannon capacity with rate adaptation. It provides a substantial throughput improvement over the pure routing-based rate-adaptation solution (represented by the red dashed line in Fig. 8).

Furthermore, we observe that if we perform INC but always choose the low-throughput (rate, modulation) combination, as suggested in some existing works [19], then the largest sum rate R1 + R2 = θ*_cnsv.5-OP · (R1* + R2*) is worse than that of pure routing with rate adaptation, θ*_routing,RA · (R1* + R2*). Even if we always choose the high-throughput (rate, modulation) combination with 5-OP INC, the largest sum rate R1 + R2 = θ*_aggr.5-OP · (R1* + R2*) is worse still, falling below even the conservative 5-OP INC capacity. We have tried many other rate-adaptation scenarios.
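The arrival model above is straightforward to sample. A small sketch (our own code, with a hypothetical session rate R1 = 0.4 packets/second) draws the number of session arrivals during one transmission of duration T seconds:

```python
import math
import random
random.seed(2)

def poisson(lam):
    """Sample a Poisson(lam) variate via Knuth's multiplication method."""
    limit, k, prod = math.exp(-lam), 0, random.random()
    while prod > limit:
        k += 1
        prod *= random.random()
    return k

def arrivals_during_transmission(rate, duration):
    """Arrivals of a Poisson process (`rate` pkts/s) during `duration` seconds."""
    return poisson(rate * duration)

R1 = 0.4                                         # hypothetical session-1 rate
slow = arrivals_during_transmission(R1, 1.0)     # combination 1 takes 1 s
fast = arrivals_during_transmission(R1, 1 / 3)   # combination 2 takes 1/3 s
print(slow, fast)

# Sanity check: the empirical mean over many transmissions approaches R1 * T.
mean = sum(arrivals_during_transmission(R1, 1.0) for _ in range(20000)) / 20000
print(round(mean, 2))
```

The point of the model is that a faster combination both finishes sooner and accumulates proportionally fewer new arrivals per transmission, which is what the T_i scaling in (16) trades off.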
In all our simulations, the proposed DMW scheme always achieves the block-code capacity and outperforms pure routing, conservative 5-OP INC, and aggressive 5-OP INC.

It is worth emphasizing that in our simulation, for any fixed (rate, modulation) combination, the channel quality is also fixed. Since the 5-OP scheme is throughput-optimal for fixed channel quality [10], it is guaranteed to be throughput-optimal when using a fixed (rate, modulation) combination. Our results thus show that using a fixed (rate, modulation) combination is the main cause of the suboptimal performance, and that the proposed scheme in (2), (5), and (16) can dynamically decide which (rate, modulation) combination to use for each transmission and achieve the largest possible stability region.

VII. CONCLUSION
We have proposed a new 7-operation INC scheme together with the corresponding scheduling algorithm to achieve the optimal downlink throughput of the 2-flow access-point network with time-varying channels. Based on binary XOR operations, the proposed solution admits ultra-low encoding/decoding complexity with efficient buffer management and minimal communication and control overhead. The proposed algorithm has also been generalized for rate adaptation, and it again robustly achieves the optimal throughput in all the numerical experiments. A byproduct of this paper is a throughput-optimal scheduling solution for SPNs with random departure, which could further broaden the applications of SPNs to other real-world scenarios.

APPENDIX

A. Proof of Lemma 2
Recall that we assume the SPN under consideration is acyclic; hence we can arrange the queues from upstream to downstream and index them from 1 (the most upstream) to K (the most downstream). Recall that we use n(t) to denote the preferred SA chosen by the back-pressure scheduler. The proof of Lemma 2 builds on the following lemmas. Lemma 3:
For any k = 1, 2, ..., K and any time t,

|Q_k(t) − Q^inter_k(t)| ≤ Σ_{τ=1}^{t−1} I(∃ k′ ∈ I_{n(τ)} : Q_{k′}(τ) = 0). (17)

Lemma 4:
For any k = 1, 2, ..., K and any time τ,

I(k ∈ I_{n(τ)}) I(Q_k(τ) = 0)
≤ I(k ∈ I_{n(τ)}) I(Q^inter_k(τ) < β^in_{k,n(τ)}(cq(τ)))
+ I(k ∈ I_{n(τ)}) I(Q_k(τ) = 0) · I(β^in_{k,n(τ)}(cq(τ)) = 0) I(1 ≤ Q^inter_k(τ))
+ I(k ∈ I_{n(τ)}) I(Q_k(τ) = 0) · I(β^in_{k,n(τ)}(cq(τ)) = 0) I(0 ≤ Q^inter_k(τ) < 1)
+ I(k ∈ I_{n(τ)}) I(Q_k(τ) = 0) · I(β^in_{k,n(τ)}(cq(τ)) = 1) I(1 ≤ Q^inter_k(τ)). (18)

By combining Lemma 3 and Lemma 4 and by the union bound, we can upper bound |Q_k(t) − Q^inter_k(t)| as follows:

|Q_k(t) − Q^inter_k(t)|
≤ Σ_{k′=1}^{K} Σ_{τ=1}^{t−1} I(k′ ∈ I_{n(τ)}) I(Q^inter_{k′}(τ) < β^in_{k′,n(τ)}(cq(τ)))
+ Σ_{k′=1}^{K} Σ_{τ=1}^{t−1} I(k′ ∈ I_{n(τ)}) I(Q_{k′}(τ) = 0) · I(β^in_{k′,n(τ)}(cq(τ)) = 0) I(1 ≤ Q^inter_{k′}(τ))
+ Σ_{k′=1}^{K} Σ_{τ=1}^{t−1} I(k′ ∈ I_{n(τ)}) I(Q_{k′}(τ) = 0) · I(β^in_{k′,n(τ)}(cq(τ)) = 0) I(0 ≤ Q^inter_{k′}(τ) < 1)
+ Σ_{k′=1}^{K} Σ_{τ=1}^{t−1} I(k′ ∈ I_{n(τ)}) I(Q_{k′}(τ) = 0) · I(β^in_{k′,n(τ)}(cq(τ)) = 1) I(1 ≤ Q^inter_{k′}(τ)). (19)

Recall that the goal of Lemma 2 is to upper bound the expectation of |Q_k(t) − Q^inter_k(t)| by a weighted sum of E{N_NA,k′(t)}. We observe that the expectation of the first term of the RHS of (19) is indeed a sum of E{N_NA,k′(t)}. Therefore, to complete the proof of Lemma 2, we only need to upper bound the expectations of the second to the fourth terms of the RHS of (19) by weighted sums of E{N_NA,k′(t)}. The following Lemmas 5 to 7 upper bound the expectations of the second to the fourth terms, respectively. Lemma 5:
For any k = 1, ..., K, there exists a constant γ_k such that

E{ Σ_{τ=1}^{t} I(k ∈ I_{n(τ)}) I(Q_k(τ) = 0) · I(β^in_{k,n(τ)}(cq(τ)) = 0) I(1 ≤ Q^inter_k(τ)) }
≤ γ_k E{ Σ_{τ=1}^{t} I(k ∈ I_{n(τ)}) I(Q_k(τ) = 0) · I(β^in_{k,n(τ)}(cq(τ)) = 1) I(1 ≤ Q^inter_k(τ)) } (20)

for all t = 1 to ∞. Namely, the expectation of the second term of the RHS of (19) is upper bounded by γ_k times the expectation of the fourth term of the RHS of (19). Lemma 6:
For any k = 1, ..., K, there exists a constant γ_k such that

E{ Σ_{τ=1}^{t} I(k ∈ I_{n(τ)}) I(Q_k(τ) = 0) · I(β^in_{k,n(τ)}(cq(τ)) = 0) I(0 ≤ Q^inter_k(τ) < 1) }
≤ γ_k E{ Σ_{τ=1}^{t} I(k ∈ I_{n(τ)}) I(Q^inter_k(τ) < β^in_{k,n(τ)}(cq(τ))) }

for all t = 1 to ∞.

Lemma 7 upper bounds the expectation of the fourth term of (19), which is also used to upper bound the second term of (19) through Lemma 5. Lemma 7:
For any k, there exist γ_1 to γ_{k−1} such that

E{ Σ_{τ=1}^{t} I(k ∈ I_{n(τ)}) I(Q^inter_k(τ) ≥ 1) · I(β^in_{k,n(τ)}(cq(τ)) = 1) I(Q_k(τ) = 0) }
≤ Σ_{k′=1}^{k−1} γ_{k′} E{ Σ_{τ=1}^{t} I(k′ ∈ I_{n(τ)}) I(Q^inter_{k′}(τ) < β^in_{k′,n(τ)}(cq(τ))) } (21)

for all t = 1 to ∞.

Finally, by applying Lemmas 5 and 7 to the second term of the RHS of (19), applying Lemma 6 to the third term of the RHS of (19), and applying Lemma 7 to the fourth term of the RHS of (19), we have proven the following statement: for any k, there exist γ_1 to γ_K such that

E(|Q_k(t) − Q^inter_k(t)|) ≤ Σ_{k′=1}^{K} γ_{k′} E{ Σ_{τ=1}^{t−1} I(k′ ∈ I_{n(τ)}) I(Q^inter_{k′}(τ) < β^in_{k′,n(τ)}(cq(τ))) }

for all t = 1 to ∞. The proofs of Lemma 3 to Lemma 7 are relegated to Appendix B. Lemma 2 is thus proven. □

B. Proofs of Lemma 3 to Lemma 7

Proof of Lemma 3:
Before proving Lemma 3, we first rewrite the LHS of (17) as

|Q_k(t) − Q^inter_k(t)| = Σ_{τ=1}^{t−1} ( |Q_k(τ+1) − Q^inter_k(τ+1)| − |Q_k(τ) − Q^inter_k(τ)| ),

which holds since Q_k(1) = Q^inter_k(1) = 0. So to prove (17), it suffices to show that the following inequality holds for all k and τ < t:

|Q_k(τ+1) − Q^inter_k(τ+1)| − |Q_k(τ) − Q^inter_k(τ)| ≤ I(∃ k′ ∈ I_{n(τ)} : Q_{k′}(τ) = 0). (22)

We now prove (22). By set relationship, one can easily verify that one and only one of the following 3 cases is true at each time τ:

1) k ∉ I_{n(τ)} ∪ O_{n(τ)};
2) k ∈ I_{n(τ)} ∪ O_{n(τ)} and SA n(τ) is feasible;
3) k ∈ I_{n(τ)} ∪ O_{n(τ)} and SA n(τ) is not feasible.

In case 1), the LHS of (22) at time τ is zero, since Q_k(τ+1) − Q_k(τ) = Q^inter_k(τ+1) − Q^inter_k(τ) = Σ_{m=1}^{M} α_{k,m} a_m(τ). Inequality (22) thus holds trivially.

In case 2), the scheduled SA n(τ) is feasible. Suppose k ∈ O_{n(τ)}. Then the LHS of (22) at time τ is always 0, since Q_k(τ+1) − Q_k(τ) = Q^inter_k(τ+1) − Q^inter_k(τ) = β^out_{k,n(τ)}(cq(τ)) + Σ_{m=1}^{M} α_{k,m} a_m(τ). Suppose k ∈ I_{n(τ)}. Then Q_k(τ+1) = Q_k(τ) − β^in_{k,n(τ)}(cq(τ)) + Σ_{m=1}^{M} α_{k,m} a_m(τ). There are now two sub-cases: Q^inter_k(τ) ≥ Q_k(τ) or Q^inter_k(τ) < Q_k(τ). In the first sub-case, since SA n(τ) is feasible we have Q^inter_k(τ) ≥ Q_k(τ) ≥ β^in_{k,n(τ)}(cq(τ)), so Q^inter_k(τ+1) − Q^inter_k(τ) = Q_k(τ+1) − Q_k(τ) = −β^in_{k,n(τ)}(cq(τ)) + Σ_{m=1}^{M} α_{k,m} a_m(τ). As a result, the LHS of (22) at time τ is again 0. In the second sub-case, Q^inter_k(τ+1) = (Q^inter_k(τ) − β^in_{k,n(τ)}(cq(τ)))^+ + Σ_{m=1}^{M} α_{k,m} a_m(τ), and Q_k(τ+1) = Q_k(τ) − β^in_{k,n(τ)}(cq(τ)) + Σ_{m=1}^{M} α_{k,m} a_m(τ), since in this case SA n(τ) is feasible and thus Q_k(τ) ≥ β^in_{k,n(τ)}(cq(τ)). Recall that Q^inter_k(τ) < Q_k(τ) in this sub-case. Therefore (Q^inter_k(τ) − β^in_{k,n(τ)}(cq(τ)))^+ ≤ (Q_k(τ) − β^in_{k,n(τ)}(cq(τ)))^+ = Q_k(τ) − β^in_{k,n(τ)}(cq(τ)). We thus have Q^inter_k(τ+1) ≤ Q_k(τ+1). As a result, the LHS of (22) at time τ becomes

|Q_k(τ+1) − Q^inter_k(τ+1)| − |Q_k(τ) − Q^inter_k(τ)|
= (Q_k(τ+1) − Q_k(τ)) + (Q^inter_k(τ) − Q^inter_k(τ+1))
= −β^in_{k,n(τ)}(cq(τ)) + ( Q^inter_k(τ) − max{Q^inter_k(τ) − β^in_{k,n(τ)}(cq(τ)), 0} )
= −β^in_{k,n(τ)}(cq(τ)) + min{β^in_{k,n(τ)}(cq(τ)), Q^inter_k(τ)}
≤ 0.

Since the RHS of (22) is always non-negative, (22) holds in case 2).

In case 3), the preferred SA n(τ) is not feasible. Without loss of generality, we assume there is no external arrival at queue k in time τ, since any external arrival changes Q_k(τ) and Q^inter_k(τ) by the same amount. Since SA n(τ) is not feasible, we have Q_k(τ+1) = Q_k(τ). On the other hand, Q^inter_k(τ) may still increase or decrease by at most 1, since the update rule (13) of Q^inter_k(τ) does not depend on whether SA n(τ) is feasible or not. Since Q^inter_k(τ) changes by at most 1, the LHS of (22) at time τ is upper bounded by 1 in this case, while the RHS of (22) is always 1 since SA n(τ) is not feasible. Thus (22) holds in case 3).

In summary, for all k and τ < t, (22) holds in all 3 cases. Lemma 3 is proven. □

Proof of Lemma 4:
Suppose k ∈ I_{n(τ)} and Q_k(τ) = 0. We claim that one and only one of the following 4 cases is true:

1) Q^inter_k(τ) < β^in_{k,n(τ)}(cq(τ));
2) β^in_{k,n(τ)}(cq(τ)) = 0 and 1 ≤ Q^inter_k(τ);
3) β^in_{k,n(τ)}(cq(τ)) = 0 and 0 ≤ Q^inter_k(τ) < 1;
4) β^in_{k,n(τ)}(cq(τ)) = 1 and 1 ≤ Q^inter_k(τ).

The reason is as follows. For any fixed k, we either have Q^inter_k(τ) < β^in_{k,n(τ)}(cq(τ)), or Q^inter_k(τ) ≥ β^in_{k,n(τ)}(cq(τ)). In the former scenario we have case 1). In the latter scenario, we can further partition the event based on the values of β^in_{k,n(τ)}(cq(τ)) and Q^inter_k(τ), which gives cases 2) to 4). The four cases correspond to the four terms in the RHS of (18). The proof of Lemma 4 is complete. □

Proof of Lemma 5:
Obviously we have

E{ Σ_{τ=1}^{t} I(k ∈ I_{n(τ)}) I(Q_k(τ) = 0) · I(β^in_{k,n(τ)}(cq(τ)) = 0) I(1 ≤ Q^inter_k(τ)) }
≤ E{ Σ_{τ=1}^{t} I(k ∈ I_{n(τ)}) I(Q_k(τ) = 0) · I(1 ≤ Q^inter_k(τ)) }.

We then observe that β^in_{k,n(τ)}(cq(τ)), the channel realization from queue k to SA n(τ) during time τ, is independent of n(τ), Q_k(τ), and Q^inter_k(τ), which depend only on the history from time 1 to (τ − 1), not on the realization of β^in_{k,n(τ)}(cq(τ)) in time τ.

Furthermore, recall that β^in_{k,n(τ)}(cq(τ)) is a Bernoulli random variable whose mean, conditioned on cq(τ) = c, is the corresponding entry of B^in(c). Define

γ_k = 1 / min_{c ∈ CQ, n ∈ [1, N], k ∈ I_n} B^in_{k,n}(c),

which always exists since we assume min_{c ∈ CQ, n ∈ [1, N], k ∈ I_n} B^in_{k,n}(c) > 0 in the SPN of interest (Assumption 3). Since whether β^in_{k,n(τ)}(cq(τ)) = 1 is independent of n(τ), Q_k(τ), and Q^inter_k(τ), we have

E{ Σ_{τ=1}^{t} I(k ∈ I_{n(τ)}) I(Q_k(τ) = 0) · I(1 ≤ Q^inter_k(τ)) }
≤ γ_k E{ Σ_{τ=1}^{t} I(k ∈ I_{n(τ)}) I(Q_k(τ) = 0) · I(β^in_{k,n(τ)}(cq(τ)) = 1) I(1 ≤ Q^inter_k(τ)) },

which completes the proof of Lemma 5. □

Proof of Lemma 6:
We have

E{ Σ_{τ=1}^{t} I(k ∈ I_{n(τ)}) I(Q_k(τ) = 0) · I(β^in_{k,n(τ)}(cq(τ)) = 0) I(0 ≤ Q^inter_k(τ) < 1) }
≤ γ_k E{ Σ_{τ=1}^{t} I(k ∈ I_{n(τ)}) I(Q_k(τ) = 0) · I(β^in_{k,n(τ)}(cq(τ)) = 1) I(0 ≤ Q^inter_k(τ) < 1) } (23)
≤ γ_k E{ Σ_{τ=1}^{t} I(k ∈ I_{n(τ)}) I(Q^inter_k(τ) < β^in_{k,n(τ)}(cq(τ))) }, (24)

where (23) follows from the same argument as used in the proof of Lemma 5, and (24) follows from the fact that if β^in_{k,n(τ)}(cq(τ)) = 1 and 0 ≤ Q^inter_k(τ) < 1, then Q^inter_k(τ) < β^in_{k,n(τ)}(cq(τ)). Thus Lemma 6 is proven. □

Proof of Lemma 7:
Define ∆Q_k(τ) ≜ Q^inter_k(τ) − Q_k(τ). We first state the following four claims and then use them to prove Lemma 7. The proofs of these four claims are relegated to Appendix C. Claim 1:
For the most upstream queue (k = 1), we have Q_1(τ) ≥ Q^inter_1(τ) for all τ. Claim 2:
For any k = 1, 2, ..., K and any time t, we have

Σ_{τ=1}^{t} I(k ∈ I_{n(τ)}) I(Q^inter_k(τ) ≥ 1) · I(β^in_{k,n(τ)}(cq(τ)) = 1) I(Q_k(τ) = 0)
≤ Σ_{τ=1}^{t} I(k ∈ O_{n(τ)}) I(∆Q_k(τ+1) > ∆Q_k(τ)). (25)

Claim 3:
For any τ = 1 to ∞, we have

I(∆Q_k(τ+1) > ∆Q_k(τ)) I(k ∈ O_{n(τ)})
≤ Σ_{k′=1}^{k−1} I(k′ ∈ I_{n(τ)}) I(Q^inter_{k′}(τ) < β^in_{k′,n(τ)}(cq(τ)))
+ Σ_{k′=1}^{k−1} I(k′ ∈ I_{n(τ)}) I(Q_{k′}(τ) = 0) · I(β^in_{k′,n(τ)}(cq(τ)) = 0) I(1 ≤ Q^inter_{k′}(τ))
+ Σ_{k′=1}^{k−1} I(k′ ∈ I_{n(τ)}) I(Q_{k′}(τ) = 0) · I(β^in_{k′,n(τ)}(cq(τ)) = 0) I(0 ≤ Q^inter_{k′}(τ) < 1)
+ Σ_{k′=1}^{k−1} I(k′ ∈ I_{n(τ)}) I(Q_{k′}(τ) = 0) · I(β^in_{k′,n(τ)}(cq(τ)) = 1) I(1 ≤ Q^inter_{k′}(τ)). (26)

Claim 4:
For any k = 1, 2, ..., K and any time t, we have

Σ_{τ=1}^{t} I(k ∈ I_{n(τ)}) I(Q^inter_k(τ) ≥ 1) · I(β^in_{k,n(τ)}(cq(τ)) = 1) I(Q_k(τ) = 0)
≤ Σ_{k′=1}^{k−1} Σ_{τ=1}^{t} I(k′ ∈ I_{n(τ)}) I(Q^inter_{k′}(τ) < β^in_{k′,n(τ)}(cq(τ)))
+ Σ_{k′=1}^{k−1} Σ_{τ=1}^{t} I(k′ ∈ I_{n(τ)}) I(Q^inter_{k′}(τ) ≥ 1) · I(β^in_{k′,n(τ)}(cq(τ)) = 1) I(Q_{k′}(τ) = 0)
+ Σ_{k′=1}^{k−1} Σ_{τ=1}^{t} I(k′ ∈ I_{n(τ)}) I(Q^inter_{k′}(τ) ≥ 1) · I(β^in_{k′,n(τ)}(cq(τ)) = 0) I(Q_{k′}(τ) = 0)
+ Σ_{k′=1}^{k−1} Σ_{τ=1}^{t} I(k′ ∈ I_{n(τ)}) I(1 > Q^inter_{k′}(τ) ≥ 0) · I(β^in_{k′,n(τ)}(cq(τ)) = 0) I(Q_{k′}(τ) = 0). (27)

With the above four claims, we are now ready to prove Lemma 7, which we do by induction on k. Consider the case k = 1 first. By Claim 1, we have Q_1(τ) ≥ Q^inter_1(τ) for all τ. Therefore, whenever Q_1(τ) = 0, we must have Q^inter_1(τ) = 0. As a result, the LHS of (21) is always 0 for k = 1. Lemma 7 thus holds for k = 1.

Now consider a general k. By Claim 4, taking expectations on both sides, we have

E{ Σ_{τ=1}^{t} I(k ∈ I_{n(τ)}) I(Q^inter_k(τ) ≥ 1) · I(β^in_{k,n(τ)}(cq(τ)) = 1) I(Q_k(τ) = 0) }
≤ Σ_{k′=1}^{k−1} E{ Σ_{τ=1}^{t} I(k′ ∈ I_{n(τ)}) I(Q^inter_{k′}(τ) < β^in_{k′,n(τ)}(cq(τ))) }
+ Σ_{k′=1}^{k−1} E{ Σ_{τ=1}^{t} I(k′ ∈ I_{n(τ)}) I(Q^inter_{k′}(τ) ≥ 1) · I(β^in_{k′,n(τ)}(cq(τ)) = 1) I(Q_{k′}(τ) = 0) }
+ Σ_{k′=1}^{k−1} E{ Σ_{τ=1}^{t} I(k′ ∈ I_{n(τ)}) I(Q^inter_{k′}(τ) ≥ 1) · I(β^in_{k′,n(τ)}(cq(τ)) = 0) I(Q_{k′}(τ) = 0) }
+ Σ_{k′=1}^{k−1} E{ Σ_{τ=1}^{t} I(k′ ∈ I_{n(τ)}) I(1 > Q^inter_{k′}(τ) ≥ 0) · I(β^in_{k′,n(τ)}(cq(τ)) = 0) I(Q_{k′}(τ) = 0) }. (28)

We look at the second term of the RHS of (28) first.
Notice that by the induction hypothesis, for k′ = 1, ..., k − 1, there exist γ_{k′,1} to γ_{k′,k′−1} such that

E{ Σ_{τ=1}^{t} I(k′ ∈ I_{n(τ)}) I(Q^inter_{k′}(τ) ≥ 1) · I(β^in_{k′,n(τ)}(cq(τ)) = 1) I(Q_{k′}(τ) = 0) }
≤ Σ_{k″=1}^{k′−1} γ_{k′,k″} · E{ Σ_{τ=1}^{t} I(k″ ∈ I_{n(τ)}) I(Q^inter_{k″}(τ) < β^in_{k″,n(τ)}(cq(τ))) }.

The above inequality shows that the second term of the RHS of (28) can be bounded by a weighted sum of E{ Σ_{τ=1}^{t} I(k′ ∈ I_{n(τ)}) I(Q^inter_{k′}(τ) < β^in_{k′,n(τ)}(cq(τ))) } for k′ = 1, ..., k − 1.

We now look at the third term of the RHS of (28). By Lemma 5, for k′ = 1, ..., k − 1, there exists a constant γ′_{k′} such that

E{ Σ_{τ=1}^{t} I(k′ ∈ I_{n(τ)}) I(Q^inter_{k′}(τ) ≥ 1) · I(β^in_{k′,n(τ)}(cq(τ)) = 0) I(Q_{k′}(τ) = 0) }
≤ γ′_{k′} E{ Σ_{τ=1}^{t} I(k′ ∈ I_{n(τ)}) I(Q^inter_{k′}(τ) ≥ 1) · I(β^in_{k′,n(τ)}(cq(τ)) = 1) I(Q_{k′}(τ) = 0) }.

By the same argument as for the second term of the RHS of (28), the third term of the RHS of (28) can thus also be bounded by a weighted sum of E{ Σ_{τ=1}^{t} I(k′ ∈ I_{n(τ)}) I(Q^inter_{k′}(τ) < β^in_{k′,n(τ)}(cq(τ))) } for k′ = 1, ..., k − 1.

By Lemma 6, the fourth term of the RHS of (28) can also be bounded by a weighted sum of E{ Σ_{τ=1}^{t} I(k′ ∈ I_{n(τ)}) I(Q^inter_{k′}(τ) < β^in_{k′,n(τ)}(cq(τ))) } for k′ = 1, ..., k − 1.

Since all 4 terms in the RHS of (28) can be upper bounded by weighted sums of E{ Σ_{τ=1}^{t} I(k′ ∈ I_{n(τ)}) I(Q^inter_{k′}(τ) < β^in_{k′,n(τ)}(cq(τ))) } for k′ = 1, ..., k − 1, we have proven Lemma 7. □

C. Proofs of the Claims for Lemma 7

Proof of Claim 1:
By the definitions of $Q_k(t)$ and $Q^{\mathrm{inter}}_k(t)$, we have $Q_k(1) = 0 = Q^{\mathrm{inter}}_k(1)$. The desired inequality thus holds when $\tau = 1$. Suppose the inequality holds for some $\tau$. We now prove that it also holds for $\tau + 1$. To that end, we first notice that any external arrival at time $\tau$ increases $Q^{\mathrm{inter}}_k(\tau)$ and $Q_k(\tau)$ by the same amount. Therefore, the external arrivals do not affect the order between $Q^{\mathrm{inter}}_k(\tau)$ and $Q_k(\tau)$, and we can thus assume there is no external arrival at time $\tau$ without loss of generality.

Consider the first scenario, in which $k \notin I_{n(\tau)}$. Since $k \notin I_{n(\tau)}$, no packets will leave queue $k$. Since we have $Q^{\mathrm{inter}}_k(\tau) \le Q_k(\tau)$ to begin with, we will still have $Q^{\mathrm{inter}}_k(\tau+1) \le Q_k(\tau+1)$.

Now consider the scenario of $k \in I_{n(\tau)}$ and the following two cases. Case 1: $Q^{\mathrm{inter}}_k(\tau) < \beta^{\mathrm{in}}_{k,n(\tau)}(cq(\tau))$. In this case, at the beginning of time $\tau+1$, $Q^{\mathrm{inter}}_k(\tau+1) = 0$ due to the update rule (13). Since the actual queue length $Q_k(\tau+1)$ is non-negative, we must have $Q^{\mathrm{inter}}_k(\tau+1) \le Q_k(\tau+1)$. Case 2: $Q^{\mathrm{inter}}_k(\tau) \ge \beta^{\mathrm{in}}_{k,n(\tau)}(cq(\tau))$. In this case, we have $Q^{\mathrm{inter}}_k(\tau+1) = Q^{\mathrm{inter}}_k(\tau) - \beta^{\mathrm{in}}_{k,n(\tau)}(cq(\tau))$ (recall that we assume no external arrival). We observe that the actual queue length $Q_k$ either decreases by $\beta^{\mathrm{in}}_{k,n(\tau)}(cq(\tau))$ or remains the same, depending on whether $\mathsf{SA}_{n(\tau)}$ can be carried out successfully or not (see Difference 1 in Section IV-A). Therefore, the decrease of $Q^{\mathrm{inter}}_k(\tau)$ is no less than the decrease of $Q_k(\tau)$, which together with the fact that $Q^{\mathrm{inter}}_k(\tau) \le Q_k(\tau)$ implies $Q^{\mathrm{inter}}_k(\tau+1) \le Q_k(\tau+1)$.

By induction, we have proven that $Q_k(\tau) \ge Q^{\mathrm{inter}}_k(\tau)$ for all $\tau$. ∎

Proof of Claim 2:
Since both $Q_k(\tau)$ and $Q^{\mathrm{inter}}_k(\tau)$ are integer-valued random processes, $\Delta Q_k(\tau)$ is also an integer-valued random process. Furthermore, we observe that the changes of $Q_k(\tau)$ and $Q^{\mathrm{inter}}_k(\tau)$ are always in the same direction. Namely, if $Q^{\mathrm{inter}}_k(\tau)$ increases, then $k$ is one of the output queues of $\mathsf{SA}_{n(\tau)}$, which means that $Q_k(\tau)$ can either increase or remain the same (the latter is due to the fact that the preferred $\mathsf{SA}_{n(\tau)}$ may be infeasible). Similarly, if $Q^{\mathrm{inter}}_k(\tau)$ decreases, then $Q_k(\tau)$ can decrease or remain the same. Since the largest change of $Q_k$ (resp. $Q^{\mathrm{inter}}_k$) is at most 1 and they move in the same direction, it can be easily shown that the change of $\Delta Q_k(\tau)$ is also at most 1. To simplify the expression, we sometimes ignore the queue index $k$ in $\Delta Q_k(\tau)$; that is, we will write $\Delta Q_k(\tau)$ as $\Delta Q(\tau)$ in the remainder of this proof.

In the following, we iteratively define two sequences of time instants, $\{s_i : \forall i\}$ and $\{t_i : \forall i\}$. The first time instant is $s_1 = 1$. Then for any $i$, define $t_i \in (s_i, t+1]$ as the largest time instant such that for all time instants $\tilde\tau \in (s_i, t_i)$ we have $\Delta Q(\tilde\tau) > 0$. Note that for all $\tilde\tau \in (s_i, s_i+1)$ we have $\Delta Q(\tilde\tau) > 0$ vacuously, since $(s_i, s_i+1)$ is an empty interval. As a result, $t_i$ always exists and is uniquely defined as long as we have $s_i \le t$ to begin with. (For ease of exposition, we do not count the external arrivals, since any external arrival increases $Q_k$ and $Q^{\mathrm{inter}}_k$ by the same amount.) Furthermore, since $\Delta Q(\tau)$ is an integer-valued random process that changes by at most 1 in each time slot, we can observe that $\Delta Q(t_i) = 0$ if $t_i \le t$, and $\Delta Q(t_i) > 0$ if $t_i = t+1$. In summary, $\Delta Q(t_i) \ge 0$.

After defining $t_i$, we define $s_{i+1} \in [t_i, t]$ as the time instant such that for all time instants $\tilde\tau \in (t_i, s_{i+1}]$ we have $\Delta Q(\tilde\tau) \le 0$, and $\Delta Q(s_{i+1}+1) > 0$. This time, such $s_{i+1}$ may or may not exist.
For example, if $\Delta Q(\tilde\tau) \le 0$ for all $\tilde\tau \in [t_i, t+1]$, then $s_{i+1}$ does not exist, since even the largest possible choice $s_{i+1} = t$ still does not satisfy the requirement $\Delta Q(s_{i+1}+1) > 0$. However, one may observe that we must have $\Delta Q(s_{i+1}) = 0$ whenever $s_{i+1}$ exists. The reason is that $\Delta Q(\tau)$ changes by at most one over any two consecutive time slots. Therefore, the facts that $\Delta Q(s_{i+1}) \le 0$ and $\Delta Q(s_{i+1}+1) > 0$ jointly imply $\Delta Q(s_{i+1}) = 0$. In summary, $[s_i, t_i)$ is the $i$-th "continuous interval" such that all $\tau \in (s_i, t_i)$ satisfy $\Delta Q(\tau) > 0$.

Define $M_s$ as the number of $(s_i, t_i)$ pairs that do exist. Since $s_1 = 1$ is clearly defined, we have $M_s \ge 1$. We will now argue that for any $i = 1$ to $M_s$, we have
$$\sum_{\tau=s_i}^{t_i-1} \mathbb{1}\{\Delta Q(\tau+1) < \Delta Q(\tau)\} \le \sum_{\tau=s_i}^{t_i-1} \mathbb{1}\{\Delta Q(\tau+1) > \Delta Q(\tau)\}. \quad (29)$$
To see the correctness of (29), we first observe that
$$\Delta Q(t_i) = \Delta Q(s_i) + \sum_{\tau=s_i}^{t_i-1} \big(\Delta Q(\tau+1) - \Delta Q(\tau)\big).$$
Since $\Delta Q(s_i) = 0$ and $\Delta Q(t_i) \ge 0$, we have
$$\sum_{\tau=s_i}^{t_i-1} \big(\Delta Q(\tau+1) - \Delta Q(\tau)\big)^+ \ge \sum_{\tau=s_i}^{t_i-1} \big(\Delta Q(\tau+1) - \Delta Q(\tau)\big)^-,$$
where $(v)^+ = \max\{0, v\}$ and $(v)^- = \max\{0, -v\}$. Since $\Delta Q(\tau)$ moves by at most 1, we thus have (29).

Now we turn our focus back to proving Claim 2. We notice that when
$$\mathbb{1}\{k \in I_{n(\tau)}\}\, \mathbb{1}\{Q^{\mathrm{inter}}_k(\tau) \ge 1\} \cdot \mathbb{1}\{\beta^{\mathrm{in}}_{k,n(\tau)}(cq(\tau)) = 1\}\, \mathbb{1}\{Q_k(\tau) = 0\} = 1, \quad (30)$$
we have $\Delta Q(\tau) = Q^{\mathrm{inter}}_k(\tau) - Q_k(\tau) \ge 1$. Moreover, we argue that $\Delta Q(\tau+1) = \Delta Q(\tau) - 1$. The reason is as follows. Since queue $k$ is one of the input queues of the preferred SA at time $\tau$ and the queue lengths satisfy $Q^{\mathrm{inter}}_k(\tau) \ge 1$ and $Q_k(\tau) = 0$, $Q^{\mathrm{inter}}_k$ will decrease by one according to the update rule (13), while $Q_k$ remains zero since the preferred $\mathsf{SA}_{n(\tau)}$ is infeasible. As a result, $\Delta Q(\tau+1) = \Delta Q(\tau) - 1$.
We thus have the following:
$$\sum_{\tau=1}^{t} \mathbb{1}\{k \in I_{n(\tau)}\}\, \mathbb{1}\{Q^{\mathrm{inter}}_k(\tau) \ge 1\} \cdot \mathbb{1}\{\beta^{\mathrm{in}}_{k,n(\tau)}(cq(\tau)) = 1\}\, \mathbb{1}\{Q_k(\tau) = 0\} \le \sum_{i=1}^{M_s} \sum_{\tau=s_i}^{t_i-1} \mathbb{1}\{\Delta Q(\tau+1) < \Delta Q(\tau)\}. \quad (31)$$
The reason is that any $\tau$ satisfying (30) has $\Delta Q(\tau) \ge 1$, and by our construction of the $s_i$ and $t_i$, such a $\tau$ must fall into one of the intervals $[s_i, t_i)$, $i = 1$ to $M_s$. Also, any $\tau$ satisfying (30) has $\Delta Q(\tau+1) < \Delta Q(\tau)$. As a result, we have (31). We can continue upper bounding (31) by
$$\begin{aligned}
(31) &\le \sum_{i=1}^{M_s} \sum_{\tau=s_i}^{t_i-1} \mathbb{1}\{\Delta Q(\tau+1) > \Delta Q(\tau)\} \quad (32) \\
&= \sum_{i=1}^{M_s} \sum_{\tau=s_i}^{t_i-1} \mathbb{1}\{\Delta Q(\tau+1) > \Delta Q(\tau)\}\, \mathbb{1}\{k \in O_{n(\tau)}\} \quad (33) \\
&\le \sum_{\tau=1}^{t} \mathbb{1}\{\Delta Q(\tau+1) > \Delta Q(\tau)\}\, \mathbb{1}\{k \in O_{n(\tau)}\}, \quad (34)
\end{aligned}$$
where (32) follows from (29), and (34) follows from additionally including those $\tau$ not in any of the intervals $[s_i, t_i)$. Except for proving (33), the proof of Claim 2 is complete.

In the remaining part of this proof, we will rigorously prove (33). To that end, we first notice that
$$\mathbb{1}\{\Delta Q(\tau+1) - \Delta Q(\tau) > 0\} = \mathbb{1}\{\Delta Q(\tau+1) - \Delta Q(\tau) > 0\} \cdot \mathbb{1}\{k \in O_{n(\tau)}\} + \mathbb{1}\{\Delta Q(\tau+1) - \Delta Q(\tau) > 0\} \cdot \mathbb{1}\{k \notin O_{n(\tau)}\}. \quad (35)$$
In the next paragraph, we will prove that when $k$
$\notin O_{n(\tau)}$, we always have either "$\Delta Q(\tau+1) - \Delta Q(\tau) \le 0$" or "$\Delta Q(\tau) < 0$." It means that the term $\mathbb{1}\{\Delta Q(\tau+1) - \Delta Q(\tau) > 0\} \cdot \mathbb{1}\{k \notin O_{n(\tau)}\}$ is either $0$, or the $\tau$ value is not counted in any of the $[s_i, t_i)$ intervals, since by our construction we always have $\Delta Q(s_i) = 0$ and any $\tilde\tau \in (s_i, t_i)$ satisfies $\Delta Q(\tilde\tau) > 0$. As a result, (33) is true.

Consider the situation when $k$
$\notin O_{n(\tau)}$, and consider two sub-cases. In the first sub-case, $\mathsf{SA}_{n(\tau)}$ turns out to be infeasible. Then $Q_k(\tau+1) = Q_k(\tau) + \sum_{m=1}^{M} \alpha_{k,m} a_m(\tau)$. Also, we always have $Q^{\mathrm{inter}}_k(\tau+1) \le Q^{\mathrm{inter}}_k(\tau) + \sum_{m=1}^{M} \alpha_{k,m} a_m(\tau)$, since $k \notin O_{n(\tau)}$ implies that the intermediate queue length $Q^{\mathrm{inter}}_k$ can only decrease or remain the same (except for the external arrival $\sum_{m=1}^{M} \alpha_{k,m} a_m(\tau)$). As a result, in this sub-case we have $\Delta Q(\tau+1) - \Delta Q(\tau) \le 0$.

In the second sub-case, $\mathsf{SA}_{n(\tau)}$ is feasible, and we have $Q_k(\tau+1) = Q_k(\tau) + \sum_{m=1}^{M} \alpha_{k,m} a_m(\tau) - \mathbb{1}\{k \in I_{n(\tau)}\}\, \beta^{\mathrm{in}}_{k,n(\tau)}(cq(\tau))$. Namely, when not counting the external arrival, $Q_k$ can now possibly decrease if $k \in I_{n(\tau)}$, or it will remain the same if $k$
$\notin I_{n(\tau)}$. Our goal is to show that either "$\Delta Q(\tau+1) - \Delta Q(\tau) \le 0$" or "$\Delta Q(\tau) < 0$." To that end, we prove the equivalent statement that $\Delta Q(\tau) \ge 0$ implies $\Delta Q(\tau+1) = \Delta Q(\tau)$. Since $\mathsf{SA}_{n(\tau)}$ is feasible, we have $Q_k(\tau) \ge 1$. Since $\Delta Q(\tau) = Q^{\mathrm{inter}}_k(\tau) - Q_k(\tau) \ge 0$, we have $Q^{\mathrm{inter}}_k(\tau) \ge Q_k(\tau) \ge 1$. Therefore, if $k \in I_{n(\tau)}$, then both $Q^{\mathrm{inter}}_k(\tau)$ and $Q_k(\tau)$ will decrease by the same amount $\beta^{\mathrm{in}}_{k,n(\tau)}(cq(\tau))$; and if $k \notin I_{n(\tau)}$, both $Q^{\mathrm{inter}}_k(\tau)$ and $Q_k(\tau)$ will remain the same except for the external arrival. We thus have $Q^{\mathrm{inter}}_k(\tau+1) = Q^{\mathrm{inter}}_k(\tau) + \sum_{m=1}^{M} \alpha_{k,m} a_m(\tau) - \mathbb{1}\{k \in I_{n(\tau)}\}\, \beta^{\mathrm{in}}_{k,n(\tau)}(cq(\tau))$. Namely, $Q^{\mathrm{inter}}_k(\tau)$ experiences the same change as $Q_k(\tau)$. As a result, $\Delta Q(\tau+1) = \Delta Q(\tau)$. The proof of (33) is complete, and the proof of Claim 2 is thus also complete. ∎

Proof of Claim 3: If $k$
$\notin O_{n(\tau)}$, the LHS of (26) is zero and the inequality always holds. If $k \in O_{n(\tau)}$, we claim that at least one of the following 5 possible cases is true:
1) For all queues $k' \in I_{n(\tau)}$: $Q_{k'}(\tau) \ge 1$ and $Q^{\mathrm{inter}}_{k'}(\tau) \ge \beta^{\mathrm{in}}_{k',n(\tau)}(cq(\tau))$.
2) There exists a queue $k' \in I_{n(\tau)}$ with $Q^{\mathrm{inter}}_{k'}(\tau) < \beta^{\mathrm{in}}_{k',n(\tau)}(cq(\tau))$.
3) There exists a queue $k' \in I_{n(\tau)}$ with $Q_{k'}(\tau) = 0$; $\beta^{\mathrm{in}}_{k',n(\tau)}(cq(\tau)) = 0$; and $1 \le Q^{\mathrm{inter}}_{k'}(\tau)$.
4) There exists a queue $k' \in I_{n(\tau)}$ with $Q_{k'}(\tau) = 0$; $\beta^{\mathrm{in}}_{k',n(\tau)}(cq(\tau)) = 0$; and $0 \le Q^{\mathrm{inter}}_{k'}(\tau) < 1$.
5) There exists a queue $k' \in I_{n(\tau)}$ with $Q_{k'}(\tau) = 0$; $\beta^{\mathrm{in}}_{k',n(\tau)}(cq(\tau)) = 1$; and $1 \le Q^{\mathrm{inter}}_{k'}(\tau)$.

The reason is as follows. If 1) does not hold, then either there exists a $k'$ such that $Q^{\mathrm{inter}}_{k'}(\tau) < \beta^{\mathrm{in}}_{k',n(\tau)}(cq(\tau))$, or there exists a $k'$ such that $Q_{k'}(\tau) = 0$ and $Q^{\mathrm{inter}}_{k'}(\tau) \ge \beta^{\mathrm{in}}_{k',n(\tau)}(cq(\tau))$. In the former scenario, we have 2). In the latter scenario, we can further partition the event based on the values of $\beta^{\mathrm{in}}_{k',n(\tau)}(cq(\tau))$ and $Q^{\mathrm{inter}}_{k'}(\tau)$, which leads to 3) to 5).

In the case of 1), $\mathsf{SA}_{n(\tau)}$ is feasible at the beginning of time $\tau$. Hence $\mathsf{SA}_{n(\tau)}$ will be activated. Since we now consider the scenario of $k \in O_{n(\tau)}$, both $Q_k(\tau)$ and $Q^{\mathrm{inter}}_k(\tau)$ increase by the same amount, $\beta^{\mathrm{out}}_{k,n(\tau)}(cq(\tau)) + \sum_{m=1}^{M} \alpha_{k,m} a_m(\tau)$. As a result, $\Delta Q(\tau+1) = \Delta Q(\tau)$. The LHS of (26) equals zero and the inequality (26) holds.

In the case of 2), the first term of the RHS of (26) is at least 1, because there exists a queue $k' \in I_{n(\tau)}$ such that $\mathbb{1}\{k' \in I_{n(\tau)}\}\, \mathbb{1}\{Q^{\mathrm{inter}}_{k'}(\tau) < \beta^{\mathrm{in}}_{k',n(\tau)}(cq(\tau))\} = 1$. Since the LHS of (26) is at most 1, the inequality (26) holds. We can observe the same relationship between 3) and the second term, 4) and the third term, and 5) and the fourth term of the RHS of (26).
Since (26) holds in all 5 cases, the proof of Claim 3 is complete. ∎

Proof of Claim 4: Notice that Claims 2 and 3 jointly give us Claim 4 immediately. ∎
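The coupling between the actual queue $Q_k$ and the intermediate queue $Q^{\mathrm{inter}}_k$ that underlies Claims 1 and 2 can be sanity-checked with a small simulation. The sketch below is a deliberately simplified single-queue model, not the paper's full 2-flow SPN: the Bernoulli schedule, arrival rate, and success probability are illustrative assumptions. The intermediate queue always executes the scheduled unit of service (resetting to 0 if it would go negative, in the spirit of update rule (13)), while the actual queue is served only when the activity is feasible and its random departure succeeds. The simulation checks the invariant of Claim 1 ($Q \ge Q^{\mathrm{inter}}$) and the bounded per-slot change of the gap used in Claim 2.

```python
import random

def simulate(T=5000, seed=1, p_arrival=0.4, p_schedule=0.6, p_success=0.7):
    """Toy single-queue model of the actual queue Q and the intermediate
    queue Q_inter.  Q_inter always applies the scheduled service;
    Q is served only when the activity is feasible AND it succeeds
    (random-departure SPN)."""
    random.seed(seed)
    Q, Q_inter = 0, 0
    for _ in range(T):
        beta = 1 if random.random() < p_schedule else 0  # scheduled service
        a = 1 if random.random() < p_arrival else 0      # external arrival
        prev_gap = Q - Q_inter
        # Intermediate queue: always serve, floor at zero (cf. rule (13)).
        Q_inter = max(Q_inter - beta, 0)
        # Actual queue: serve only if feasible (Q >= 1) and successful.
        if beta == 1 and Q >= 1 and random.random() < p_success:
            Q -= 1
        # External arrivals enter both queues identically,
        # so they never change the gap Q - Q_inter.
        Q += a
        Q_inter += a
        assert Q >= Q_inter                        # Claim 1 invariant
        assert abs((Q - Q_inter) - prev_gap) <= 1  # gap moves by at most 1
    return Q, Q_inter
```

If either assertion could fail, the run would abort; a completed run over many slots is consistent with both claims in this toy setting.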
D. Proof of Sublinearly Growing $Q^{\mathrm{inter}}_k(t)$ and $N_{\mathrm{NA},k}(t)$

In the next lemma, we will show that SCH$_{\mathrm{avg}}$ can sublinearly stabilize $Q^{\mathrm{inter}}_k(t)$ and $N_{\mathrm{NA},k}(t)$ for all $k$.

Lemma 8: Consider any rate vector $\mathbf{R}$ such that there exist $s_c \in \Lambda^\circ$ for all $c \in \mathsf{CQ}$ satisfying (8). The proposed SCH$_{\mathrm{avg}}$ can sublinearly stabilize $q_k(t)$, $q^{\mathrm{inter}}_k(t)$, $N_{\mathrm{NA},k}(t)$, and $Q^{\mathrm{inter}}_k(t)$ for all $k$.

We will prove the sublinear growth of the four quantities separately.

Proof of sublinearly growing $q_k(t)$ and $q^{\mathrm{inter}}_k(t)$: First, we provide the conventional stability definition.
Definition 3: A queue length $q(t)$ is stable if
$$\limsup_{t \to \infty} \frac{1}{t} \sum_{\tau=1}^{t} \mathbb{E}\{|q(\tau)|\} < \infty. \quad (36)$$
The network is stable if all the queues are stable.

As discussed in Section IV-B, the back-pressure vector computation (4) and the update rule (5) are based only on the expected input and output service rate matrices $\overline{B}^{\mathrm{in}}(cq(t))$ and $\overline{B}^{\mathrm{out}}(cq(t))$, which are deterministic matrices. As a result, they can be viewed as the virtual queue lengths of a deterministic SPN. The existing proof in [13] shows that the virtual queue length $q(t)$ of a deterministic SPN can be stabilized by SCH$_{\mathrm{avg}}$. As a result, SCH$_{\mathrm{avg}}$ can also stabilize the virtual queue length $q_k(t)$ for all $k$ in the given (0,1) random SPN.

Notice that given the past arrival vectors and the past and current channel quality, i.e., given $cq(t)$ and $\{\mathbf{a}, cq\}_1^{t-1}$, the quantities $q(t)$ and $x^*(t)$ are no longer random but take deterministic values; see the update rules (4) and (5). The following lemma establishes the connection between $q(t)$ and $q^{\mathrm{inter}}(t)$.

Lemma 9: $q(t)$ is the expectation of $q^{\mathrm{inter}}(t)$ conditioned on $\{\mathbf{a}, cq\}_1^{t-1}$. That is, $q(t) = \mathbb{E}\{q^{\mathrm{inter}}(t) \,|\, \{\mathbf{a}, cq\}_1^{t-1}\}$.

Proof of Lemma 9:
This lemma can be proven by induction. When $t = 1$, since $q(1) = q^{\mathrm{inter}}(1) = \mathbf{0}$, the zero vector, Lemma 9 holds automatically. Suppose Lemma 9 holds for some $t$. By comparing (5) and (9), we can see that Lemma 9 holds for $t+1$ as well. ∎

For any $k \in \{1, 2, \ldots, K\}$, we square both sides of (10) and thus have
$$\big(q^{\mathrm{inter}}_k(t+1)\big)^2 - \big(q^{\mathrm{inter}}_k(t)\big)^2 = \big(\mu_{\mathrm{out},k}(t) - \mu_{\mathrm{in},k}(t)\big)^2 - 2\, q^{\mathrm{inter}}_k(t)\, \big(\mu_{\mathrm{out},k}(t) - \mu_{\mathrm{in},k}(t)\big).$$
Similar to (11) and (12), we can define the average departure and arrival rates of queue $k$ as follows:
$$\overline{\mu}_{\mathrm{out},k}(t) = \sum_{n=1}^{N} \overline{\beta}^{\mathrm{in}}_{k,n}(cq(t))\, x^*_n(t), \qquad \overline{\mu}_{\mathrm{in},k}(t) = \sum_{m=1}^{M} \alpha_{k,m}\, a_m(t) + \sum_{n=1}^{N} \overline{\beta}^{\mathrm{out}}_{k,n}(cq(t))\, x^*_n(t). \quad (37)$$
By taking the expectation conditioned on the past and current arrival vectors and the past channel quality up to time $t$ on both sides, we have
$$\begin{aligned}
&\mathbb{E}\{(q^{\mathrm{inter}}_k(t+1))^2 \,|\, \{\mathbf{a}, cq\}_1^{t}\} - \mathbb{E}\{(q^{\mathrm{inter}}_k(t))^2 \,|\, \{\mathbf{a}, cq\}_1^{t}\} \\
&\quad= \mathbb{E}\{(\mu_{\mathrm{out},k}(t) - \mu_{\mathrm{in},k}(t))^2 \,|\, \{\mathbf{a}, cq\}_1^{t}\} - 2\, \mathbb{E}\{q^{\mathrm{inter}}_k(t)\, (\mu_{\mathrm{out},k}(t) - \mu_{\mathrm{in},k}(t)) \,|\, \{\mathbf{a}, cq\}_1^{t}\} \\
&\quad= \mathbb{E}\{(\mu_{\mathrm{out},k}(t) - \mu_{\mathrm{in},k}(t))^2 \,|\, \{\mathbf{a}, cq\}_1^{t}\} - 2\, q_k(t)\, \big(\overline{\mu}_{\mathrm{out},k}(t) - \overline{\mu}_{\mathrm{in},k}(t)\big) \quad (38) \\
&\quad\le C + 2\, |q_k(t)|\, U, \quad (39)
\end{aligned}$$
where (38) follows from the observation that $q^{\mathrm{inter}}_k(t)$ is a constant given $\{\mathbf{a}, cq\}_1^{t-1}$, and that $\overline{\mu}_{\mathrm{out},k}(t)$ and $\overline{\mu}_{\mathrm{in},k}(t)$ in (37) are the conditional expectations of $\mu_{\mathrm{out},k}(t)$ and $\mu_{\mathrm{in},k}(t)$ given $\{\mathbf{a}, cq\}_1^{t}$; and (39) follows from defining $C$ to be the upper bound of $(\mu_{\mathrm{out},k}(t) - \mu_{\mathrm{in},k}(t))^2$ and $U$ to be the upper bound of $|\overline{\mu}_{\mathrm{out},k}(t) - \overline{\mu}_{\mathrm{in},k}(t)|$. Now we take the expectation over all possible past arrival vectors and past channel quality:
$$\mathbb{E}\{(q^{\mathrm{inter}}_k(t+1))^2\} - \mathbb{E}\{(q^{\mathrm{inter}}_k(t))^2\} \le C + 2\, U\, \mathbb{E}\{|q_k(t)|\}. \quad (40)$$
Eq. (40) also holds if we replace the time index $t$ by $\tau$.
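The drift bound (40) is converted into a high-probability statement via the second-moment Markov (Chebyshev-type) inequality $\mathrm{Prob}(|X| \ge a) \le \mathbb{E}\{X^2\}/a^2$. A minimal numerical sketch of that inequality follows; the zero-mean $\pm 1/0$ increments and all parameters are illustrative assumptions standing in for $q^{\mathrm{inter}}_k(t)$, not the paper's model.

```python
import random

def empirical_chebyshev(n_paths=2000, t=400, eps=0.25, seed=7):
    """Empirical check of Prob(|X| >= a) <= E{X^2} / a^2, the bound that
    turns an O(t) second moment into Prob(|X| >= eps*t) = O(1/t)."""
    random.seed(seed)
    samples = []
    for _ in range(n_paths):
        x = 0
        for _ in range(t):
            x += random.choice((-1, 0, 1))  # zero-mean, bounded increments
        samples.append(x)
    a = eps * t
    # Empirical tail probability and empirical second-moment bound.
    tail = sum(1 for x in samples if abs(x) >= a) / n_paths
    second_moment = sum(x * x for x in samples) / n_paths
    bound = second_moment / a ** 2
    return tail, bound
```

Note that the inequality holds exactly for the empirical distribution as well, so `tail <= bound` is guaranteed for any sample, while `bound` itself shrinks like $1/t$ when the second moment grows linearly in $t$.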
By summing up (40) (with time index $\tau$) for $\tau = 1$ to $t-1$ and by noticing $q^{\mathrm{inter}}_k(1) = 0$, we have
$$\mathbb{E}\{(q^{\mathrm{inter}}_k(t))^2\} - \mathbb{E}\{(q^{\mathrm{inter}}_k(1))^2\} = \mathbb{E}\{(q^{\mathrm{inter}}_k(t))^2\} \le (t-1)\, C + 2\, U \sum_{\tau=1}^{t-1} \mathbb{E}\{|q_k(\tau)|\}.$$
Since $q_k(t)$ is stable and thus satisfies $\limsup_{t\to\infty} \frac{1}{t} \sum_{\tau=1}^{t} \mathbb{E}\{|q_k(\tau)|\} < \infty$, there exists an $L$ value such that $\frac{1}{t} \sum_{\tau=1}^{t} \mathbb{E}\{|q_k(\tau)|\} \le L$ for all possible $t$ values. We then have
$$\frac{1}{t-1}\, \mathbb{E}\{(q^{\mathrm{inter}}_k(t))^2\} \le C + 2\, U\, \frac{1}{t-1} \sum_{\tau=1}^{t-1} \mathbb{E}\{|q_k(\tau)|\} \le C + 2\, U L$$
for arbitrary $t$ values.

For any arbitrarily given $\epsilon' > 0$, we now apply the Markov inequality with the second-moment expression to derive
$$\mathrm{Prob}\big(|q^{\mathrm{inter}}_k(t)| \ge \epsilon' t\big) \le \frac{\mathbb{E}\{(q^{\mathrm{inter}}_k(t))^2\}}{(\epsilon' t)^2} \le \frac{C + 2UL}{\epsilon'^2\, t}.$$
For any arbitrarily given $\delta > 0$, let $t_0$ be the first $t$ such that $\frac{C+2UL}{\epsilon'^2 t} < \delta$. Then we have $\mathrm{Prob}(|q^{\mathrm{inter}}_k(t)| \ge \epsilon' t) < \delta$ for all $t > t_0$. Thus we have proven the sublinear growth of $q^{\mathrm{inter}}(t)$. ∎

Before we continue our proofs of sublinearly growing $N_{\mathrm{NA},k}(t)$ and $Q^{\mathrm{inter}}_k(t)$, we state the following claim first. Define the deficit $D_k$ for all $k$ as the difference between $Q^{\mathrm{inter}}_k$ and $q^{\mathrm{inter}}_k$. That is, at any time $t$,
$$D_k(t) = Q^{\mathrm{inter}}_k(t) - q^{\mathrm{inter}}_k(t), \quad \forall k. \quad (41)$$
Claim 5:
For all $k$, the function $D_k(t)$ is non-decreasing and it grows sublinearly.

The proof of Claim 5 is relegated to Appendix G. We now continue our proofs.

Proof of sublinearly growing $N_{\mathrm{NA},k}(t)$: Recall the definition of the null activity at queue $k$ ($k \in I_{n(t)}$, and $Q^{\mathrm{inter}}_k(t) < \mu_{\mathrm{out},k}(t)$). In the proof of Claim 5, in particular (65), we can see that the null activity occurs at queue $k$ at time $t$ if and only if $D_k(t+1) > D_k(t)$. As a result,
$$N_{\mathrm{NA},k}(t) = \sum_{\tau=1}^{t} \mathbb{1}\{D_k(\tau+1) > D_k(\tau)\}.$$
(Recall that the constants $C$ and $U$ in (39) exist because $\mu_{\mathrm{out},k}(t)$ and $\mu_{\mathrm{in},k}(t)$ have bounded support by our definition.) Recall that $Q^{\mathrm{inter}}_k(t)$ is an integer-valued random process, and so is $\mu_{\mathrm{out},k}(t) = \sum_{n=1}^{N} \beta^{\mathrm{in}}_{k,n}(cq(t)) \cdot x^*_n(t)$. As a result, whenever $\mu_{\mathrm{out},k}(t) - Q^{\mathrm{inter}}_k(t) > 0$, we must have $\mu_{\mathrm{out},k}(t) - Q^{\mathrm{inter}}_k(t) \ge 1$. Using this observation and the fact that $D_k(t)$ is non-decreasing, we have
$$\sum_{\tau=1}^{t} \mathbb{1}\{D_k(\tau+1) > D_k(\tau)\} \le D_k(t+1).$$
The above argument implies $N_{\mathrm{NA},k}(t) \le D_k(t+1)$. Since $D_k(t)$ grows sublinearly, as proven in Claim 5, we have proven that $N_{\mathrm{NA},k}(t)$ also grows sublinearly. ∎

Proof of sublinearly growing $Q^{\mathrm{inter}}_k(t)$: By (41),
$$Q^{\mathrm{inter}}_k(t) = q^{\mathrm{inter}}_k(t) + D_k(t).$$
We have shown that both $q^{\mathrm{inter}}_k(t)$ and $D_k(t)$ grow sublinearly, and hence $Q^{\mathrm{inter}}_k(t)$ also grows sublinearly. ∎

The above discussion on $q_k(t)$, $q^{\mathrm{inter}}_k(t)$, $N_{\mathrm{NA},k}(t)$, and $Q^{\mathrm{inter}}_k(t)$ completes the proof of Lemma 8.

E. Proof of Proposition 5
To compare polytopes in Proposition 1 and Proposition 4,we first list all the linear constraints describing each regionseparately. For Proposition 4, the region can be described by(8). Following from Table II, we can explicitly write A and B as follows. To facilitate matrix labeling, we order the 7operations as [ NC1 , NC2 , DX1 , DX2 , PM , RC , CX ] , and orderthe 5 queues as h Q ∅ , Q ∅ , Q { } , Q { } , Q mix i . Let ~p [ c ] △ = ~p ( c ) for all c ∈ CQ be the probability vector which represents thereception status probabilities when the channel quality is c .Given the above definitions, we can write A , B in , B out , andthe average service vector, s c , under channel quality c for any c ∈ CQ as A = (cid:20) (cid:21) T , B in ( c )= p [ c ] d ∨ d p [ c ] d ∨ d p [ c ] d ∨ d p [ c ] d ∨ d p [ c ] d p [ c ] d p [ c ] d p [ c ] d p [ c ] d ∨ d , B out ( c ) = p [ c ] d d p [ c ] d d p [ c ] d d p [ c ] d d
00 0 0 0 p [ c ] d ∨ d , s c = h x [ c ] NC1 x [ c ] NC2 x [ c ] DX1 x [ c ] DX2 x [ c ] PM x [ c ] RC x [ c ] CX i T . As a result, the throughput region in Proposition 4 can beexpressed by a collection of 5+1 linear (in)equalities, wherethe first 5 equalities correspond to the flow-conservation lawof queues 1 to 5 and the 6-th inequalities follows from s c being drawn from the convex hull Λ : That is, X ∀ c ∈ CQ f c (cid:16) x [ c ] NC1 + x [ c ] PM (cid:17) p [ c ] d ∨ d = R , (42) X ∀ c ∈ CQ f c (cid:16) x [ c ] NC2 + x [ c ] PM (cid:17) p [ c ] d ∨ d = R , (43) X ∀ c ∈ CQ f c (cid:16) x [ c ] CX + x [ c ] DX1 (cid:17) p [ c ] d = X ∀ c ∈ CQ f c (cid:16) x [ c ] NC1 + x [ c ] RC (cid:17) p [ c ] d d , (44) X ∀ c ∈ CQ f c (cid:16) x [ c ] CX + x [ c ] DX2 (cid:17) p [ c ] d = X ∀ c ∈ CQ f c (cid:16) x [ c ] NC2 + x [ c ] RC (cid:17) p [ c ] d d , (45) X ∀ c ∈ CQ f c x [ c ] RC p [ c ] d ∨ d = X ∀ c ∈ CQ f c x [ c ] PM p [ c ] d ∨ d , (46) x [ c ] NC1 + x [ c ] NC2 + x [ c ] DX1 + x [ c ] DX2 + x [ c ] PM + x [ c ] RC + x [ c ] CX ≤ , ∀ c ∈ CQ . 
(47)On the other hand, by Lemma 8 of [15], the polytype inProposition 1 can also be expressed by another collection oflinear (in)equalities: x [ c ]0 + x [ c ]9 + x [ c ]18 + x [ c ]27 + x [ c ]31 + x [ c ]63 + x [ c ]95 ≤ , ∀ c ∈ CQ , (48) y = X ∀ c ∈ CQ f c (cid:16) x [ c ]0 + x [ c ]9 + x [ c ]18 + x [ c ]27 + x [ c ]31 + x [ c ]63 (cid:17) p [ c ] d , (49) y = X ∀ c ∈ CQ f c (cid:16) x [ c ]0 + x [ c ]9 + x [ c ]18 + x [ c ]27 + x [ c ]31 + x [ c ]95 (cid:17) p [ c ] d , (50) y = R + X ∀ c ∈ CQ f c (cid:16) x [ c ]0 + x [ c ]9 (cid:17) p [ c ] d , (51) y = R + X ∀ c ∈ CQ f c (cid:16) x [ c ]0 + x [ c ]18 + x [ c ]27 (cid:17) p [ c ] d , (52) y = X ∀ c ∈ CQ f c (cid:16) x [ c ]0 + x [ c ]9 + x [ c ]18 (cid:17) p [ c ] d ∨ d , (53) y = R + X ∀ c ∈ CQ f c (cid:16) x [ c ]0 + x [ c ]9 (cid:17) p [ c ] d ∨ d , (54) y = R + X ∀ c ∈ CQ f c (cid:16) x [ c ]0 + x [ c ]18 (cid:17) p [ c ] d ∨ d , (55)and y = y ; y = y ; (56) y = y = y = R + R . (57)To prove that the dynamic-arrival stability region in (42)-(47) matches the block-coding capacity in (48)–(57), we needto prove that for any ( R , R ) and the accompanying x [ c ]( · ) and y ( · ) variables satisfying (48) to (57), we can always findout another set of s c = [ x [ c ] NC1 , x [ c ] NC2 , x [ c ] DX1 , x [ c ] DX2 , x [ c ] PM , x [ c ] RC , x [ c ] CX ] variables such that ( R , R ) and s c jointly satisfying (42) to(47). To do so, we will verify that the following one-to-one mapping x [ c ]( · ) satisfies (42) to (47). x [ c ] NC1 = x [ c ]18 , x [ c ] NC2 = x [ c ]9 , x [ c ] DX1 = x [ c ]63 , x [ c ] DX2 = x [ c ]95 ,x [ c ] PM = x [ c ]0 , x [ c ] RC = x [ c ]27 , x [ c ] CX = x [ c ]31 . (58)Ineq. (47) is true as a direct result of (48). We now provethat (42) holds. By (57), we have y = R + R . By (55), wethen have y = R + X ∀ c ∈ CQ f c (cid:16) x [ c ] PM + x [ c ] NC1 (cid:17) p [ c ] d ∨ d = R + R ⇒ X ∀ c ∈ CQ f c (cid:16) x [ c ] PM + x [ c ] NC1 (cid:17) p [ c ] d ∨ d = R , which implies (42). 
(43) can be proven by symmetric argu-ments. Next we check (46). Again by the fact that y = R + R , we have y = X ∀ c ∈ CQ f c (cid:16) x [ c ] PM + x [ c ] NC1 + x [ c ] NC2 + x [ c ] RC (cid:17) p [ c ] d ∨ d = R + R ⇒ X ∀ c ∈ CQ f c (cid:16) x [ c ] NC1 + x [ c ] RC (cid:17) p [ c ] d ∨ d = R , (59)where (59) follows from substituting (43) into y = R + R .Combining (59) with (42), we have R = X ∀ c ∈ CQ f c (cid:16) x [ c ] NC1 + x [ c ] RC (cid:17) p [ c ] d ∨ d = X ∀ c ∈ CQ f c (cid:16) x [ c ] NC1 + x [ c ] PM (cid:17) p [ c ] d ∨ d ⇒ X ∀ c ∈ CQ f c x [ c ] RC p [ c ] d ∨ d = X ∀ c ∈ CQ f c x [ c ] PM p [ c ] d ∨ d , which implies (46). Finally we check (44) and (45). By (56),we have y = y ⇒ X ∀ c ∈ CQ f c (cid:16) x [ c ] PM + x [ c ] NC1 + x [ c ] NC2 + x [ c ] RC + x [ c ] CX + x [ c ] DX1 (cid:17) p [ c ] d = R + X ∀ c ∈ CQ f c (cid:16) x [ c ] PM + x [ c ] NC2 (cid:17) p [ c ] d ⇒ X ∀ c ∈ CQ f c (cid:16) x [ c ] NC1 + x [ c ] RC + x [ c ] CX + x [ c ] DX1 (cid:17) p [ c ] d = R . (60)Combining (60) and (59), we have R = X ∀ c ∈ CQ f c (cid:16) x [ c ] NC1 + x [ c ] RC + x [ c ] CX + x [ c ] DX1 (cid:17) p [ c ] d = X ∀ c ∈ CQ f c (cid:16) x [ c ] NC1 + x [ c ] RC (cid:17) p [ c ] d ∨ d . (61)Following from the fact that p [ c ] d ∨ d = p [ c ] d + p [ c ] d d , we canrewrite (61) as X ∀ c ∈ CQ f c (cid:16) x [ c ] CX + x [ c ] DX1 (cid:17) p [ c ] d = X ∀ c ∈ CQ f c (cid:16) x [ c ] NC1 + x [ c ] RC (cid:17) p [ c ] d d , which implies(44). (45) can be derived by symmetric argu-ments. Thus we complete the proof of Proposition 5. F. Lemma 10
We use $\mathcal{P}$ to denote a finite collection of probability distributions, where each distribution has zero mean and finite support. For simplicity, we write $\mathcal{P} = \{P_1, P_2, \cdots, P_K\}$, where $K = |\mathcal{P}|$.

Lemma 10: There exists a fixed constant $C > 0$ such that for any arbitrary $K$ non-negative integers $L_1, L_2, \ldots, L_K$, the following inequality always holds:
$$\mathrm{Prob}\Big(\sum_{k=1}^{K} \sum_{i=1}^{L_k} X^{(k)}_i \ge 0\Big) > C, \quad (62)$$
where for any $k$, the random variables $X^{(k)}_i \sim P_k$ are i.i.d. over different $i$ values, and the random processes $\{X^{(k)}_i : \forall i\}$ are independently distributed for different $k$ values.

Proof:
We prove this lemma by induction on the size of $\mathcal{P}$. When $K = |\mathcal{P}| = 1$, the probability of interest becomes $\mathrm{Prob}(\sum_{i=1}^{L} X_i \ge 0)$, where we drop the index $k$ for simplicity. By the central limit theorem, there exists an $l_0$ such that when $L > l_0$, the probability of interest is $> 1/4$ (which can be made arbitrarily close to $1/2$, but we choose $1/4$ for simplicity). Choose
$$C = \min\Big(\min\Big\{\mathrm{Prob}\Big(\sum_{i=1}^{L} X_i \ge 0\Big) : L \le l_0\Big\},\ 1/4\Big).$$
We claim that such a $C$ value is strictly positive. The reason is that $\min\{\mathrm{Prob}(\sum_{i=1}^{L} X_i \ge 0) : L \le l_0\} \neq 0$ because of the assumptions that the $X_i$ are zero-mean and i.i.d. From the above construction we have
$$\mathrm{Prob}\Big(\sum_{i=1}^{L} X_i \ge 0\Big) > C, \quad \forall L. \quad (63)$$
We now consider the case of $K = |\mathcal{P}| \ge 2$. For any arbitrarily given $L_1$ to $L_K$, the probability of interest satisfies
$$\mathrm{Prob}\Big(\sum_{k=1}^{K} \sum_{i=1}^{L_k} X^{(k)}_i \ge 0\Big) \ge \mathrm{Prob}\Big(\sum_{i=1}^{L_k} X^{(k)}_i \ge 0,\ \forall k\Big) = \prod_{k=1}^{K} \mathrm{Prob}\Big(\sum_{i=1}^{L_k} X^{(k)}_i \ge 0\Big). \quad (64)$$
We have shown that for each $k$ there exists a constant $C_k > 0$ such that $\mathrm{Prob}(\sum_{i=1}^{L_k} X^{(k)}_i \ge 0) > C_k$ for any arbitrary $L_k$. Hence the product in (64) is larger than $C \triangleq \prod_{k=1}^{K} C_k$ for any arbitrary $L_1$ to $L_K$. Lemma 10 is thus proven. ∎

G. Proof of Claim 5
For all $k$, the reason why $D_k(t)$ is non-decreasing is that
$$\begin{aligned}
D_k(t+1) &= Q^{\mathrm{inter}}_k(t+1) - q^{\mathrm{inter}}_k(t+1) \\
&= \big(Q^{\mathrm{inter}}_k(t) - \mu_{\mathrm{out},k}(t)\big)^+ - \big(q^{\mathrm{inter}}_k(t) - \mu_{\mathrm{out},k}(t)\big) \\
&= Q^{\mathrm{inter}}_k(t) - \mu_{\mathrm{out},k}(t) + \big(\mu_{\mathrm{out},k}(t) - Q^{\mathrm{inter}}_k(t)\big)^+ - \big(q^{\mathrm{inter}}_k(t) - \mu_{\mathrm{out},k}(t)\big) \\
&= D_k(t) + \big(\mu_{\mathrm{out},k}(t) - Q^{\mathrm{inter}}_k(t)\big)^+. \quad (65)
\end{aligned}$$
We now prove that $D_k(t)$ grows sublinearly for all $k$. Define $p_k(t) \triangleq -q^{\mathrm{inter}}_k(t-1) + \mu_{\mathrm{out},k}(t-1)$ for all $t \ge 2$, and $p_k(1) = -q^{\mathrm{inter}}_k(1) = 0$. Notice that $p_k(t)$ grows sublinearly because $q^{\mathrm{inter}}_k(t)$ grows sublinearly and $\mu_{\mathrm{out},k}(t-1)$ is bounded. We notice that $D_k(t)$ is the running maximum of $p_k(t)$, since by (65),
$$\begin{aligned}
D_k(t) &= D_k(t-1) + \big(\mu_{\mathrm{out},k}(t-1) - Q^{\mathrm{inter}}_k(t-1)\big)^+ \\
&= D_k(t-1) + \max\{0,\ \mu_{\mathrm{out},k}(t-1) - Q^{\mathrm{inter}}_k(t-1)\} \\
&= \max\{D_k(t-1),\ -q^{\mathrm{inter}}_k(t-1) + \mu_{\mathrm{out},k}(t-1)\} \quad (66) \\
&= \max\{D_k(t-1),\ p_k(t)\} = \max_{1 \le \tau \le t} p_k(\tau),
\end{aligned}$$
where (66) follows from (41).

Recall that $\overline{\mu}_{\mathrm{out},k}(t)$ and $\overline{\mu}_{\mathrm{in},k}(t)$ are the expectations of $\mu_{\mathrm{out},k}(t)$ and $\mu_{\mathrm{in},k}(t)$, respectively, conditioned on the arrival vectors and the channel quality until time $t$. By (5), we can update $q_k(t)$ as
$$q_k(t+1) = q_k(t) - \overline{\mu}_{\mathrm{out},k}(t) + \overline{\mu}_{\mathrm{in},k}(t), \quad \forall k. \quad (67)$$
Define $\overline{p}_k(t) \triangleq -q_k(t-1) + \overline{\mu}_{\mathrm{out},k}(t-1) = -q_k(t) + \overline{\mu}_{\mathrm{in},k}(t-1)$. That is, $\overline{p}_k(t)$ is the conditional expectation of $p_k(t)$ given $\{\mathbf{a}, cq\}_1^{t-1}$. Define $p'_k(t) \triangleq p_k(t) - \overline{p}_k(t)$; that is, $p'_k(t)$ is the difference between the random variable $p_k(t)$ and its conditional expectation $\overline{p}_k(t)$.

Thus far, we have decomposed $p_k(t) = \overline{p}_k(t) + p'_k(t)$ as the summation of the average term $\overline{p}_k(t)$ and the random-variation term $p'_k(t)$, where the latter has zero mean. We now define $D'_k(t)$ to be the running maximum of $p'_k(t)$, and $\overline{D}_k(t)$ to be the running maximum of $\overline{p}_k(t)$. That is,
$$D'_k(t) = \max_{1 \le \tau \le t} p'_k(\tau), \qquad \overline{D}_k(t) = \max_{1 \le \tau \le t} \overline{p}_k(\tau).$$
In the following, we will prove: Step 1: $\overline{p}_k(t)$ is stable and $p'_k(t)$ grows sublinearly; Step 2: $D'_k(t)$ grows sublinearly; and Step 3: $\overline{D}_k(t)$ grows sublinearly. Note that by definition, we always have $0 \le D_k(t) \le D'_k(t) + \overline{D}_k(t)$. As a result, Steps 2 and 3 imply that $D_k(t)$ also grows sublinearly, and the proof is complete.

Step 1: $\overline{p}_k(t)$ is stable because $q_k(t)$ is stable and $\overline{\mu}_{\mathrm{in},k}(t-1)$ is bounded. Furthermore, $p'_k(t)$ grows sublinearly because the summation/difference of one stable queue and one sublinearly stable queue is sublinearly stable. (One can easily verify that, with a bounded initial value, stability implies sublinear stability.) The proof of Step 1 is complete. ∎

Step 2: We now show that $D'_k(t)$ grows sublinearly. Recall that $p'_k(t)$ is the random-variation term with mean zero, and $D'_k(t)$ is the running maximum of that random variation. As a result, in essence, $D'_k(t)$ is similar to the running maximum of a random walk with zero drift. The following proof is adapted from the standard proof that the running maximum of a zero-drift random walk grows sublinearly [Chapter 4, [20]]. Let $T'_k(b) \triangleq \min\{t \ge 1 : p'_k(t) \ge b\}$ be the hitting time of $p'_k(t)$ exceeding the threshold $b$.

Claim 6:
There exists $C > 0$ such that for all $t \ge 1$, all $b > 0$, and all possible past arrival vector realizations and channel quality realizations $\{\mathbf{a}, cq\}_1^{t-1}$, we have
$$\mathrm{Prob}\big(p'_k(t) \ge b \,\big|\, T'_k(b) \le t,\ \{\mathbf{a}, cq\}_1^{t-1}\big) > C. \quad (68)$$
Proof of Claim 6:
Let $\Delta\mu_{\mathrm{in},k}(t) \triangleq \mu_{\mathrm{in},k}(t) - \overline{\mu}_{\mathrm{in},k}(t)$, $\Delta\mu_{\mathrm{out},k}(t) \triangleq \mu_{\mathrm{out},k}(t) - \overline{\mu}_{\mathrm{out},k}(t)$, and $\Delta\mu_k(t) \triangleq \Delta\mu_{\mathrm{in},k}(t) - \Delta\mu_{\mathrm{out},k}(t)$. By (67) and (10),
$$q^{\mathrm{inter}}_k(t) = \sum_{\tau=1}^{t-1} \big(\mu_{\mathrm{in},k}(\tau) - \mu_{\mathrm{out},k}(\tau)\big), \qquad q_k(t) = \sum_{\tau=1}^{t-1} \big(\overline{\mu}_{\mathrm{in},k}(\tau) - \overline{\mu}_{\mathrm{out},k}(\tau)\big),$$
and
$$q^{\mathrm{inter}}_k(t) - q_k(t) = \sum_{\tau=1}^{t-1} \Delta\mu_k(\tau).$$
Then by the definitions of $p_k(t)$, $\overline{p}_k(t)$, and $p'_k(t)$, we have
$$p_k(t) = -\sum_{\tau=1}^{t-2} \big(\mu_{\mathrm{in},k}(\tau) - \mu_{\mathrm{out},k}(\tau)\big) + \mu_{\mathrm{out},k}(t-1), \qquad \overline{p}_k(t) = -\sum_{\tau=1}^{t-2} \big(\overline{\mu}_{\mathrm{in},k}(\tau) - \overline{\mu}_{\mathrm{out},k}(\tau)\big) + \overline{\mu}_{\mathrm{out},k}(t-1),$$
$$p'_k(t) = -\sum_{\tau=1}^{t-2} \Delta\mu_k(\tau) + \Delta\mu_{\mathrm{out},k}(t-1). \quad (69)$$
By (69) we have
$$\begin{aligned}
p'_k(t) - p'_k(T'_k(b)) &= \Big(-\sum_{\tau=1}^{t-2} \Delta\mu_k(\tau) + \Delta\mu_{\mathrm{out},k}(t-1)\Big) - \Big(-\sum_{\tau=1}^{T'_k(b)-2} \Delta\mu_k(\tau) + \Delta\mu_{\mathrm{out},k}(T'_k(b)-1)\Big) \\
&= -\sum_{\tau=T'_k(b)-1}^{t-2} \Delta\mu_k(\tau) + \Delta\mu_{\mathrm{out},k}(t-1) - \Delta\mu_{\mathrm{out},k}(T'_k(b)-1) \\
&= \Delta\hat{\mu}_k(T'_k(b)-1) - \sum_{\tau=T'_k(b)}^{t-2} \Delta\mu_k(\tau) + \Delta\mu_{\mathrm{out},k}(t-1),
\end{aligned}$$
where $\Delta\hat{\mu}_k(T'_k(b)-1) \triangleq -\Delta\mu_{\mathrm{in},k}(T'_k(b)-1)$ absorbs the two boundary terms at time $T'_k(b)-1$. Since $p'_k(T'_k(b)) \ge b$ by the definition of the hitting time, we thus have
$$\mathrm{Prob}\big(p'_k(t) \ge b \,\big|\, T'_k(b) \le t,\ \{\mathbf{a}, cq\}_1^{t-1}\big) \ge \mathrm{Prob}\Big(\Delta\hat{\mu}_k(T'_k(b)-1) - \sum_{\tau=T'_k(b)}^{t-2} \Delta\mu_k(\tau) + \Delta\mu_{\mathrm{out},k}(t-1) \ge 0 \,\Big|\, T'_k(b) \le t,\ \{\mathbf{a}, cq\}_1^{t-1}\Big). \quad (70)$$
We now notice that on the RHS of (70) there are $(t - T'_k(b) + 1)$ summands inside the probability expression, one for each $\tau \in [T'_k(b)-1, t-1]$. One can easily verify that, conditioning on the past arrival vectors and past channel quality $\{\mathbf{a}, cq\}_1^{t-1}$, each summand is independently distributed. The reason is that when conditioning on $\{\mathbf{a}, cq\}_1^{t-1}$, both the virtual queue length vector $q(\tau)$ and the back-pressure scheduler become deterministic for all $\tau = 1$ to $t$; see (2), (4), and (5). As a result, the randomness of each summand depends only on the realizations of $\beta^{\mathrm{in}}_{k,n}(\tau)$ and $\beta^{\mathrm{out}}_{k,n}(\tau)$, and these are independently distributed in our SPN model. Moreover, each summand is also of zero mean and bounded support. The reason is that the definitions of $\Delta\mu_{\mathrm{in},k}(\tau)$, $\Delta\mu_{\mathrm{out},k}(\tau)$, and $\Delta\mu_k(\tau)$ ensure that these random variables are of zero mean. Also, since $B^{\mathrm{in}}(\tau)$ and $B^{\mathrm{out}}(\tau)$ are of bounded support, so are $\Delta\mu_{\mathrm{in},k}(\tau)$, $\Delta\mu_{\mathrm{out},k}(\tau)$, and $\Delta\mu_k(\tau)$.

Obviously, the conditional distribution of each of the $(t - T'_k(b) + 1)$ summands given $\{\mathbf{a}, cq\}_1^{t-1}$ depends on the values of $T'_k(b)$ and $t$ and on the realization $\{\mathbf{a}, cq\}_1^{t-1}$. However, we further argue that there is a bounded number of distributions, denoted by $\mathcal{P}$, and each conditional distribution must be a distribution $P \in \mathcal{P}$, regardless of the values of $t$ and $T'_k(b)$ and the realization $\{\mathbf{a}, cq\}_1^{t-1}$. Namely, even though there are infinitely many combinations of $t$, $T'_k(b)$, and realizations $\{\mathbf{a}, cq\}_1^{t-1}$, the number of possible distributions for all the summands is bounded. The reason is that the distributions of $\Delta\mu_{\mathrm{in},k}(\tau)$, $\Delta\mu_{\mathrm{out},k}(\tau)$, and $\Delta\mu_k(\tau)$ depend only on the actual schedule at time $\tau$. Since there is only a bounded number of possible scheduling decisions, the number of possible distributions for all the summands is bounded. By Lemma 10 in Appendix F, there exists a $C > 0$ such that
$$(70) > C \quad (71)$$
for all $t$ and all possible past arrival vector realizations and channel quality realizations $\{\mathbf{a}, cq\}_1^{t-1}$. The proof of Claim 6 is complete. ∎

Notice that by Claim 6, there exists $C$ such that for all possible past arrival vector and channel quality realizations,
$$\mathrm{Prob}\big(p'_k(t) \ge b \,\big|\, T'_k(b) \le t,\ \{\mathbf{a}, cq\}_1^{t-1}\big) = \frac{\mathrm{Prob}\big(p'_k(t) \ge b,\ T'_k(b) \le t \,\big|\, \{\mathbf{a}, cq\}_1^{t-1}\big)}{\mathrm{Prob}\big(T'_k(b) \le t \,\big|\, \{\mathbf{a}, cq\}_1^{t-1}\big)} = \frac{\mathrm{Prob}\big(p'_k(t) \ge b \,\big|\, \{\mathbf{a}, cq\}_1^{t-1}\big)}{\mathrm{Prob}\big(T'_k(b) \le t \,\big|\, \{\mathbf{a}, cq\}_1^{t-1}\big)} > C, \quad (72)$$
where the second equality holds because the event $\{p'_k(t) \ge b\}$ implies $\{T'_k(b) \le t\}$. Meanwhile, since $D'_k(t)$ is the running maximum of $p'_k(t)$, we have
$$\mathrm{Prob}\big(D'_k(t) \ge b \,\big|\, \{\mathbf{a}, cq\}_1^{t-1}\big) = \mathrm{Prob}\big(T'_k(b) \le t \,\big|\, \{\mathbf{a}, cq\}_1^{t-1}\big) < \frac{1}{C}\, \mathrm{Prob}\big(p'_k(t) \ge b \,\big|\, \{\mathbf{a}, cq\}_1^{t-1}\big). \quad (73)$$
Taking the expectation of both sides over all possible past arrival vectors and past channel quality, we have
$$\mathrm{Prob}\big(D'_k(t) \ge b\big) < \frac{1}{C}\, \mathrm{Prob}\big(p'_k(t) \ge b\big). \quad (74)$$
Substituting $b$ by $\epsilon t$ in the above equation and using the fact that $p'_k(t)$ grows sublinearly, we have proven that $D'_k(t)$ grows sublinearly. The proof of Step 2 is complete. ∎

Step 3: We now prove the following claim.
Claim 7:
The following two inequalities are true for all possible realizations.

1) D_k²(t+1) − D_k²(t) ≤ max{p_k²(t+1) − p_k²(t), 0} + U², where U is the supremum over all possible |µ_out,k(t) − µ_in,k(t−1)|. Note that U always exists since the random external arrivals and the random movements of the packets all have bounded support, and µ_in,k(t) and µ_out,k(t) are computed from the expected values of the random packet arrivals and departures.

2) max{p_k²(t+1) − p_k²(t), 0} + U² ≤ 2|p_k(t)|U + 2U².

Proof of Claim 7:
We first prove 1). There are three possible cases.

Case 1: D_k(t) ≥ p_k(t+1). Since D_k(t) is the running maximum of p_k(t), we have D_k(t+1) = D_k(t) in this case. Thus the left-hand side of 1) is zero and the inequality holds.

Case 2: D_k(t) < p_k(t+1) and p_k(t) ≥ 0. By the definition of D_k(t), we have D_k(t+1) = p_k(t+1). Also, since D_k(t) is the running maximum of p_k(t), we have 0 ≤ p_k(t) ≤ D_k(t), which implies (p_k(t))² ≤ (D_k(t))². Jointly, we thus have D_k²(t+1) − D_k²(t) ≤ p_k²(t+1) − p_k²(t) ≤ max{p_k²(t+1) − p_k²(t), 0} + U².

Case 3: D_k(t) < p_k(t+1) and p_k(t) < 0. By the definition of U and by (69), we have p_k(t+1) ≤ p_k(t) + U, which, together with the inequality D_k(t) < p_k(t+1) and the fact that D_k(t) is the running maximum of p_k(t), implies D_k(t) − U ≤ p_k(t) ≤ D_k(t). Since D_k(t) is always no less than zero, we thus have −U ≤ p_k(t) < 0, which in turn implies U² − p_k²(t) ≥ 0. Since D_k(t+1) = p_k(t+1), we now have D_k²(t+1) − D_k²(t) ≤ p_k²(t+1) ≤ max{p_k²(t+1) − p_k²(t), 0} + U². The proof of 1) is complete.

We now prove 2). Define ∆p_k(t+1) ≜ p_k(t+1) − p_k(t). Then

max{p_k²(t+1) − p_k²(t), 0} + U²
= max{(p_k(t) + ∆p_k(t+1))² − p_k²(t), 0} + U²
= max{2 p_k(t) ∆p_k(t+1) + (∆p_k(t+1))², 0} + U²
≤ |2 p_k(t) ∆p_k(t+1)| + (∆p_k(t+1))² + U²
≤ 2|p_k(t)|U + 2U²,

where the last inequality follows from rewriting µ_in,k(τ) and µ_out,k(τ) based on (69) and from the definition of U. ∎

Following from Claim 7 and taking the expectation of both sides over all possible arrival vectors,

E{D_k²(t+1)} − E{D_k²(t)} ≤ 2 E{|p_k(t)|} U + 2U².
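The two inequalities of Claim 7 are deterministic and can be sanity-checked along arbitrary bounded-increment sample paths. A sketch, where `p_prev`/`p_next` play the role of p_k(t)/p_k(t+1), `D` is the running maximum floored at zero, and the uniform random walk is only an illustrative stand-in for the actual SPN dynamics:

```python
import random

random.seed(3)
U = 1.0  # bound on the one-step change |p(t+1) - p(t)|

for _ in range(500):
    p_prev, D_prev = 0.0, 0.0
    for _ in range(100):
        p_next = p_prev + random.uniform(-U, U)   # bounded increment
        D_next = max(D_prev, p_next)              # running maximum, D >= 0

        lhs = D_next**2 - D_prev**2
        mid = max(p_next**2 - p_prev**2, 0.0) + U**2
        rhs = 2 * abs(p_prev) * U + 2 * U**2

        assert lhs <= mid + 1e-9   # inequality 1) of Claim 7
        assert mid <= rhs + 1e-9   # inequality 2) of Claim 7

        p_prev, D_prev = p_next, D_next
```

The check exercises all three cases of the proof of 1) (running maximum unchanged, positive p_k, negative p_k), since the random path visits each regime.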
Replacing the time index t by τ and then summing up the above inequality from τ = 1 to τ = t − 1, we then have

E{D_k²(t)} ≤ 2U Σ_{τ=1}^{t−1} E{|p_k(τ)|} + 2U²(t − 1)
⇒ (1/t) E{D_k²(t)} ≤ 2U · (1/t) Σ_{τ=1}^{t−1} E{|p_k(τ)|} + 2U².

The fact that p_k(t) is stable implies that there exists an L such that (1/t) Σ_{τ=1}^{t−1} E{|p_k(τ)|} ≤ L for all t. For any ε′ > 0 and δ > 0, we then apply the Markov inequality:

Prob(D_k(t) > ε′t) = Prob(D_k²(t) > (ε′)²t²) ≤ (1/((ε′)²t²)) E{D_k²(t)} ≤ (1/((ε′)²t)) (2UL + 2U²).

Let t₀ be the smallest t such that (1/((ε′)²t)) (2UL + 2U²) < δ. Then Prob(D_k(t) > ε′t) < δ for all t > t₀, which completes the proof of Step 3. ∎

REFERENCES

[1] S.-Y. Li, R. Yeung, and N. Cai, "Linear network coding,"
IEEE Trans. Inf. Theory, vol. 49, no. 2, pp. 371–381, Feb. 2003.
[2] T. Ho and H. Viswanathan, "Dynamic algorithms for multicast with intra-session network coding," IEEE Trans. Inf. Theory, vol. 55, no. 2, pp. 797–815, Feb. 2009.
[3] S. Katti, H. Rahul, W. Hu, D. Katabi, M. Médard, and J. Crowcroft, "XORs in the air: Practical wireless network coding," in Proc. ACM SIGCOMM, 2006.
[4] Y. Sagduyu, L. Georgiadis, L. Tassiulas, and A. Ephremides, "Capacity and stable throughput regions for the broadcast erasure channel with feedback: An unusual union," IEEE Trans. Inf. Theory, vol. 59, no. 5, pp. 2841–2862, 2013.
[5] C.-C. Wang, "On the capacity of 1-to-K broadcast packet erasure channels with channel output feedback," IEEE Trans. Inf. Theory, vol. 58, no. 2, pp. 931–956, Feb. 2012.
[6] ——, "On the capacity of wireless 1-hop intersession network coding — a broadcast packet erasure channel approach," IEEE Trans. Inf. Theory, vol. 58, no. 2, pp. 957–988, Feb. 2012.
[7] A. Eryilmaz, D. Lun, and B. Swapna, "Control of multi-hop communication networks for inter-session network coding," IEEE Trans. Inf. Theory, vol. 57, no. 2, pp. 1092–1110, Feb. 2011.
[8] G. Paschos, L. Georgiadis, and L. Tassiulas, "Scheduling with pairwise XORing of packets under statistical overhearing information and feedback," Queueing Systems, vol. 72, no. 3–4, pp. 361–395, 2012.
[9] S. A. Athanasiadou, M. Gatzianas, L. Georgiadis, and L. Tassiulas, "Stable and capacity achieving XOR-based policies for the broadcast erasure channel with feedback," in Proc. IEEE Int. Symp. Information Theory (ISIT), 2013.
[10] L. Georgiadis and L. Tassiulas, "Broadcast erasure channel with feedback — capacity and algorithms," in Proc. 5th Workshop on Network Coding, Theory, & Applications (NetCod), Lausanne, Switzerland, June 2009, pp. 54–61.
[11] L. Tassiulas and A. Ephremides, "Stability properties of constrained queueing systems and scheduling policies for maximum throughput in multihop radio networks," IEEE Trans. Autom. Control, vol. 37, no. 12, pp. 1936–1948, Dec. 1992.
[12] C.-C. Wang and D. J. Love, "Linear network coding capacity region of 2-receiver MIMO broadcast packet erasure channels with feedback," in Proc. IEEE Int. Symp. Information Theory (ISIT), 2012, pp. 2062–2066.
[13] L. Jiang and J. Walrand, "Stable and utility-maximizing scheduling for stochastic processing networks," in Proc. 47th Annual Allerton Conf. Communication, Control, and Computing, 2009, pp. 1111–1119.
[14] L. Huang and M. J. Neely, "Utility optimal scheduling in processing networks," Performance Evaluation, vol. 68, no. 11, pp. 1002–1021, 2011.
[15] C.-C. Wang and J. Han, "The capacity region of 2-receiver multiple-input broadcast packet erasure channels with channel output feedback," IEEE Trans. Inf. Theory, 2014. [Online]. Available: http://dx.doi.org/10.1109/TIT.2014.2334299
[16] L. Amini, N. Jain, A. Sehgal, J. Silber, and O. Verscheure, "Adaptive control of extreme-scale stream processing systems," in Proc. 26th IEEE Int. Conf. Distributed Computing Systems (ICDCS), 2006.
[17] M. Zaharia, A. Konwinski, A. D. Joseph, R. Katz, and I. Stoica, "Improving MapReduce performance in heterogeneous environments," in Proc. 8th USENIX Conf. Operating Systems Design and Implementation (OSDI), 2008, pp. 29–42.
[18] L. Georgiadis, M. J. Neely, and L. Tassiulas, Resource Allocation and Cross-Layer Control in Wireless Networks. Now Publishers, 2006.
[19] S. Rayanchu, S. Sen, J. Wu, S. Banerjee, and S. Sengupta, "Loss-aware network coding for unicast wireless sessions: Design, implementation, and performance evaluation," in Proc. ACM SIGMETRICS, Annapolis, MD, USA, Jun. 2008.
[20] R. Durrett,