On Achieving Local View Capacity Via Maximal Independent Graph Scheduling
Vaneet Aggarwal, A. Salman Avestimehr, and Ashutosh Sabharwal
Abstract—"If we know more, we can achieve more." This adage also applies to communication networks, where more information about the network state translates into higher sum-rates. In this paper, we formalize this increase of sum-rate with increased knowledge of the network state. The knowledge of network state is measured in terms of the number of hops, h, of information available to each transmitter and is labeled as h-local view. To understand how much capacity is lost due to limited information, we propose to use the metric of normalized sum-capacity, which is the h-local view sum-capacity divided by the global-view sum-capacity. For the cases of one- and two-local view, we characterize the normalized sum-capacity for many classes of deterministic and Gaussian interference networks. In many cases, a scheduling scheme called maximal independent graph scheduling is shown to achieve normalized sum-capacity. We also show that its generalization for 1-local view, labeled coded set scheduling, achieves normalized sum-capacity in some cases where its uncoded counterpart fails to do so.

I. INTRODUCTION
A. Overview
Node mobility in wireless networks leads to constant changes in network connectivity at long time-scales and in per-link channel gains at short time-scales. The optimal rate allocation and the associated encoding and decoding rules depend on both the network connectivity and the current channel gains of all links (commonly referred to as the network state). However, in large wireless networks, acquiring full network connectivity and state information for making optimal decisions is typically infeasible. Thus, in the absence of centralized network state information, nodes have only a limited local view of the whole network. As a result, the local views of different nodes are mismatched, and each node has a potentially different snapshot of the whole network. Due to these mismatched local views, nodes' decisions about their transmission parameters (rate, power, codebook) and reception parameters (method of decoding) are inherently distributed. The key question then is how optimal distributed decisions
V. Aggarwal is with AT&T Shannon Labs, 180 Park Ave, Building 103, Florham Park, NJ 07932, USA (email: [email protected]). He was with the Department of Electrical Engineering, Princeton University, Princeton, NJ 08544, when this work was done. A. S. Avestimehr is with the School of Electrical and Computer Engineering, Cornell University, Ithaca, NY, USA (email: [email protected]). The research of A. S. Avestimehr was supported in part by NSF CAREER award 0953117. A. Sabharwal is with the Department of Electrical and Computer Engineering, Rice University, Houston, TX 77005, USA (email: [email protected]). The paper was presented in part at the Allerton Conference on Communication, Control and Computing 2009 [9], the IEEE Asilomar Conference on Signals, Systems and Computers 2009, the IEEE Conference on Information Sciences and Systems 2010, and the IEEE International Symposium on Information Theory 2010.

perform when compared to the optimal decisions that have full network state information.

We immediately acknowledge the difficulty in answering the above question. Even with full global information, where each node knows the full network connectivity and current state perfectly, the capacity of general networks is an open problem. In light of that fact, our driving question adds additional complexity to the analysis by asking nodes to rely only on their local views. To make progress, we make several simplifying assumptions in our choice of network model and the model for local view. Even in the simplified model, our analysis leads to several significant conclusions as described below.

In this paper, we limit our attention to K-user single-hop interference networks with K transmitters and K receivers. Each transmitter communicates with its receiver in a single-hop fashion but in the process can interfere with an arbitrary number of receivers. The special cases include the classic two-user interference network, the Z-network, one-to-many, many-to-one, and fully-connected interference networks.
In this paper, we will consider both the linear deterministic [10, 11] and the Gaussian models for the network.

To model the local view, we borrow the concept of hop distance from the networking literature and consider the case where each transmitter has perfect knowledge of all links within h hops from it and no knowledge of links beyond h hops. As a result, if h is less than the network diameter, a subset of transmitters will end up with mismatched knowledge about the state of the channels. Since each channel gain can range from zero to a maximum value, our formulation is similar to compound channels [1, 2] with one major difference. In the multi-terminal compound network formulations, all nodes are missing identical information about the channels in the network. In our formulation, the hop-based model of local view leads to nodes with asymmetric information about the channels in the network. Thus, to emphasize that the lack of knowledge is asymmetric, we have labeled the resulting compound channel capacity formulation as local view capacity. Finally, we assume that the nodes know the connectivity, i.e., which links can exist, but may or may not know the actual values of the channel gains on those links. In graph-theoretic parlance, the nodes are assumed to know the edges of the graph (i.e., the shape of the network) but not their weights, which represent the channel gains. This is partially motivated by the fact that the network connectivity often changes at a much slower time-scale than the channel gains.

Finally, realizing the difficulty of directly characterizing capacity (sum or the whole region), we propose to study the best guaranteed ratio of the sum-rate with local view to the sum-capacity with full global view at each node. We label this as normalized sum-capacity, α* ∈ [0, 1].

Fig. 1. Increase of normalized sum-capacity with the hops of information about the network; the horizontal axis is the hops of network knowledge, up to the network diameter.
As shown in Figure 1, our goal is to characterize the normalized sum-capacity as a function of the hops of information about the network that is available at the nodes. In many cases, it turns out that the normalized sum-capacity is easier to characterize than the actual capacity, since it involves finding the sum-capacity over a smaller range of channel-gain values.

B. Main Contributions
Our objective is to maximize the global sum-rate with mismatched local views. Nodes have to base their decisions only on their local asymmetric views, which in turn implies that their decisions are naturally distributed. One intuitive solution is for nodes to coordinate their transmissions such that the nodes beyond h hops transmit only if they cause no interference with the h-hop sub-network, and thus each connected sub-network operates as if it were a network with full global information. This is formalized through the notion of an independent graph, which is defined as a sub-graph that admits a distributed encoding and decoding scheme achieving the same sum-capacity as a scheme with full global information. We use this intuition to propose maximal independent graph scheduling, where the network is divided into sub-graphs (equivalently, sub-networks) and the sub-graphs are scheduled orthogonally over time. The sub-graphs are chosen to be maximal independent graphs, which ensures the highest spatial reuse among the users.

For one hop of information at the transmitters, maximal independent graphs are equivalent to maximal independent sets (MIS), which are the largest subsets of non-interfering transmitter-receiver pairs. Note that maximal independent set scheduling, or maximal weighted independent sets, are often the optimal schedules under traditional SINR (Signal to Interference plus Noise Ratio) based protocol models for networks [3]. Our results show that the MIS schedule is information-theoretically optimal in several cases. Hence, we provide an information-theoretic notion of optimality for the MIS scheduling algorithm in those cases.

We show that in several cases, a maximal independent graph (MIG) scheduling algorithm achieves the maximum normalized sum-rate among all distributed encoding and decoding schemes when the transmitters have no more than two hops of channel information.
The MIG schedule is shown to be optimal for most three-user bipartite interference topologies, the K-user cyclic chain, the K-user d-to-many interference network, etc.

However, we show that the MIG schedule is not optimal in general for all network topologies, and higher rates can be achieved by exploiting coding. For example, in the case of 1-local view in the 3-user cyclic chain network, we show that a coded set (CS) schedule, where the coding is performed over two scheduling time-slots, achieves a higher normalized sum-rate than pure scheduling. In CS scheduling, receivers of inactive transmitters continue listening and train themselves on the interference caused by other nodes. Then, they use this interference in a later slot to aid reliable decoding of their own codewords. For the linear deterministic interference networks of [10] with 1-local view, we also give an algorithm that achieves the normalized sum-capacity.

C. Related Work
The work on understanding the role of limited network knowledge was first initiated in [6, 7], where the authors used a message-passing abstraction of network protocols to formulate a metric of limited network view at each node in the form of the number of message rounds; each message round adds two extra hops of channel information at the transmitters. The key result was that distributed decisions can be either sum-rate optimal or arbitrarily worse than the global-information sum-capacity. This result was further strengthened for arbitrary K-user interference networks in [9], where the authors characterized all network connectivities that allow optimal distributed rate allocation with two hops of network information at each transmitter. In this paper, we take the next major step in understanding the performance of distributed decisions. We compute the capacity of distributed decisions for several network topologies with one-hop and two-hop network information at the transmitters.

The rest of the paper is organized as follows. In Section II, we give the system and network model, and provide some definitions that will be used throughout the paper. We also consider an example of a multiple access network to gain understanding. In Section III, we define maximal independent graph scheduling and derive the independent graphs in the cases when the transmitters have one or two hops of channel gain information. In Section IV, we characterize the cases where maximal independent graph scheduling is optimal. In Section V, we give an example where maximal independent graph scheduling is not optimal, and extend the achievable scheme with 1-hop knowledge at the transmitters to coded set scheduling. We also give the optimal algorithm with 1-hop knowledge at the transmitters for the linear deterministic model of [10].
Fig. 2. System-level depiction of the problem: each transmitter k encodes its message as X_k^n = e_k(m_k | N_k, SI), the signals pass through the wireless medium, and each receiver k forms the estimate \hat{m}_k = d_k(Y_k^n | N'_k, SI).
Section VI considers 3 hops of knowledge at the transmitters and Section VII concludes the paper.

II. PROBLEM FORMULATION
In this section, we first describe the system and network models. We then define the normalized sum-rate and the normalized sum-capacity, which will be used to evaluate performance with asymmetric network information at the nodes. Finally, we formalize the specific notion of local view used in this paper to model asymmetric network information.
A. System model
As shown in Figure 2, consider a wireless network with K transmitters and K receivers. Each node in the network is either a transmitter or a receiver. For each transmitter k, let the message index m_k be encoded as X_k^n using the encoding function e_k(m_k | N_k, SI), which depends on the local view, N_k, and the side information about the network, SI. Only receiver k is interested in message m_k. The message is decoded at receiver k using the decoding function d_k(Y_k^n | N'_k, SI), where N'_k is the receiver's local view and SI is the side information. A strategy is defined as the set of all encoding and decoding functions in the network, {e_k(m_k | N_k, SI), d_k(Y_k^n | N'_k, SI)}. We note that the local views at transmitter k and receiver k can be different, as will be the case in our subsequent development. The relationship between the transmit signals and the received signals is specified by the network model described in the next section.
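The strategy abstraction can be sketched as one encoder/decoder pair per user; the container below is purely illustrative (names and types are ours, not the paper's):

```python
from dataclasses import dataclass
from typing import Any, Callable, Dict

@dataclass
class UserStrategy:
    """Per-user strategy: e_k(m_k | N_k, SI) -> X_k^n and
    d_k(Y_k^n | N'_k, SI) -> estimate of m_k."""
    encode: Callable[[int, Any, Any], list]   # (message, local view N_k, SI) -> codeword
    decode: Callable[[list, Any, Any], int]   # (received Y_k^n, local view N'_k, SI) -> message

# A strategy for the whole network is one UserStrategy per user index k.
Strategy = Dict[int, UserStrategy]
```

Note that, as in the paper, the transmitter-side view N_k and the receiver-side view N'_k are passed separately, since the two can differ.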
We will consider two models for interference networks. We first use a linear deterministic model, proposed in [10] as an approximation to the Gaussian model, to gain insight, and then proceed to the Gaussian network model. Both are described as follows.
1) Linear Deterministic Model [10]:
In a linear deterministic interference network, the input of the k-th transmitter at time i can be written as X_k[i] = [X_{k1}[i] X_{k2}[i] · · · X_{kq}[i]]^T, k = 1, 2, · · · , K, such that X_{k1}[i] and X_{kq}[i] are the most and the least significant bits, respectively. The received signal of user j, j = 1, 2, · · · , K, at time i is denoted by the vector Y_j[i] = [Y_{j1}[i] Y_{j2}[i] · · · Y_{jq}[i]]^T. Associated with each transmitter k and receiver j is a non-negative integer n_{kj} that represents the gain of the channel between them. The maximum number of bits supported by any link is q = max_{k,j}(n_{kj}). The received signal Y_j[i] is given by

Y_j[i] = \sum_{k=1}^{K} S^{q − n_{kj}} X_k[i],   (1)

where the summation is in F_2^q and S^{q − n_{kj}} is a q × q shift matrix whose entries S_{m,n} are non-zero only for (m, n) = (q − n_{kj} + n, n), n = 1, 2, . . . , n_{kj}. We will also use X_k^n, Y_k^n to denote (X_k[1], · · · , X_k[n]), (Y_k[1], · · · , Y_k[n]). The network can be represented by a square matrix H whose (i, j)-th entry is H_{ij} = n_{ij}. We note that H need not be symmetric.
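The shift operation in (1) simply keeps the n_{kj} most significant bits of the input and shifts them to the bottom of the vector; a minimal pure-Python sketch of the received-signal computation (function names ours):

```python
def shift_down(bits, n_kj):
    """Apply S^(q - n_kj) to a length-q bit vector (MSB first): the
    n_kj most significant bits survive, shifted to the bottom."""
    q = len(bits)
    return [0] * (q - n_kj) + bits[:n_kj]

def received_signals(H, X):
    """Linear deterministic model, eq. (1). H[k][j] is the integer
    gain n_kj from transmitter k to receiver j; X[k] is transmitter
    k's length-q bit vector. Returns the K received vectors; the
    summation is over F_2, i.e., bit-wise XOR."""
    K, q = len(H), len(X[0])
    Y = []
    for j in range(K):
        y = [0] * q
        for k in range(K):
            shifted = shift_down(X[k], H[k][j])
            y = [(a + b) % 2 for a, b in zip(y, shifted)]
        Y.append(y)
    return Y
```

For instance, in a two-user network with q = 2, a cross gain of 1 delivers only the interferer's most significant bit, shifted into the receiver's least significant position.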
2) Gaussian Model:
In a Gaussian interference network, the input of the k-th transmitter at time i is denoted by X_k[i] ∈ C, k = 1, 2, · · · , K, and the output at the j-th receiver at time i is Y_j[i] ∈ C, j = 1, 2, · · · , K. The received signal Y_j[i], j = 1, 2, · · · , K, is given by

Y_j[i] = \sum_{k=1}^{K} h_{kj} X_k[i] + Z_j[i],   (2)

where h_{kj} ∈ C is the channel gain between transmitter k and receiver j, and the Z_j[i] are additive white complex Gaussian random variables of unit variance. Much like the deterministic case, we will use X_k^n, Y_k^n to denote (X_k[1], · · · , X_k[n]), (Y_k[1], · · · , Y_k[n]). Further, the input X_k[i] has an average power constraint of unity, i.e., E((1/n) \sum_{i=1}^{n} |X_k[i]|^2) ≤ 1, where E denotes expectation.

Like the deterministic case, we represent the network by a square matrix H whose (i, j)-th entry is H_{ij} = |h_{ij}|^2, and we can similarly define the set of network states. Thus we will use the matrix H for both the deterministic and the Gaussian models, where the usage will be clear from the context.
As we discussed earlier, at each receiver k, the desired message m_k is decoded using the decoding function d_k(Y_k^n | N'_k, SI), where N'_k is the receiver's local view of the network and SI is the side information. The corresponding probability of decoding error λ_k(n) is defined as Pr[m_k ≠ d_k(Y_k^n | N'_k, SI)]. A rate tuple (R_1, R_2, · · · , R_K) is said to be achievable if there exists a sequence of codes such that the error probabilities λ_1(n), · · · , λ_K(n) go to zero as n goes to infinity for all network states consistent with the side information. The sum-capacity is the supremum of \sum_i R_i over all possible encoding and decoding functions.

We will now define the normalized sum-rate and the normalized sum-capacity that will be used throughout the paper. These notions represent the fraction of the global-view sum-capacity that can be achieved with partial information about the network.

Definition 1.
A normalized sum-rate of α is said to be achievable for a set of network states with partial information if there exists a strategy such that the following holds. The strategy yields a sequence of codes having rate R_i at transmitter i such that the error probabilities at the receivers, λ_1(n), · · · , λ_K(n), go to zero as n goes to infinity, while satisfying

\sum_i R_i ≥ α C_sum − τ

for all sets of network states consistent with the side information, and for a constant τ that is independent of the channel gains but may depend on the side information SI. Here C_sum is the sum-capacity of the whole network with full information.

Definition 2.
The normalized sum-capacity, α*, is defined as the supremum over all achievable normalized sum-rates α. Note that α* ∈ [0, 1].

In [7], we defined the concept of universal optimality of a strategy. A universally optimal strategy is one that achieves α*(h) = 1 for a given network. Thus, universal optimality is the special case where the distributed scheme achieves the global-view sum-capacity in all network states.

D. Local View Based on Hop Distance
We assume that there is a direct link between each transmitter T_i and its intended receiver D_i. On the other hand, if the cross-link between transmitter i and receiver j does not exist, then H_{ij} ≡ 0. For the large part, we will treat the network as a weighted undirected graph, G = (V, E, W), where the transmitters and receivers are the vertices of the graph, V = {T_i, D_i}, and an edge e ∈ E exists between two nodes if they have a possibility of a non-zero channel gain. In other words, if the channel gain between two nodes is identically zero, there is no edge between them.¹ Finally, the actual channel gain n_{ij} (for the deterministic model) or h_{ij} (for the Gaussian model) is the edge weight w(e) ∈ W. The resulting bipartite graph thus has 2K vertices and no more than K² edges.

¹The model is inspired by fading channels, where the existence of a link is based on its average channel gain. On average the link gain may be above the noise floor, but its instantaneous value can be below the noise floor.

We realize that the current formulation of distributed encoding is very general and encompasses a large class of {N_k, N'_k}_k and SI. To make progress, we will focus on a special structure of local view and side information at the nodes, which is largely inspired by common characteristics of existing network protocols. We will assume that the side information at all the nodes is the network connectivity, characterized by (V, E). We identify (V, E) with the long time-scale characteristics of the network, which change slowly. However, the network state captured by the edge weights W is not part of the side information.

The local view at the nodes is defined using the metric of hop count (h). For any node, the links that are incident on the node have a hop-distance of 1. In general, the hop-distance of a link from a node is one plus the minimum number of links traversed starting from the node and terminating at that link.
An example of the hop-distances of the links from a node is shown in Figure 3. We say that there is h-local view when every transmitter knows the weights (equivalently, the channel gains) of those links that are within a distance of h hops from it, while every receiver knows the weights of only those links that are within a distance of h + 1 hops from it. This definition of h-local information is based on our prior work in [7], where we proposed a multi-round protocol abstraction to show how different nodes have different amounts of network information. In the message-passing abstraction, it was convenient to have receivers know one more hop than their corresponding transmitters, which allowed coherent decoding.

Fig. 3. The hop-distances of each link from transmitter T (the dark circle) are labeled above each link.

Thus, we will consider the side information SI to be the network connectivity, while the local information at each node is the h-local information. Each transmitter uses a codebook of rate R_i, which is a function of the network connectivity and the local channel gain information. A strategy at the transmitters achieves a normalized sum-rate of α if the achieved sum-rate is within a constant number of bits of α times the sum-capacity with global knowledge of all the channel gains in the network, for all sets of channel gains possible in the network. As h increases, the normalized sum-capacity increases. When h equals the network diameter, which is the maximum hop-distance between any link and any node, all the nodes have full network information. This is called the global view, since every node knows the complete network state, G = (V, E, W). In this setting, the normalized sum-capacity is α* = 1. When h = 0, none of the nodes knows any link weight and has to assume that all channel gains could be zero; thus, following compound channel arguments [1], α* = 0.
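The hop-distance of a link and the resulting h-local view can be computed with a breadth-first search over the connectivity graph; a sketch under the definitions above (function names ours):

```python
from collections import deque

def link_hop_distances(edges, node):
    """Hop-distance of every link (edge) from `node`: links incident
    on the node are at distance 1; in general, one plus the minimum
    number of links traversed from the node to reach the link."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    # BFS node distances from the starting node
    dist = {node: 0}
    q = deque([node])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    inf = float("inf")  # endpoints in other components stay unreachable
    return {(u, v): 1 + min(dist.get(u, inf), dist.get(v, inf))
            for u, v in edges}

def h_local_view(edges, weights, tx, h):
    """Channel gains visible to transmitter `tx` under h-local view."""
    d = link_hop_distances(edges, tx)
    return {e: w for e, w in weights.items() if d[e] <= h}
```

For example, in the Z-network, the cross-link and the direct link of T1 are at distance 1 from T1, while T2's direct link is at distance 2; with 1-local view, T2 therefore knows only its own direct gain.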
We start with a simple example to illustrate these concepts. As shown in Figure 4, we consider the K-user Gaussian multiple access network with the channel gain from the i-th transmitter to the receiver being h_i, such that |h_i| = \sqrt{SNR_i}, and the power constraint at each transmitter being unity. Note that the network diameter is two, which implies that 2-local view is equivalent to global view, implying α*(2) = 1. Thus the interesting case is that of 1-local view.

Fig. 4. Example: multiple-access network with 1-hop local information.

We show that when there is 1-local view, the normalized sum-capacity is 1/K, which can be achieved by simply scheduling one user at a time in a total of K time-slots. It can also be achieved by letting each user simultaneously send at a 1/K fraction of its direct-link capacity.

The main challenge is to show the converse. Let K > 1, as otherwise the result holds trivially. Assume that a normalized sum-rate of α = (1/K + ε) is achievable. Then, we should be able to achieve a rate tuple satisfying

R_i ≥ (1/K + ε) log(1 + SNR_i) − τ,  ∀ 1 ≤ i ≤ K.   (3)

This is because each node is unaware of the other channel gains. To achieve a normalized sum-rate larger than α, each user should send at a rate larger than a fraction α of its channel capacity up to a difference τ (otherwise, in the case when all other channel gains are zero, the achievable normalized sum-rate is smaller than α). Now, we will show that this rate tuple cannot be achieved. With the capacity bound of full information,

R_K ≤ log(1 + \sum_{i=1}^{K} SNR_i) − \sum_{i=1}^{K−1} R_i
    ≤ log(1 + \sum_{i=1}^{K} SNR_i) − (1/K + ε) \sum_{i=1}^{K−1} log(1 + SNR_i) + (K − 1)τ,

where the second inequality uses (3).
Since the K-th transmitter does not know SNR_i for 1 ≤ i ≤ K − 1,

R_K ≤ min_{SNR_i, 1 ≤ i ≤ K−1} [ log(1 + \sum_{i=1}^{K} SNR_i) − (1/K + ε) \sum_{i=1}^{K−1} log(1 + SNR_i) + (K − 1)τ ]
    ≤ (1/K) log(1 + SNR_K) − (K − 1)ε log(1 + SNR_K) + log(K) + (K − 1)τ,   (4)

where the last inequality follows by evaluating at SNR_i = SNR_K for all i and using log(1 + K·SNR_K) ≤ log(K) + log(1 + SNR_K). For the above to hold together with (3), (K − 1)ε log(1 + SNR_K) ≤ log(K) + (K − 1)τ, which cannot hold for all SNR_K with τ and ε independent of SNR_K. Thus, α* ≤ 1/K.

Since all the links are at most two hops from each transmitter, the normalized sum-capacity when each transmitter knows all the links at most two hops away from it is 1.

For the rest of the paper, we will focus on interference networks, some examples of which are defined in the next section.

Fig. 5. (a) A d-to-many interference network, and (b) a many-to-d interference network.
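The achievability half of this example is also easy to check numerically: TDMA over K slots stays within a constant τ = log2(K)/K bits of (1/K)·C_sum for every choice of channel gains, as Definition 1 requires. A small sketch (function name ours):

```python
import math

def tdma_rates(snrs):
    """K-user Gaussian MAC with 1-local view. TDMA gives user i the
    rate (1/K) log2(1 + SNR_i). Returns the achieved sum-rate and the
    full-information sum-capacity C_sum = log2(1 + sum of SNRs)."""
    K = len(snrs)
    achieved = sum(math.log2(1 + s) for s in snrs) / K
    c_sum = math.log2(1 + sum(snrs))
    return achieved, c_sum
```

The gap bound follows from prod(1 + SNR_i) ≥ 1 + sum(SNR_i), so the achieved sum-rate is at least (1/K)·C_sum − log2(K)/K regardless of the gains, i.e., α = 1/K is achievable.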
In this paper, some special interference networks will be used as examples. They are defined as follows.
Definition 3. A d-to-many interference network with K users is an interference network specified by E = ∪_{i=1}^{K} {(T_i, D_i)} ∪ ∪_{i=1}^{d} ∪_{j=1}^{K} {(T_i, D_j)}. This network has links from the first d transmitters to all the receivers.

Definition 4.
A many-to-d interference network with K users is an interference network specified by E = ∪_{i=1}^{K} {(T_i, D_i)} ∪ ∪_{i=1}^{K} ∪_{j=1}^{d} {(T_i, D_j)}. This network has links from all transmitters to the first d receivers.

Examples of a d-to-many interference network and a many-to-d interference network are depicted in Figure 5.

Definition 5.
A fully-connected interference network with K users is a many-to-K interference network with K users, which is also the same as a K-to-many interference network with K users.

Definition 6.
A chain of K users is an interference network defined by E = ∪_{i=1}^{K} {(T_i, D_i)} ∪ ∪_{i=1}^{K−1} {(T_i, D_{i+1})}. This network has links from each transmitter to the next receiver. A Z-network is a chain of two users.

Definition 7.
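The edge sets in Definitions 3, 4, and 6 can be generated programmatically, which is convenient for enumerating topologies; a sketch with edges written as (transmitter index, receiver index) pairs (function names ours):

```python
def d_to_many(K, d):
    """Definition 3: direct links plus links from the first d
    transmitters to all receivers."""
    E = {(i, i) for i in range(1, K + 1)}
    E |= {(i, j) for i in range(1, d + 1) for j in range(1, K + 1)}
    return E

def many_to_d(K, d):
    """Definition 4: direct links plus links from all transmitters
    to the first d receivers."""
    E = {(i, i) for i in range(1, K + 1)}
    E |= {(i, j) for i in range(1, K + 1) for j in range(1, d + 1)}
    return E

def chain(K):
    """Definition 6: direct links plus a link from each transmitter
    to the next receiver; chain(2) is the Z-network."""
    return {(i, i) for i in range(1, K + 1)} | \
           {(i, i + 1) for i in range(1, K)}
```

As a consistency check, d_to_many(K, K) and many_to_d(K, K) coincide, matching Definition 5 of the fully-connected network.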
A cyclic chain of K users is an interference network defined by E = ∪_{i=1}^{K} {(T_i, D_i)} ∪ ∪_{i=1}^{K−1} {(T_i, D_{i+1})} ∪ {(T_K, D_1)}. This network is similar to the K-user chain of Definition 6 except that the last transmitter interferes with the first receiver, thereby making the network a circular chain.

III. SUBGRAPH SCHEDULING
In this section, we present a scheduling-based scheme that uses the partial information at every node. The main idea is to divide the network into smaller disjoint sub-networks, each of which can operate optimally in the sense that the normalized sum-rate satisfies α*(h) = 1 for each sub-network. The choice of sub-networks thus becomes important and will be addressed in the form of independent sub-graphs, as discussed below.

We will use the graph-theoretic terminology introduced in Section II-D to describe the scheduling algorithm. The graph-theoretic formulation will allow us to compare our results to existing results in the literature for the special case of single-hop local view, as discussed in Section IV. Further, the graph-theoretic formulation will facilitate parallels between our proposed scheduling method and graph concepts such as the chromatic number, again discussed in Section IV.

In Section III-A, we first describe the scheduling algorithm and derive its achievable normalized sum-rate for an arbitrary hop-view, assuming the independent graphs are known. In Section III-B, we derive the form of the independent sub-graphs for 1- and 2-local view. An example is provided in Section III-C.

A. Maximal Independent Graph Scheduling
Following standard graph-theory terminology, a subgraph A ⊆ G is a subset of the vertices and edges in G. The complement of A is A^c, such that (V, E) = A ∪ A^c. In this section, we will only consider subgraphs where a transmitter T_i and its corresponding receiver D_i are either in the subgraph together or in its complement together. We will remove this restriction on subgraphs in Section V to propose a generalization that can achieve strictly higher rates for some networks compared to the following sub-graph schedule. Note that while the graph edges are weighted with the channel gains, the edge weights will not play a role in the description of the scheduling algorithm. Hence, in our definition of subgraphs, we do not include edge weights. Since the network connectivity is known as side information to all the nodes and the schedules depend only on the connectivity, each user knows the schedule and hence when to transmit and when not to transmit.

With the above (restricted) definition of a subgraph, any strict subgraph A ⊆ G represents a valid interference network with a reduced number of transmitter-receiver pairs. For that subgraph A, the normalized sum-rate α*_A(h) can be defined, which is the ratio of the sum-capacity with h-local view to the sum-capacity with global view (h = diameter(A)) for the network A.

Armed with the above framework, we can now define Independent Graph Scheduling as follows. Let A_1, A_2, . . . , A_t be t sub-graphs (not necessarily distinct) of the network G such that for each sub-graph A_i, α*_{A_i}(h) = 1. Subgraphs for which α*_{A_i}(h) = 1 are called independent subgraphs. Since transmitter-receiver pairs are either part of A_i or A_i^c, each pair either appears in a subgraph A_i or does not appear in A_i.

Definition 8 (Independent Graph Scheduling). Independent Graph Scheduling parametrized by t independent sub-graphs A_1, A_2, . . . , A_t uses t time-slots and schedules the sub-graph A_i in time-slot i.
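As a quick illustration of Definition 8, the fraction d/t guaranteed by such a schedule (formalized in Theorem 1 below) can be computed directly from the schedule; a sketch, assuming the supplied subgraphs have already been verified to be independent (function name ours):

```python
from fractions import Fraction

def normalized_sum_rate(subgraphs, K):
    """Normalized sum-rate d/t of Independent Graph Scheduling:
    d is the minimum, over users j = 1..K, of the number of
    time-slots in which user j's pair is scheduled. `subgraphs`
    is a list of sets of user indices, one set per time-slot,
    each assumed to satisfy alpha*_{A_i}(h) = 1."""
    t = len(subgraphs)
    d = min(sum(1 for A in subgraphs if j in A) for j in range(1, K + 1))
    return Fraction(d, t)
```

For example, scheduling the three isolated pairs of a 3-user network one at a time over three slots gives d/t = 1/3, while a schedule alternating {1, 3} and {2} over two slots gives 1/2.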
Define the indicator function

1_{j ∈ A_i} = 1 if T_j ∈ A_i, and 0 if T_j ∉ A_i.   (5)

For any given tuple of independent subgraphs {A_i}_{i=1}^{t} satisfying α*_{A_i}(h) = 1, the next theorem gives the normalized sum-rate that can be achieved by sub-graph scheduling.

Theorem 1 (Achievable Normalized Sum-rate of Independent Graph Scheduling). Independent Graph Scheduling parametrized by t independent sub-graphs A_1, A_2, . . . , A_t achieves a normalized sum-rate of d/t, where

d = min_{j ∈ {1, 2, . . . , K}} \sum_{i=1}^{t} 1_{j ∈ A_i}.   (6)

Proof:
Let (C_1, · · · , C_K) be any point in the full-knowledge capacity region. The achievable rate in time-slot i is R(i) ≥ \sum_{j: T_j ∈ A_i} C_j − τ_i, by the choice of subgraphs A_i satisfying α*_{A_i}(h) = 1. Note that τ_i depends on i, since it can change in each time-slot due to the selection of different subgraphs. Hence, the overall rate is

(1/t) \sum_{i=1}^{t} R(i) ≥ (1/t) \sum_{i=1}^{t} \sum_{j: T_j ∈ A_i} C_j − (1/t) \sum_{i=1}^{t} τ_i ≥ (d/t)(C_1 + · · · + C_K) − (1/t) \sum_{i=1}^{t} τ_i.

By the definition of the normalized sum-rate, α = d/t.

First, note that the sub-graphs A_i need not be distinct, which allows allocating more than one time-slot to a particular subgraph if needed. Second, the subgraph set {A_i}_{i=1}^{t} and the number of subgraphs t are both design variables and should be chosen to maximize d/t, so that the overall network rate is maximized. The d/t-maximizing choice of subgraphs is labeled a maximal independent graph (MIG) schedule.

The main idea behind MIG scheduling is to decouple the transmissions of nodes from the unknown part of the network. This is done by switching off some of the flows so that the network gets partitioned into disconnected subgraphs. However, switching off flows means potentially lost rate compared to the global-view optimal sum-capacity, so the subgraphs have to be selected to maximize spatial reuse, that is, to operate as many flows as possible in parallel while still satisfying α*_{A_i}(h) = 1. Such subgraphs are labeled maximal independent graphs and form the core of MIG scheduling. We characterize independent graphs next.

B. Identifying Independent Graphs
Since MIG scheduling schedules a subgraph A_i satisfying α*_{A_i}(h) = 1 in time-slot i, we need a characterization of the independent sub-graphs. The problem turns out to be very challenging for general h. We provide a complete characterization for the two important cases h = 1 and h = 2, for both deterministic and Gaussian networks, in the next two theorems. The special case of h = 2 for deterministic networks was presented in [9]; in this paper, we provide a tight outer bound and also extend it to Gaussian networks.

We note that the sufficient and necessary conditions in the following two theorems are stated in terms of the graph properties of G. Theorem 2 uses the node degree, which is the number of edges incident on the node. Theorem 3 uses the definitions in Section II-F.

Theorem 2 (1-local View Independent Subgraphs). The normalized sum-capacity of a K-user interference network (deterministic or Gaussian) with 1-local view is equal to one, i.e., α*(1) = 1, if and only if all the receivers have degree 1.

Proof: We will first show that in a Z-network, α* ≤ 1/2.

For the deterministic network model, assume that a normalized sum-rate of α is achievable; then

R_i ≥ α n_{ii} − τ,  ∀ 1 ≤ i ≤ 2.   (7)

When all the channel gains are n, the condition that the data can be decoded at the intended destinations gives R_1 + R_2 ≤ n. Thus, α(2n) − 2τ ≤ R_1 + R_2 ≤ n, or (2α − 1)n ≤ 2τ. Since this has to hold for all values of n, where α and τ are independent of n, α ≤ 1/2.

For the Gaussian network model, if a normalized sum-rate of α is achievable, then

R_i ≥ α log(1 + |h_{ii}|²) − τ,  ∀ 1 ≤ i ≤ 2.   (8)

Further, when h_{11} = h_{12} = h_{22} = h, we have R_1 + R_2 ≤ log(1 + 2|h|²). This gives

(2α − 1) log(1 + |h|²) ≤ 1 + 2τ.   (9)

Since this has to hold for all values of |h|, where α and τ are independent of h, α ≤ 1/2.

This shows that for a Z-network, α*(1) ≤ 1/2. If a network contains a link from T_i to D_j for some i ≠ j, then, as a genie argument, consider the system of the two users i and j where all other links are 0 and known to all. In this two-user system, the Z-network is an outer bound, and thus α*(1) ≤ 1/2. This proves that if there is a link from T_i to D_j for i ≠ j, then α*(1) ≤ 1/2, thus proving the theorem.

Thus, with 1-local view, the only network that can support α*(1) = 1 is one where no transmitter interferes with the other receivers, i.e., a network with K completely isolated flows. As a result, for a two-user interference network where the transmitters can cause interference at the other receivers, MIG scheduling will require the two flows to operate in a TDMA fashion. This is because the transmitters do not know any of the interfering link gains and thus have to optimize for the worst case in our formulation. The worst-case network condition is when the interfering channel gains are the same as the direct links, where the network has only one degree of freedom and each node can thus transmit only half the time [8]. Thus, for the two-user case, the above conclusion can be derived from the results in [8]. Theorem 2 is a generalization to an arbitrary K-user interference network.

We next provide the characterization of independent sub-graphs for two-local view, h = 2.

Theorem 3 (2-local View Independent Subgraphs). The normalized sum-capacity of a K-user interference network (deterministic or Gaussian) with 2-local view is equal to one (i.e., α*(2) = 1) if and only if all the connected components are of one of the following forms:
• a one-to-many interference network
• a fully-connected interference network

Proof: A fully-connected network implies that all nodes are within two hops of each other.
Thus, in this case, the diameter of such a network is two, and h = 2 constitutes global knowledge. By the definition of normalized sum-rate, α∗(2) = 1 for a fully-connected subnetwork. The proof for the case when the connected component is a one-to-many interference network is provided in Appendix A. Further, a converse to the statement of the theorem is also provided in Appendix A. The result was partially presented in [9] for deterministic networks and is extended in this paper by providing outer bounds on α∗ for all the three-user topologies along with the Gaussian extensions.

Contrasting Theorems 2 and 3, we see that increasing the local horizon from h = 1 to h = 2 increases the number of networks under which universally optimal performance can be obtained. While for h = 1 universal optimality required no simultaneous transmissions, the independent subgraphs for h = 2 constitute a richer class. Not only are fully-connected interference networks possible (since their diameter is 1 for K = 1 and 2 for K ≥ 2), one-to-many subgraphs are also possible even though their diameter is 4 for K ≥ 3. For one-to-many subgraphs, the interfering transmitter is two hops away from all nodes and thus has full network knowledge. As a result, the optimal strategy is to allow K − 1 links to operate at their near-maximum link capacity and for the interfering flow to adjust its rate to cause no harmful interference (either the interference is below the noise floor or completely decodable and thus can be cancelled out). This was proved for the two-user chain network in [7], and will be extended to a general K in Appendix A.

C. An Example
Figure 5(a) gives a case of a six-user 4-to-many interference network. With 1-local view, the MIG scheduling algorithm can be described as follows. Let A_1 = {1}, A_2 = {2}, A_3 = {3}, A_4 = {4}, and A_5 = {5, 6}. Note that we have used a shorthand notation in describing these sets; A = {a, b} represents that A is the subgraph containing T_j, D_j for all j ∈ A, and all edges between the members of A are also implied by this shorthand notation. We use a five time-slot strategy. In the i-th time-slot, users in A_i transmit. MIG scheduling achieves α(1) = 1/5. We will show that this scheduling is optimal in Theorem 4.

Fig. 6. Normalized sum-capacity vs. h-local information for six-user 4-to-many interference network.

With 2-local view, the MIG scheduling algorithm can be described as follows. Let A_1 = A_2 = A_3 = {1, 2, 3, 4}, A_4 = {1, 5, 6}, A_5 = {2, 5, 6}, A_6 = {3, 5, 6}, and A_7 = {4, 5, 6}. We use a seven time-slot strategy. In the i-th time-slot, users in A_i transmit. MIG scheduling achieves α(2) = 4/7. We will show that this scheduling is optimal in Theorem 6. The normalized sum-capacity for increasing local information is depicted in Figure 6.

IV. OPTIMALITY OF
MIG SCHEDULING
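Before turning to optimality, the two schedules from the example of Section III-C can be checked numerically. This is a sketch under our own naming; `alpha` simply computes the minimum per-user slot share d/t.

```python
from fractions import Fraction

def alpha(num_users, subgraphs):
    """Normalized sum-rate d/t of a schedule: d = min slots per user, t = total."""
    t = len(subgraphs)
    d = min(sum(u in A for A in subgraphs) for u in range(1, num_users + 1))
    return Fraction(d, t)

# Six-user 4-to-many network (users 1-4 interfere everywhere; users 5, 6 do not).
one_local = [{1}, {2}, {3}, {4}, {5, 6}]
two_local = [{1, 2, 3, 4}] * 3 + [{1, 5, 6}, {2, 5, 6}, {3, 5, 6}, {4, 5, 6}]

print(alpha(6, one_local))  # 1/5
print(alpha(6, two_local))  # 4/7
```

With 2-local view, every user appears in four of the seven subgraphs, which is why the schedule attains 4/7 rather than the 1/5 of the one-local schedule.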
Now a natural question is: how good is MIG scheduling? In this section, we address this question and show that MIG scheduling is optimal for several K-user networks with 1-local and 2-local view. Our results are limited to 1- and 2-local view only because independent graphs are known only for these two cases.

The reader will immediately note that, much like capacity analyses of different multi-terminal networks (multiple access, interference network, etc.), our proofs are largely on a case-by-case basis. At present, there appears to be no general algorithmic procedure to derive the capacity region and, as a result, we do not have an algorithmic procedure to derive normalized sum-capacity. However, we note that we can derive the normalized sum-capacity in our formulation for many cases where the global-view sum-capacity is still unknown.

A. 1-local View
Our main result in this section is determining the networks for which MIG scheduling with one-local view is optimal. Recall that one-local view MIG scheduling is equivalent to scheduling non-interfering links in the network.

The key step in the proof is the derivation of an upper bound. The proof of the upper bound follows the following recipe in all cases for the deterministic model (the Gaussian model is similar).

1) When any transmitter sees the direct channel capacity as n, it has to send at a rate R_i ≥ αn − τ. This is because if the rate is < αn − τ, then when all other channel gains are 0, the worst-case guarantee of α is not achievable.

2) Find an upper bound on the global-view sum-capacity when all the channel gains in the network are n. Let the global sum-capacity be bounded from above by cn + d for some constants c and d which are independent of n. For example, one trivial outer bound is Kn for all K-user networks. To yield a useful bound, it is important to find the smallest constant c.

3) Combining Steps 1 and 2, an outer bound α ≤ c/K can be obtained, where K is the number of users.

The proof follows the above three steps for each subset of users, and chooses the tightest outer bound thereafter. Let A ⊆ G represent a valid interference network with |A| ≤ K transmitter-receiver pairs. Suppose the global-view sum-capacity of A, when all the link capacities in A are n, is upper bounded by c_A n + d_A for some constants c_A and d_A which may depend on A but not on n. Then,

α ≤ min_A c_A / |A|. (10)

The following theorem characterizes the cases where we can prove that MIG scheduling is optimal.

Theorem 4 (1-Local View Optimality of MIG Scheduling). MIG scheduling is optimal with 1-local view when the network is of one of the following forms; we also derive α∗ for each case.
1) all three-user interference networks, except the 3-user cyclic chain (in Figure 7, α∗(1) = 1 in (a), α∗(1) = 1/2 in (b), (c), (d), (e), (f), (g), (j), and (k), and α∗(1) = 1/3 in (h), (l), (m), (n), (o), and (p));
2) the chain interference network (α∗(1) = 1/2 for K ≥ 2);
3) the d-to-many interference network (α∗(1) = 1/(d + 1) for 1 ≤ d < K);
4) the many-to-d interference network (α∗(1) = 1/(d + 1) for 1 ≤ d < K);
5) the fully-connected interference network (α∗(1) = 1/K).

Further, the achievability holds with τ = 0 for both the deterministic and the Gaussian models.

Proof:
1) For a three-user interference network, we consider all possible networks, shown in Figure 7 up to relabeling of the users. In networks (b), (c), (d), (e), (f), (g), (j), and (k), the same upper bound as for the Z-network (α∗ ≤ 1/2) holds, since the channel gains except those that form a Z-network can be made 0 and given to all as genie information (since there is only 1-local view, the existence of zero-capacity links does not help to get more information about the network). Further, 1/2 can be achieved with MIG scheduling with two time-slots. For (h), (l), (m), (n), (o), and (p), consider the topology equivalent to (h) obtained by setting all other channel gains to 0 and making this global information. With this, the outer bound for case (h) holds for all these cases. In case (h), suppose all the channel gains are the same. Then, D_1 decodes the message of T_1. Further, D_2 is able to decode the messages of T_1 as well as T_2, since after decoding the message of T_2 and subtracting it, the equivalent signal is the same as that at D_1. Similarly, D_3 is able to decode the messages of T_1, T_2 as well as T_3, since after decoding the message of T_3 and subtracting it, the equivalent signal is the same as that at D_2. Thus, the normalized sum-capacity is upper bounded by that of the multiple-access network to D_3, giving 1/3 as an upper bound. Further, 1/3 can be achieved using MIG scheduling, scheduling the three users in three different time-slots.

2) The achievability of 1/2 follows by using two time-slots, scheduling odd-numbered users in the first time-slot and even-numbered users in the second time-slot, while the outer bound for the Z-network also holds here by the same arguments as in the previous part. Thus, α∗(1) = 1/2.

3) As an outer bound, consider the d + 1 users consisting of the first d users, which interfere at all receivers, and the (d + 1)-th user as one other user. Consider the rest of the direct channel gains as 0 and known to all. In this case, it is easy to see that when all the channel gains are equal, D_{d+1} has to decode all the messages, thus upper bounding the normalized sum-capacity by that of the multiple-access network to this receiver. For achievability, consider MIG scheduling using d + 1 time-slots with A_1 = {1}, ···, A_d = {d} and A_{d+1} = {d + 1, ···, K}. Note that this extends the example of a 4-to-many interference network with 6 users with 1-local view provided in Section III-C.

4) As an outer bound, consider the d + 1 users consisting of the first d users, which receive interference from all transmitters, and the (d + 1)-th user as one other user. Assume the rest of the direct channel gains are 0 and known to all. In this case, it is easy to see that when all the channel gains are equal, D_1 has to decode all the messages, thus upper bounding the normalized sum-capacity by that of the multiple-access network to this receiver. For achievability, consider MIG scheduling using d + 1 time-slots with A_1 = {1}, ···, A_d = {d} and A_{d+1} = {d + 1, ···, K}.

5) When all the channel gains are equal, each destination has to decode all the messages; the normalized sum-capacity is thus upper bounded by that of the multiple-access network, giving 1/K as the upper bound. This is achievable using MIG scheduling, scheduling each user in a separate time-slot.

Thus, maximal scheduling of non-interfering links can be information-theoretically optimal for many networks. The theorem only gives sufficient conditions, and thus not a sharp characterization of all networks which can be operated optimally with scheduling. However, observing the class of networks given in the theorem, it appears that MIG scheduling might be optimal for a large class of networks. We thus explore the connection further in the next section and also discuss the relationship with graph coloring.

B. 1-Local View: Relation to Maximal Independent Set Scheduling
For one-local view, the MIG scheduling strategy reduces to maximal independent set scheduling (MIS scheduling), which can be described as follows. An independent set A_i ⊆ {1, ···, K} is a set that contains mutually non-interfering nodes. A maximal independent set (MIS) is an independent set A_i such that A_i ∪ {x} is not an independent set for any x ∈ {1, ···, K} \ A_i. Using t time-slots, a maximal independent set A_i is scheduled in each time-slot such that

min_i (1/t) Σ_{j=1}^{t} 1(i ∈ A_j)

is maximized over the choice of t and A_1, ···, A_t. When a user is scheduled, it sends at the direct channel rate (and uses power SNR_i in a Gaussian network). The resulting strategy achieves a normalized sum-rate of α = min_i (1/t) Σ_{j=1}^{t} 1(i ∈ A_j).

This is similar to the following vertex-coloring algorithm. To relate to vertex coloring, we will need the concept of the conflict graph [4, Chapter 2.2], derived from G as follows. Consider a graph C with K vertices (half as many as present in G), where two vertices i and j are connected if there is an edge between T_i and D_j or between T_j and D_i in G. Suppose that there are t colors, labeled 1, 2, ···, t. We assign k ≤ t of these colors to each vertex in C such that the sets of colors associated with two vertices connected by an edge are disjoint. In conventional graph coloring [5], each vertex has only one color and the objective is to assign a color to each vertex such that adjoining vertices have different colors. In contrast, the generalized set-coloring algorithm can assign multiple colors to each vertex as long as the color sets for adjoining vertices are disjoint. This is similar to the fractional coloring considered in [12]. The best set coloring corresponds to the MIS schedule and maximizes k/t, with k and t as variables.
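The set-coloring view can be explored by brute force on small conflict graphs. In this sketch, `least_colors` (our name) searches for the least t admitting an assignment of k of t colors per vertex with disjoint sets on adjacent vertices; the search is exponential, so it is only for tiny illustrations.

```python
from itertools import combinations, product

def least_colors(edges, n, k, t_max=8):
    """Least t such that each of n vertices can get k of t colors with
    adjacent vertices receiving disjoint color sets (brute force)."""
    for t in range(k, t_max + 1):
        ksets = list(combinations(range(t), k))
        for assign in product(ksets, repeat=n):
            if all(not set(assign[u]) & set(assign[v]) for u, v in edges):
                return t
    return None

# Conflict graph of a 4-user chain interference network: a path 0-1-2-3.
print(least_colors([(0, 1), (1, 2), (2, 3)], 4, 1))  # 2
# Conflict graph of a 3-user cyclic chain: a triangle.
print(least_colors([(0, 1), (1, 2), (0, 2)], 3, 1))  # 3
```

For the path, two colors suffice, matching α∗(1) = 1/2 for the chain network; for the triangle, three colors are needed, matching the MIS rate of 1/3 discussed in Section V-A.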
The scheduling algorithm uses t time-slots and schedules the vertices with color i in the i-th time-slot. This algorithm is similar to maximal weight independent set scheduling in [3], except that the weights are decided not by the queue lengths, but by the weights that maximize the minimum proportion of time each link is used.

A k-fold coloring of a graph is an assignment of sets of size k to the vertices of a graph such that adjacent vertices receive disjoint sets. A t:k-coloring is a k-fold coloring out of t available colors. The k-fold chromatic number ξ_k is the least t such that a t:k-coloring exists. Note that MIS scheduling achieves α = max_{k ∈ N} k/ξ_k, where ξ_k is the k-fold chromatic number of the conflict graph. The following theorem gives an optimality condition for the MIS scheduling algorithm in terms of the k-fold chromatic number of the conflict graph.

Theorem 5 (1-Local View Optimality of MIS Scheduling). If the conflict graph of an interference network has k-fold chromatic number of at most 2k for some k ∈ N, then the MIS scheduling algorithm is optimal, i.e., achieves the normalized sum-capacity with 1-local view.

Fig. 7. All possible canonical network topologies in a three-user interference network.

Proof: If the k-fold chromatic number of the conflict graph is k, then there is no link between any T_i and D_j for j ≠ i. Since this connectivity satisfies the condition for α∗(1) = 1, the theorem holds. If the k-fold chromatic number of the conflict graph satisfies k < ξ_k ≤ 2k, then there is at least one link between T_i and D_j for some j ≠ i. In this case, α∗(1) ≤ 1/2 by the same arguments as in Theorem 2. Further, 1/2 is achieved by MIS scheduling.

Corollary 1.
Chain interference networks, different configurations of two-user interference networks, and 1-to-many and many-to-1 interference networks are some special cases that have k-fold chromatic number at most 2k in the conflict graph. Moreover, the normalized sum-capacity is the inverse of the 1-fold chromatic number of the conflict graph in these cases.

C. 2-Local View

We start with a theorem which provides sufficient conditions under which MIG scheduling is two-local view optimal.
Theorem 6 (2-Local View Optimality of MIG Scheduling). MIG scheduling achieves the normalized sum-capacity with 2-local view when the network is of one of the following forms; we also derive the normalized sum-capacity in each case:

1) the two-user interference network (α∗(2) = 1);
2) the chain interference network (α∗(2) = 2/3 for K > 2);
3) the d-to-many interference network (α∗(2) = d/(2d − 1) for 1 ≤ d < K and K > 2);
4) the many-to-one interference network (α∗(2) = (K − 1)/(2K − 3) for K > 2);
5) the fully-connected interference network (α∗(2) = 1).

Proof:
1) In this case, there are four configurations formed by the existence or non-existence of the cross links, and in all these configurations the result follows from Theorem 3.

2) The outer bound of topology (f) in Appendix A holds in this case by assuming all other channel gains to be 0 and known to all. For achievability, the MIG scheduling algorithm can be described as follows. Let A_j = {1, ···, K} \ {3i + j : i ≥ 0} for j = 1, 2, 3; removing every third user partitions the chain into components satisfying α∗(2) = 1. According to the MIG scheduling algorithm, three time-slots are used, and users in A_j use a strategy that achieves α(2) = 1 within their components in the j-th time-slot. Since each user is scheduled in two of the three time-slots, the MIG scheduling strategy achieves α(2) = 2/3.

3) Let d > 1, because for d = 1 the statement holds by Theorem 3. For the outer bound, consider a (d + 1)-user d-to-many interference network. Suppose that there exists a scheme achieving a normalized sum-capacity of α. We first prove the result for the deterministic model. Since user d + 1 does not know any other direct channel gain, it has to use R_{d+1} ≥ αn − τ when it sees that all the channel gains within 2 hops have capacity equal to n. Suppose that all other direct links have capacity n while all other cross links have zero capacity. Then, R_i ≤ (1 − α)n + τ for i ∈ [1, d], yielding a sum-rate of at most (d − (d − 1)α)n + (K − 1)τ. This sum-rate has to be at least α(dn) − Kτ. Since this holds for all n, α ≤ d/(2d − 1). A similar proof holds in the Gaussian model, as follows. Since user d + 1 does not know any other direct channel gain, it has to use R_{d+1} ≥ α log(1 + SNR) − τ when it sees that all the channel gains within 2 hops are equal to √SNR. Suppose that all other direct links have gain √SNR while all other cross links have zero gain. Then, R_i ≤ (1 − α) log(1 + SNR) + τ + 1 for i ∈ [1, d], yielding a sum-rate of at most (d − (d − 1)α) log(1 + SNR) + (K − 1)τ + d. This sum-rate has to be at least α(d log(1 + SNR)) − Kτ.
Since this holds for all SNR, α ≤ d/(2d − 1).

For achievability, consider 2d − 1 time-slots, in which in the first d − 1 time-slots only users 1 to d transmit. In the remaining d time-slots, one user among the first d and all the users > d transmit, making it an equivalent one-to-many configuration (i.e., A_1 = ··· = A_{d−1} = {1, ···, d} and A_{d−1+j} = {j, d + 1, ···, K} for j = 1, ···, d). Thus, this is MIG scheduling with each user scheduled in d time-slots, achieving α∗(2) = d/(2d − 1). Note that this extends the example of a 4-to-many interference network with 6 users with 2-local view provided in Section III-C.

4) Suppose that a normalized sum-rate of α can be achieved. We first consider the deterministic model. R_K ≥ α n_KK − τ, since the K-th user has to send at this rate when all other direct channel gains are 0 and are not known to user K. Now, suppose all the channel gains are n. In this case, R_i ≤ (1 − α)n + τ for 1 ≤ i ≤ K − 1. Thus, the sum-rate achieved is at most (K − 1 − (K − 2)α)n + (K − 1)τ. This sum-rate has to be at least α(K − 1)n − Kτ. Since this has to hold for all n, α ≤ (K − 1)/(2K − 3). For the Gaussian model, R_K ≥ α log(1 + |h_KK|^2) − τ, since the K-th user has to send at this rate when all other direct channel gains are 0 and are not known to user K. Now, suppose all the channel gains are √SNR. In this case, R_i ≤ (1 − α) log(1 + SNR) + τ + 1 for 1 ≤ i ≤ K − 1. Thus, the sum-rate achieved is at most (K − 1 − (K − 2)α) log(1 + SNR) + (K − 1)τ + K − 1. This sum-rate has to be at least α(K − 1) log(1 + SNR) − Kτ. Since this has to hold for all SNR, α ≤ (K − 1)/(2K − 3).

For achievability, consider data transfer over 2K − 3 time-slots. In time-slot i, for 1 ≤ i ≤ K − 1, users i and K transmit. They form a Z-network and use the optimal strategy for this channel with partial information. In the remaining K − 2 time-slots, users 1, ···, K − 1 transmit at full rate. Let (R_1, R_2, ···, R_K) be any point in the global-information capacity region. In the i-th time-slot, where 1 ≤ i ≤ K − 1, a sum-rate of at least R_i + R_K can be achieved, while in the remaining K − 2 time-slots, a sum-rate of at least Σ_{1 ≤ i ≤ K−1} R_i can be achieved. Thus, the sum-capacity scaled by a factor of (K − 1)/(2K − 3) can be achieved.

5) In this case, the condition for α∗ = 1 is satisfied by Theorem 3, thus proving the statement.

Note that for all the cases in the statement of Theorem 6, we have characterized the normalized sum-capacity with both 1-local and 2-local view. For a fully-connected interference network, larger subgraphs increased α∗(2) = 1 from α∗(1) = 1/K. For a d-to-many interference network, one-to-many configurations that satisfy α∗(2) = 1 could be exploited to get α∗(2) = d/(2d − 1) from α∗(1) = 1/(d + 1). With one-local view, only single-user encoding and decoding operations are performed, while with 2-local view, optimal encoding and decoding operations for the one-to-many interference network and the fully-connected network need to be performed.

We consider the sixteen network configurations shown in Figure 7 for the two-local view separately in the following theorem. The next theorem shows that MIG scheduling achieves the normalized sum-capacity for 12 out of the 16 canonical cases.

Fig. 8. Normalized sum-capacity vs. h-local information for cases (e), (f), (h), (i), (j), (m), (n) in Figure 7.

Theorem 7 (2-Local View Optimality of MIG Scheduling in Three-user Interference Network).
MIG scheduling is optimal with 2-local view when the three-user interference network is one of the following types in Figure 7: (a), (b), (c), (d), (e), (f), (h), (i), (j), (m), (n), (p).

Proof:
For cases (a), (b), (c), (d), and (p), α∗ = 1 by Theorem 3. For the remaining cases, the outer bounds of 2/3 hold, as shown in Appendix A. The achievability follows by choosing A_1 = {1, 2}, A_2 = {2, 3}, and A_3 = {1, 3}. The normalized sum-capacity with h-local view for varying h in these remaining cases is depicted in Figure 8.

Here, we do not prove the optimality of MIG scheduling for the remaining four cases. We conjecture that the outer bound is tight in cases (g) and (k). The achievability would require the capacity region in these cases to give better schemes, and is left as future work.

V. OPTIMALITY OF MIG SCHEDULING: EXTENSION OF MIG SCHEDULING WITH 1-LOCAL VIEW
Is MIG scheduling always optimal? In this section, we illustrate an example where MIG scheduling is not optimal. This example uses 1-local view and achieves a normalized sum-capacity better than MIS scheduling (MIG scheduling with 1-local view). This motivates extending MIS scheduling with 1-local view to involve coding across the time-slots, and hence we define a new strategy called Coded Set (CS) scheduling. This is followed by some cases where this algorithm is optimal. Finally, we find the normalized sum-capacity of linear deterministic interference networks.
A. An Example Where MIS Scheduling is not Optimal

Fig. 9. Two time-slots for CS scheduling. The transmitters with a tick sign transmit; the second user repeats X_2 (X_2[1] = X_2[2]) in the two time-slots.

We now illustrate the only case where MIS scheduling is not optimal in a 3-user interference network, which is the cyclic-chain interference network. The MIS scheduling algorithm uses three time-slots, scheduling user i in time-slot i. Thus, MIS scheduling achieves α(1) = 1/3 (note that there are only 3 independent sets, consisting of the individual users, and thus the optimality of 1/3 using MIS scheduling is straightforward). We now describe another strategy for this example, which uses two time-slots as follows (and is depicted in Figure 9). The main idea is to perform coding across time. In the first time-slot, we schedule A_1 = {1, 2} and in the second time-slot, we schedule A_2 = {2, 3}, such that the codeword of the second user is repeated in the two time-slots. All the users send at the rate equal to the direct-link capacity to the intended receiver (n_ii in the deterministic and log(1 + |h_ii|^2) in the Gaussian model). In the Gaussian model, power SNR is used at the first two transmitters while power 2·SNR is used at the third transmitter. Note that this does not affect the average power, since this transmitter is used only half the time. We now show that the data can be decoded at the intended receivers. The first receiver can decode its data in the first time-slot, since it receives no interference. The second receiver can similarly decode its data in the second time-slot. The third receiver, on the other hand, subtracts the signal received in the first time-slot from that in the second time-slot, which gives an interference-free direct signal that can be decoded; the double power level at the third transmitter is used since the noise power will also be double the single-slot noise power.
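The subtraction argument above can be checked in a toy XOR model of the deterministic channel. The connectivity convention (D_i hears T_i and T_{i−1}, indices cyclic) and all names below are our assumptions; the sketch schedules odd users in slot 1 and even users plus the last (repeating) user in slot 2, which for K = 3 matches the strategy of Figure 9 up to relabeling, and works for any odd cycle length.

```python
# Toy deterministic (XOR) simulation of repetition coding on an odd K-user
# cyclic chain: D_i hears its own signal XOR that of T_{i-1} (T_0 = T_K).
import random

def simulate(K):
    assert K % 2 == 1 and K >= 3
    x = {i: random.getrandbits(8) for i in range(1, K + 1)}   # one byte per user
    slot1 = {i for i in range(1, K + 1) if i % 2 == 1}
    slot2 = {i for i in range(1, K + 1) if i % 2 == 0} | {K}  # user K repeats
    def rx(i, active):  # received signal at D_i when `active` users transmit
        prev = K if i == 1 else i - 1
        return (x[i] if i in active else 0) ^ (x[prev] if prev in active else 0)
    decoded = {}
    for i in range(2, K + 1):
        # Each user's receiver is interference-free in one of the two slots.
        decoded[i] = rx(i, slot1) if i % 2 == 1 else rx(i, slot2)
    # D_1 XORs its two receptions, cancelling the repeated X_K.
    decoded[1] = rx(1, slot1) ^ rx(1, slot2)
    return decoded == x

print(all(simulate(K) for K in (3, 5, 7)))  # True
```

Every user transmits in one of two slots (user K in both, repeating), so the scheme attains α(1) = 1/2 in this model.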
Thus, all the receivers can decode the data, and this strategy achieves α(1) = 1/2. This example motivates an extension of the MIS scheduling algorithm to involve coding. This new scheduling algorithm is called Coded Set (CS) scheduling and is described in the next subsection.

B. Definition of CS Scheduling
In this subsection, we define the CS algorithm for the deterministic model and the Gaussian model separately. We only consider subgraphs A ⊆ G with a set of transmitters T_i and all the receivers {D_1, ···, D_K} in the subgraph, because we do not want to throw away any received signal. Let the in-degree at D_i be denoted by d_i. Suppose that each transmitter generates k independent codewords (the rate of these codewords will be n_ii for the deterministic model, and log(1 + |h_ii|^2/b_i) for the Gaussian model, where b_i is defined in the Gaussian subsection below). Let S_{i,j} be the set of time-slots in which transmitter T_i transmits the j-th codeword. Note that each time-slot should be used at a transmitter T_i for only one codeword, making S_{i,u} and S_{i,v} disjoint for u ≠ v. Thus, in time-slot u, the subgraph A_u used has transmitters T_i where i satisfies u ∈ S_{i,j} for some 1 ≤ j ≤ k. The sets S_{i,j}, and thus A_u, t, and k, are all design variables for the CS scheduling algorithm; they must satisfy conditions on the constraint matrix, which is defined next.

Fig. 10. Constraint matrix with t columns M_{i,1}, ···, M_{i,t}, where each column represents the different codewords being sent for the direct signal and the interferers.

We form a binary constraint matrix F_i of size kd_i × t at each receiver i, defined as follows. The constraint matrix has d_i blocks of size k × t, where the top block corresponds to the transmitted signal from T_i while the rest belong to the different transmitters causing interference at D_i, as depicted in Figure 10. In each k × t subpart of this matrix, row j has 1s exactly in the columns indexed by S_{i,j}, for all 1 ≤ j ≤ k.
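The block structure of F_i can be sketched directly; the helper names and the 3-user cyclic-chain instance below are our own, with slots 0-indexed.

```python
# Sketch (our naming): build the binary constraint matrix F_i at a receiver from
# the slot assignments S[j] of its own transmitter followed by those of its
# interferers. Each k x t block has a 1 in row j exactly in the slots S[j].
def block(S, k, t):
    return [[1 if s in S[j] else 0 for s in range(t)] for j in range(k)]

def constraint_matrix(blocks_slots, k, t):
    """blocks_slots[0] = slot sets of the desired transmitter; the rest are the
    interferers heard at this receiver. Returns the (k * d_i) x t matrix F_i."""
    F = []
    for S in blocks_slots:
        F.extend(block(S, k, t))
    return F

# 3-user cyclic chain at D_3 (k = 1, t = 2): T_3 sends in slot 1 (0-indexed),
# while the interferer T_2 repeats its codeword in both slots.
F3 = constraint_matrix([[{1}], [{0, 1}]], k=1, t=2)
print(F3)  # [[0, 1], [1, 1]]
```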
Suppose that the t columns of the constraint matrix are denoted by M_{i,1}, ···, M_{i,t}, respectively. The constraints that the constraint matrix has to satisfy are different for the linear deterministic and the Gaussian network models, and are explained below separately for the two cases.
1) Linear Deterministic Model:
For a linear deterministic network model, CS scheduling can be described as follows. Suppose that a kd_i × t matrix with the top k × k part an identity and the rest of the elements 0 can be formed by choosing each column j as Σ_{l=1}^{t} a_{jl} M_{i,l}, where the a_{jl}'s are binary and the addition is binary addition. If such a transformation exists at destination i, this configuration is feasible at vertex i. If the assignment of the S_{i,j} is feasible at each vertex, this strategy achieves an α of k/t. The strategy that achieves the maximum k/t is called Coded Set (CS) scheduling.

The scheduling algorithm uses t time-slots. Each user forms k independent codewords at rate n_ii. User i transmits codeword j in the time-slots corresponding to S_{i,j}. It is easy to see that the data can be decoded at the receivers. The constraint-matrix reduction represents that all the k independent codewords can be decoded in the presence of the interference from the other transmitters.
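This feasibility condition can be phrased as a linear-algebra test over GF(2): the configuration is feasible at receiver i iff each unit vector e_j, 1 ≤ j ≤ k, lies in the binary column span of F_i. A brute-force sketch, with our helper names and the constraint matrix from the 3-user cyclic-chain example of Section V-A:

```python
# Sketch: GF(2) feasibility test for a CS-scheduling constraint matrix.
def gf2_rank(rows):
    rows = [r[:] for r in rows]
    if not rows:
        return 0
    rank, m = 0, len(rows[0])
    for col in range(m):
        piv = next((r for r in range(rank, len(rows)) if rows[r][col]), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        for r in range(len(rows)):
            if r != rank and rows[r][col]:
                rows[r] = [a ^ b for a, b in zip(rows[r], rows[rank])]
        rank += 1
    return rank

def feasible(F, k):
    cols = [list(c) for c in zip(*F)]   # columns of F_i as GF(2) vectors
    base = gf2_rank(cols)
    n = len(F)
    for j in range(k):
        e = [1 if r == j else 0 for r in range(n)]
        if gf2_rank(cols + [e]) != base:
            return False                # e_j not in the column span
    return True

# 3-user cyclic chain at D_3: own codeword in slot 2, interferer in both slots.
F3 = [[0, 1],   # signal block: codeword 1 of T_3 sent in slot 2
      [1, 1]]   # interference block: T_2's codeword sent in slots 1 and 2
print(feasible(F3, 1))  # True -> k/t = 1/2 achievable at D_3
```

Adding the two columns of F3 over GF(2) yields (1, 0), i.e., the desired codeword with the interference cancelled, which is exactly the subtraction performed by D_3 in the example.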
2) Gaussian Model:
For a Gaussian network model, CS scheduling can be described as follows. Suppose that a kd_i × t matrix with the top k × k part an identity and the rest of the elements 0 can be formed by choosing each column j as Σ_{l=1}^{t} a_{jl} M_{i,l}, where a_{jl} ∈ R and the addition is real addition. If such a transformation exists at destination i, this configuration is feasible at vertex i. If the assignment of the S_{i,j} is feasible at each vertex, this strategy achieves an α of k/t. The strategy that achieves the maximum k/t is called Coded Set (CS) scheduling.

Note that a_{jl} can be chosen to be 0 for j > k. The matrix formed by the a_{jl} satisfies

[M_{i,1} ··· M_{i,t}] [ a_{11} ··· a_{k1} 0 ··· 0 ; ⋮ ⋱ ⋮ ⋮ ⋱ ⋮ ; a_{1t} ··· a_{kt} 0 ··· 0 ] = [ I_k 0 ; 0 0 ]. (11)

Since this is an under-determined system, the following is a solution:

[ a_{11} ··· a_{k1} 0 ··· 0 ; ⋮ ⋱ ⋮ ⋮ ⋱ ⋮ ; a_{1t} ··· a_{kt} 0 ··· 0 ] = [M_{i,1} ··· M_{i,t}]† [ I_k 0 ; 0 0 ], (12)

where A† represents the pseudo-inverse of the matrix A. Let b_i = max_{l=1,···,k} Σ_{m=1}^{t} a_{lm}^2, where i indicates that the constraint matrix is formed for receiver i.

User i forms k independent codewords at rate log(1 + SNR_i/b_i). User i transmits codeword j in the time-slots in S_{i,j} with power SNR_i. It is easy to see that the data can be decoded at the receivers, and the strategy achieves α = k/t with τ ≤ α Σ_{i=1}^{K} log(b_i), which is independent of the channel gains. This can be further optimized in certain cases by changing the corresponding rates and powers. The constraint matrix specifies the interference and the data at each user; the existence of the a_{jl}'s represents that the data can be decoded in the presence of interference. The value of b_i reflects that while decoding the data, the noise gets added up, which has to be compensated by a decrease in rate. Since this rate gap is not a function of the channel gains, we get a constant τ that is independent of the channel gains.

C. Optimality of CS Scheduling
In this subsection, we prove that CS scheduling is optimal in a K-user cyclic chain, while MIS scheduling is not in an odd-user cyclic chain.

Theorem 8 (1-local View Optimality of CS Scheduling). CS scheduling is optimal with 1-local view for a K-user cyclic chain, while MIS scheduling is not for odd K ≥ 3. Further, an achievable strategy with τ = 0 is possible in this case for both the deterministic and the Gaussian models.

Proof: For a K-user cyclic chain, an outer bound of 1/2 holds from the Z-network by the same arguments as in Theorem 2. If the number of users is even, then using two time-slots and scheduling even and odd users as in MIS scheduling is optimal. If the number of users is odd, consider two time-slots. In the first time-slot, all odd users transmit, while in the second time-slot all even users transmit, except that the last user also transmits, repeating the data it sent in the previous time-slot.

In the deterministic model, all the users send at full rate (n_ii for T_i), and the data can be shown to be decodable. For a Gaussian network, user i sends at a rate of log(1 + |h_ii|^2). All the users except the first use power SNR to send the data, while the first transmitter uses power 2·SNR. The data can be decoded in the same way as explained for the 3-user cyclic chain, thus proving the result.

D. Normalized Sum-Capacity of Linear Deterministic Networks with 1-local View
We noted that MIS is not always optimal. Thus, a question arises as to what the optimal algorithm with 1-local view is. In this subsection, we answer this question for linear deterministic interference networks. We first start with some definitions.

Definition 9.
A binary model of a given interference network is a linear deterministic model with the channel gains of the links in E equal to 1 and all the rest equal to 0.

Definition 10.
The symmetric capacity of an interference network is the maximum r such that the rate tuple (r, r, ···, r) is in the capacity region of the interference network with full information of the network and channel gains.

We will now show, in the following theorem, that the normalized sum-capacity of any linear deterministic interference network with 1-local view is the symmetric capacity of the binary model of that interference network.
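Constructing the binary model of Definition 9 is mechanical. The sketch below (the gain-matrix convention, n[i][j] being the gain from transmitter j to receiver i, is our assumption, not fixed by the text) maps a linear deterministic gain matrix to its binary model; by the theorem below, the symmetric capacity of this 0/1 network equals the normalized sum-capacity with 1-local view, independently of the actual gain values.

```python
def binary_model(n):
    """Binary model of an interference network (Definition 9): every link
    present in E keeps gain 1 and every absent link gets gain 0.
    Convention (our assumption): n[i][j] is the channel gain from
    transmitter j to receiver i."""
    return [[1 if gain > 0 else 0 for gain in row] for row in n]

# A 3-user Z-chain: direct links plus cross links T1 -> R2 and T2 -> R3.
gains = [[5, 0, 0],
         [3, 4, 0],
         [0, 2, 6]]
print(binary_model(gains))  # [[1, 0, 0], [1, 1, 0], [0, 1, 1]]
```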
Theorem 9 (Normalized Sum-Capacity of Linear Deterministic Networks with 1-local View). The normalized sum-capacity of any linear deterministic interference network with 1-local view is the symmetric capacity of the binary model of that interference network.

Proof:
Let C_s be the symmetric capacity of the binary model of the interference network, and let α* be the normalized sum-capacity of the interference network.

We will first show that α* ≤ C_s. Suppose that a transmitter sees that the direct channel gain to its intended receiver is m. Then, the rate chosen by the transmitter is at least α*m − τ. When all the links in the interference network have gain m, each user transmits at a rate R_i ≥ α*m − τ. Since all the channel gains are m, the symmetric rate of α* − τ/m should be achievable on the binary model of the interference network (since what is done on m levels can be done on 1 level through time-extension). Since m is arbitrary and can thus be taken large enough, the rate tuple (α*, α*, ···, α*) must be in the capacity region of the binary model of the interference network, which gives α* ≤ C_s.

We will now show that α* ≥ C_s, which will prove the statement of the theorem. To prove this, we use the optimal strategy for the binary model that achieves the symmetric capacity C_s in the original linear deterministic interference network, applying it at all the bit levels of the original network. The sum rate achieved is at least C_s times the sum of all direct channel capacities, and hence a normalized sum rate of C_s is achievable. We only need to prove that the data can be decoded at the receivers. To see this, note that every receiver receives at most the same interference as in the binary model, and hence can fake the missing interference and still decode the data.

VI. BEYOND 2-LOCAL VIEW
The problem of finding independent sub-graphs is open in general, while we have provided cases for h = 1 or 2. In this section, we will see some cases when α*(3) = 1.

Theorem 10 (3-local View Independent Subgraphs). The normalized sum-capacity of a K-user interference network with 3-local view is equal to one (i.e., α*(3) = 1) if all of the network's connected components satisfy any of the following:
- a many-to-d interference network,
- a d-to-many interference network,
- any of the configurations of three-user interference networks in Figure 7 except (g).

Proof:
In all the cases except (f) and (g) in Figure 7, the 3-local view is the global view. Thus, we only need to show the result for case (f). The capacity region in this case is known exactly for the deterministic model [7], which we will use to prove this case.

The deterministic capacity region for a three-user double-Z interference network is the set of non-negative rates (R_1, R_2, R_3) satisfying [7]

R_i ≤ n_ii, i = 1, 2, 3,
R_1 + R_2 ≤ max(n, n, n, n + n − n),
R_2 + R_3 ≤ max(n, n, n, n + n − n),
R_1 + R_2 + R_3 ≤ max(n, n) + (n − n)+ + max(n, n − n).

Since all the transmitters have 3 hops of channel gain information, we will use the following strategy. The first and the third transmitters send at rates n_11 and n_33, respectively, while the second transmitter knows all the channel gains and thus backs off so that receiver 2 can decode its data and receiver 3 is also able to decode.

1) n ≤ n: In this case, the second user does not transmit on the lower n levels and uses a strategy for the lower Z-network, consisting of the second and third users, with the equivalent channel gain between the second transmitter-receiver pair being n − n. Thus, the following sum rate can be achieved:

R_ach = n + max(n − n, n, min(n, n + n − n), n + n − n − n).

Using the above capacity region, this achievable sum rate can also be shown to be optimal.

2) n ≥ n and n ≤ n − n: In this case, the second transmitter does not receive any interference, and thus the sum rate n + max(n, n, min(n, n + n), n + n − n) can be achieved, which is also optimal.

3) n ≥ n, n − n ≤ n ≤ n, and n ≤ n: In this case, the second transmitter sends on the lower min(n − n, (n − n)+) levels. This is also optimal.

4) n ≥ n, n − n ≤ n ≤ n, and n > n: In this case, the second user can send data on the levels which are interfering at the second receiver, such that they are also repeated in the lower n − n levels.
Thus, a rate of min((n − n), (n − n)+ + min(n, n − n)) can be supported at the second transmitter.

5) n ≥ n, n ≥ n, n ≤ n, and n ≤ n − n: In this case, the second transmitter does not send on the interfered n + n levels and sends at rate (n − n − n)+.

6) n ≥ n, n ≥ n, n ≤ n, and n − n ≤ n ≤ n + n − n: In this case, the second transmitter only sends on the lower (n − n)+ levels.

7) n ≥ n, n ≥ n, n ≤ n, and n ≥ n + n − n: In this case, the second transmitter does not send on the n levels producing interference, and hence sends at a rate of (n − n)+.

8) n ≥ n, n ≥ n, n ≥ n, and n ≤ n − n: In this case, the second transmitter does not transmit on the n levels at which it receives interference or on the n levels at which it causes interference, and thus uses a rate of n − n − n.

9) n ≥ n, n ≥ n, n ≥ n, and n − n ≤ n ≤ n + n − n: In this case, the second transmitter sends on the lower n − n levels and the top min(n − n, n − n) levels.

10) n ≥ n, n ≥ n, n ≥ n, and n + n − n ≤ n ≤ n: The second transmitter sends at a rate of n − max(n, n), since some of the levels at which interference is caused at the third receiver can be repeated on the n levels at which it is receiving interference, such that they can be decoded at the third receiver.

11) n ≥ n, n ≥ n, n ≥ n, and n ≥ n: The second transmitter sends at a rate of n − max(n, (n + n − n)+), since (n + n − n)+ is the effective number of levels on which the second user produces interference to the third user, and the same arguments as in the previous case apply. It can also be shown that this results in the maximum sum rate.

The results can be extended to the Gaussian network, as shown in Appendix B.

We see that, for a general proof of achievability, we need the general capacity region. This is hard in general, which is why the understanding for a general number of hops remains open. Note that a graph with very few links, or with a very large number of links, will be known completely within a small number of hops.
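The closing observation, that a sparse (or nearly complete) network is fully known within a few hops, can be made concrete with a rough check. This is an illustrative sketch only: we simplify by treating users as nodes of an undirected interference graph and assume a link belongs to user i's h-local view when one of its endpoints is within h − 1 hops of i.

```python
from collections import deque

def h_view_is_global(adj, h):
    """Rough check (our simplification of the paper's definition): a link
    lies in user i's h-local view when one of its endpoints is within
    h - 1 hops of i in the undirected interference graph `adj`.  Returns
    True iff every user's h-local view already contains every link."""
    def dists(src):
        # breadth-first search distances from src
        d = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in d:
                    d[v] = d[u] + 1
                    q.append(v)
        return d

    links = {frozenset((u, v)) for u in adj for v in adj[u]}
    return all(
        min(dists(i).get(x, float("inf")) for x in link) <= h - 1
        for i in adj for link in links
    )

# A sparse 3-user chain 1 - 2 - 3: global view within 2 hops, not within 1.
chain = {1: [2], 2: [1, 3], 3: [2]}
print(h_view_is_global(chain, 2), h_view_is_global(chain, 1))  # True False
```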
However, there are cases in between where the capacity region may be required to decide whether α* = 1 or not. For example, in the 3-user interference network, we only needed to consider two cases for 3-local view. We resolved case (f), while case (g) is still open. We conjecture that α*(3) = 1 for any 3-user interference network; however, proving this requires a sum-capacity achieving strategy for (g), which is still open.

VII. CONCLUSIONS
In this paper, we gave a framework for the optimality of distributed decisions. The optimality is measured in terms of normalized sum-capacity, which is the best worst-case guarantee of the distributed decisions. We gave an achievable algorithm called maximal independent graph scheduling and characterized its performance in several examples. We found that this algorithm achieves normalized sum-capacity in several cases, while we also showed that it is not always optimal. We also found the normalized sum-capacity of linear deterministic interference networks with 1-local view.

APPENDIX A
UNIVERSALLY OPTIMAL STRATEGIES WITH 2 HOPS IN THREE-USER TOPOLOGIES
We first note that for K < 3, all the topologies have connected components that satisfy the property in the statement of the theorem, and thus the result holds trivially.

In a three-user interference network, there are at most six cross links, the existence or non-existence of which gives rise to 2^6 = 64 cases. Some of these cases are topologically equivalent (up to relabeling of the users), which reduces the total number of possibilities considered here to the sixteen shown in Figure 7.

(e): Suppose that a normalized sum rate of α can be achieved. Further, suppose that the second user sees the channel gains as n = n = n = n. In this case, the rate allocated by the second user has to be at least αn − τ for some τ independent of n. This is because the achievable sum rate has to be at least αC_sum − τ even if n = n = 0. Now suppose all the channel gains are equal to n. In this case, since R_1 + R_2 ≤ n, we have R_1 ≤ (1 − α)n + τ. Further, since R_2 + R_3 ≤ n, we have R_1 + R_2 + R_3 ≤ (2 − α)n + τ. This sum rate has to be at least α(2n) − τ, since the sum capacity is 2n. Thus, 2αn − τ ≤ (2 − α)n + τ, i.e., (3α − 2)n ≤ 2τ. If 3α − 2 > 0, then n can be chosen large enough that this inequality fails. So the inequality can hold for all n only when 3α − 2 ≤ 0, which gives α ≤ 2/3.

(f): Suppose that a normalized sum rate of α can be achieved. Further, suppose that the first user sees the channel gains as n = n = n = n. In this case, the second user will send at rate ≥ αn − τ if n = 0, by the same arguments as in part (e), which implies R_1 ≤ (1 − α)n + τ. Now suppose all the channel gains are equal to n. In this case, since R_2 + R_3 ≤ n, we have R_1 + R_2 + R_3 ≤ (2 − α)n + τ. This sum rate has to be at least α(2n) − τ, since the sum capacity is 2n. Thus, 2αn − τ ≤ (2 − α)n + τ, i.e., (3α − 2)n ≤ 2τ. Similar arguments as in (e) yield α ≤ 2/3.

(g): Suppose that a normalized sum rate of α can be achieved. Further, suppose that the first user sees the channel gains as n = n = n = n, n = 2n.
In this case, R + R ≤ n gives that if R = x, then R + R ≤ n − x. If n = 0, the first user should give a strategy such that (R, R) satisfies R + R ≥ nα − τ, giving R ≤ n(1 − α) + τ. Now suppose that n = n = 2n, giving R + R ≤ 2n. We thus have R_1 + R_2 + R_3 ≤ n(2 − α) + τ. Since this has to be at least α(2n) − τ, similar arguments as in (e) yield α ≤ 2/3.

(h): Suppose that a normalized sum rate of α can be achieved. Further, suppose that the third user sees the channel gains as n = n = n = n. In this case, the third user will send at rate ≥ αn − τ if n = n = 0, by the same arguments as in part (e). Further suppose n = n = n, n = 0, which implies R ≤ (1 − α)n + τ. Since R + R ≤ n, we have R_1 + R_2 + R_3 ≤ (2 − α)n + τ. This sum rate has to be at least α(2n) − τ, since the sum capacity is 2n. Thus, 2αn − τ ≤ (2 − α)n + τ, i.e., (3α − 2)n ≤ 2τ. Similar arguments as in (e) yield α ≤ 2/3.

(i): Suppose that a normalized sum rate of α can be achieved. Further, suppose that the first user sees all the channel gains within two hops equal to n. In this case, if n = 0, the second user will have to send at rate ≥ αn − τ, and thus R ≤ (1 − α)n + τ. Further suppose all the channel gains are equal to n, which implies R + R ≤ n, thus giving R_1 + R_2 + R_3 ≤ (2 − α)n + τ. This sum rate has to be at least α(2n) − τ, since the sum capacity is 2n. Thus, 2αn − τ ≤ (2 − α)n + τ, i.e., (3α − 2)n ≤ 2τ. Similar arguments as in (e) yield α ≤ 2/3.

(j): The same steps as in (i) yield α ≤ 2/3.

(k): Let n = 0 be the global information. Applying the same steps as in (g) for the other channel gains gives α ≤ 2/3.

(l): Suppose that a normalized sum rate of α can be achieved. Further, suppose that the second user sees the channel gains as n = n = n = n, n = 2n. In this case, we get R ≤ n(1 − α) + τ as in (g). Now suppose that n = n = 2n, n = 0, giving R + R ≤ 2n. Using similar arguments as in (g) yields α ≤ 2/3.

(m): Let n = 0 be the global information.
Applying the same steps as in (i) for the remaining channel gains yields α ≤ 2/3.

(n): Suppose that a normalized sum rate of α can be achieved. Then, if the first user sees all the channel gains within two hops as n, R ≥ αn − τ. Suppose that n = n = n, n = n = 0. In this case, R_1 + R_2 + R_3 ≤ (2 − α)n + τ, and since this has to be at least α(2n) − τ for all n, we get α ≤ 2/3.

(o): Suppose that a normalized sum rate of α can be achieved. Further, suppose that the third user sees the channel gains as n = n = n = n, n = 2n, n = 0. In this case, we get R ≤ n(1 − α) + τ as in (g). Now suppose that n = n = n = 2n, giving R + R ≤ 2n. Using similar arguments as in (g) yields α ≤ 2/3.

We will now consider K > 3. Suppose there exists a connected component with K > 3 users which is neither in the one-to-many configuration nor in the fully-connected configuration. Then, two cases arise:
1) There exists a transmitter (say T_1) whose degree d satisfies 1 < d < K.
2) All the transmitter nodes have degree 1 or K, and the number n of nodes having degree K satisfies 1 < n < K.

For the first case, take the nodes 1, ···, d as the nodes whose receivers are connected to T_1. Now, there exists a transmitter-receiver pair among d + 1, ···, K whose transmitter or receiver is connected to one of the nodes 1, ···, d. Choose any such pair and call it pair d + 1. The receiver of d + 1 is not connected to transmitter 1. Now, if the receiver of the first node is connected to the transmitter of d + 1, then choose the nodes 1, 2, d + 1 and assume that the direct links of all other users are zero, with this information given as a genie to all the nodes. This creates a genie-aided system in which the nodes 1, 2 and d + 1 have uncertainty about all the links connecting them and know the 2-local view among these links only.
In this genie-aided system, there does not exist any universally optimal strategy, thus proving the claim (since it forms a connected three-user component which is neither in the one-to-many configuration nor in the fully-connected configuration). If pair d + 1 is not connected to pair 1, say it is connected to a pair j with 2 ≤ j ≤ d; then choosing the nodes 1, j, d + 1 and repeating the same argument as above proves the statement.

For the second case, choose three nodes: any two whose transmitters have degree K, and one whose transmitter has degree 1. Repeating the above genie-aided proof for these three nodes proves the theorem.

This completes the proof that there does not exist a universally optimal strategy for a topology that contains a connected component which is neither in the one-to-many configuration nor in the fully-connected configuration.

It is easy to see that there exists a strategy with α = 1 if all the connected components of the topology are in the one-to-many or fully-connected configuration. For fully-connected components, all the nodes know their connected components, and thus each node in a component can use the optimal strategy for its component. For one-to-many components, each of the users whose transmitters have degree 1 sends at the rate that its direct channel can support, and the remaining user knows all the channel gains and adjusts its rate correspondingly. Assume a one-to-many component of L users with the first transmitter having degree L. The above strategy achieves a sum rate of R_sum = Σ_{i=2}^{L} n_ii + Σ_{k} 1{|U_k| = 0}, where |U_k| is the number of users potentially experiencing interference from the k-th signal level of the first transmitter; this is the same as the sum capacity with global channel information in [11].

We now see how the steps extend to a Gaussian network model. For this, we only consider case (e). Extension of the remaining steps is similar and is thus omitted.
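The one-to-many sum rate above can be sketched numerically. This is an illustrative sketch: the top-down level-indexing convention (level k of transmitter 1, counted from the top of its n_11 levels, reaches receiver j iff k ≤ n_cross[j]) is our assumption.

```python
def one_to_many_sum_rate(n_direct, n_cross):
    """Sum rate of the one-to-many strategy: sum_{i>=2} n_ii plus the number
    of signal levels k of transmitter 1 whose interfered set U_k is empty.
    Convention (our assumption): level k of T1, counted from the top of its
    n_11 levels, reaches receiver j iff k <= n_cross[j]."""
    interference_free = sum(
        1 for k in range(1, n_direct[0] + 1)      # levels of transmitter 1
        if not any(k <= c for c in n_cross)       # |U_k| == 0
    )
    return sum(n_direct[1:]) + interference_free

# T1 has n_11 = 5 levels; its top 2 and top 3 levels reach receivers 2 and 3.
# Levels 4 and 5 interfere nowhere, so R_sum = (4 + 4) + 2 = 10.
print(one_to_many_sum_rate([5, 4, 4], [2, 3]))  # 10
```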
(e): Suppose that a normalized sum rate of α can be achieved. Further, suppose that the second user sees the channel gains as h = h = h = √SNR. In this case, the rate allocated by the second user has to be at least α log(1 + SNR) − τ for some τ independent of SNR. This is because the achievable sum rate has to be at least αC_sum − τ even if h = h = 0. Now suppose all the channel gains are equal to √SNR. In this case, since R_1 + R_2 ≤ log(1 + SNR) + 1, we have R_1 ≤ (1 − α) log(1 + SNR) + τ + 1. Further, since R_2 + R_3 ≤ log(1 + SNR) + 1, we have R_1 + R_2 + R_3 ≤ (2 − α) log(1 + SNR) + τ + 2. This sum rate has to be at least α(2 log(1 + SNR)) − τ, since the sum capacity is at least 2 log(1 + SNR). Thus, 2α log(1 + SNR) − τ ≤ (2 − α) log(1 + SNR) + τ + 2, i.e., (3α − 2) log(1 + SNR) ≤ 2τ + 2. If 3α − 2 > 0, SNR can be chosen large enough that the inequality fails. So the inequality can hold for all SNR only when 3α − 2 ≤ 0, which gives α ≤ 2/3.

For the achievability, the sum capacity can be achieved for a fully-connected interference network, since every user knows the global network state. For a one-to-many network, the result in [11] gives that all users except the first transmitting at rate (log(SNR_i))+, with the first user backing off, achieves a sum rate within K − 1 bits of the sum capacity.

APPENDIX B
PROOF FOR THE GAUSSIAN NETWORK FOR 2-HOP OPTIMALITY OF THE Z-CHAIN IN THEOREM
The capacity region of the three-user Z-chain interference network is upper bounded by the following regions [7]. (We will use |h_ii|² = SNR_i and |h_ij|² = INR_j for j ≠ i in this Appendix. Further, note that although [7] states that h_ii and h_ij are positive reals, the outer bound proof extends to general channel gains using the same arguments.)

1) INR ≥ SNR and INR ≥ SNR: In this case, an outer bound on the rate region is given as follows.

R_i ≤ log(1 + SNR_i), i = 1, 2, 3, (14a)-(14c)
R_1 + R_2 ≤ log(1 + SNR + INR), (14d)
R_2 + R_3 ≤ log(1 + SNR + INR). (14e)

2) INR ≥ SNR and INR ≤ SNR: In this case, an outer bound on the rate region is given as follows.

R_i ≤ log(1 + SNR_i), i = 1, 2, 3, (15a)-(15c)
R_1 + R_2 ≤ log(1 + SNR + INR), (15d)
R_2 + R_3 ≤ log(1 + SNR) + log(1 + SNR/(1 + INR)). (15e)

Further, if (INR + 1)INR ≤ SNR,

R_1 + R_2 + R_3 ≤ log(1 + SNR/(1 + INR)) + log(1 + INR + SNR), (16)

else if (INR + 1)INR ≥ SNR,

R_1 + R_2 + R_3 ≤ log(1 + INR + SNR) + log(1 + INR). (17)

3) INR ≤ SNR and INR ≥ SNR: In this case, an outer bound on the rate region is given as follows.

R_i ≤ log(1 + SNR_i), i = 1, 2, 3, (18a)-(18c)
R_1 + R_2 ≤ log(1 + SNR) + log(SNR/INR), (18d)
R_2 + R_3 ≤ log(1 + SNR + INR). (18e)

4) INR ≤ SNR and INR ≤ SNR: In this case, an outer bound on the rate region is given as follows.

R_i ≤ log(1 + SNR_i), i = 1, 2, 3, (19a)-(19c)
R_1 + R_2 ≤ log(1 + SNR) + log(SNR/INR), (19d)
R_2 + R_3 ≤ log(1 + SNR) + log(SNR/INR). (19e)

Further, if (INR + 1)INR ≤ SNR,

R_1 + R_2 + R_3 ≤ log(1 + SNR) + log(SNR/INR) + log(SNR/INR), (20)

else if (INR + 1)INR ≥ SNR,

R_1 + R_2 + R_3 ≤ log(1 + SNR) + log(1 + INR + SNR). (21)

With this, we will now show that the achievability is within at most 4 bits of the outer bound. We will assume R_1 = log(1 + SNR_1) and R_3 = log(1 + SNR_3/(1 + INR_3)). The second transmitter will choose its rate, backing off to the other users, as shown in the following cases.

A. INR ≤ SNR

In this case, the second user assumes SNR′ = SNR/(1 + INR) and uses the strategy for the Z-network in [7] consisting of only the second and third users. Using this, it is easy to show that the achievable rate is as follows in the two cases described below.
1) If INR ≤ SNR/(1 + INR), a rate within a constant of log(1 + SNR) + log(1 + SNR/(1 + INR)) + log(1 + SNR/(1 + INR)) can be achieved.
2) If INR ≥ SNR/(1 + INR), a rate within a constant of log(1 + SNR) + log(1 + SNR/(1 + INR)) can be achieved.
Thus, the sum-capacity can be achieved to within 2 bits.

B. INR ≥ SNR, SNR ≤ INR SNR

In this case, the second user uses the same strategy as for the lower Z-network consisting of the second and third users, since its destination will be able to decode the first user's data treating the data of the second user as noise. Thus, the sum-capacity can be achieved to within 2 bits.

C. INR ≥ SNR, SNR ≥ INR SNR, SNR ≤ INR, INR ≤ SNR

In this case, the second user makes a codebook of rate min(log(1 + INR) − log(1 + SNR), log(1 + SNR/(1 + INR))) and transmits it using a power level of INR. This can be decoded jointly with the data from the first transmitter, since

R ≤ log(1 + INR), (22a)
R ≤ log(1 + SNR/(1 + INR)), (22b)
R + R ≤ log(1 + SNR/(1 + INR) + INR). (22c)

For the outer bound, if INR ≥ SNR, then R + R is within 1 bit and so is R, thus giving the sum-capacity within 2 bits. If INR ≤ SNR, it can be shown that the achievability is again within a constant number of bits in all the cases.

D. INR ≥ SNR, SNR ≥ INR SNR, SNR ≤ INR, INR ≥ SNR

We divide this case into three sub-cases.
1) INR ≥ SNR: In this case, the second transmitter uses a single codebook of rate min(log(1 + SNR + INR) − log(1 + SNR), log(1 + INR/SNR)) and sends it at an appropriately chosen power. The data can then be jointly decoded at the second and the third destinations. Note that the sum capacity can be achieved within 1 bit in this case.
2) INR ≤ SNR, SNR(1 + SNR/INR) ≤ INR: In this case, the second transmitter forms a single codebook of rate log(1 + SNR/(1 + INR)) and sends at a power of INR. The second receiver does joint decoding, while the third receiver treats the message from the second transmitter as noise. This achieves the sum capacity within 1 bit.
3) INR ≤ SNR, SNR(1 + SNR/INR) ≥ INR: In this case, the second transmitter forms a single codebook of rate (log(INR/SNR))+ and sends at a power of SNR(INR/SNR − 1). The second receiver does joint decoding, while the third receiver treats the message from the second transmitter as noise. This achieves the sum-capacity within a constant number of bits.

E. INR ≥ SNR, SNR ≥ INR SNR, SNR ≥ INR, INR ≤ SNR, (INR + 1)INR ≤ SNR

In this case, the second transmitter forms a single codebook of rate log(1 + INR + SNR) − log(1 + INR) − log(1 + SNR) and sends at a power of INR. The second receiver does joint decoding, while the third receiver treats the message from the second transmitter as noise. This achieves the sum capacity within a constant number of bits.

F. INR ≥ SNR, SNR ≥ INR SNR, SNR ≥ INR, INR ≤ SNR, (INR + 1)INR ≤ SNR, SNR(1 + SNR/INR) ≥ INR

If INR ≥ SNR, the second transmitter turns off and achieves within a constant number of bits of the sum capacity. So, we will only consider the case when INR ≤ SNR. In this case, the second transmitter forms a single codebook of rate log(INR/SNR) and sends at a power of SNR(INR/SNR − 1). The second receiver does joint decoding, while the third receiver treats the message from the second transmitter as noise. This achieves the sum capacity within a constant number of bits.

G. INR ≥ SNR, SNR ≥ INR SNR, SNR ≥ INR, INR ≤ SNR, (INR + 1)INR ≤ SNR, SNR(1 + SNR/INR) ≤ INR

In this case, the second transmitter forms a single codebook of rate log(1 + SNR/(1 + INR)) and sends at a power of INR. The second receiver does joint decoding, while the third receiver treats the message from the second transmitter as noise. This achieves the sum capacity within a constant number of bits.

H. INR ≥ SNR, SNR ≥ INR SNR, SNR ≥ INR, INR ≥ SNR, (INR + 1)INR ≤ SNR

In this case, the second transmitter forms two codebooks and sends a superposition of these codebooks. The first codebook has rate R_c = log(1 + INR/(INR + SNR(1 + INR))), which is transmitted with the remaining power. The second codebook has a rate of R_p = log(1 + SNR/(1 + INR)) − log(1 + SNR), which is transmitted with a power of INR. The second receiver decodes these two messages and the message of the first transmitter jointly. The third receiver decodes R_c treating the rest as noise, and then decodes the data of the third user treating R_p as noise. This achieves the sum capacity within a constant number of bits.

I. INR ≥ SNR, SNR ≥ INR SNR, SNR ≥ INR, INR ≥ SNR, (INR + 1)INR ≥ SNR, SNR(1 + SNR/INR) ≥ INR

In this case, the second transmitter forms two codebooks and sends a superposition of these codebooks. The first codebook has rate R_c = (min(log(1 + SNR/(1 + INR)), log(1 + INR/(INR + SNR(1 + INR)))) − 1)+, which is transmitted with the remaining power. The second codebook has a rate of R_p = (log(INR/SNR) − 1)+ and is transmitted at a power of INR. The third receiver decodes R_c treating the rest as noise, and then decodes the data of the third user treating R_p as noise. The second receiver jointly decodes the three codebooks, two of the second transmitter and one of the first. This achieves the sum capacity within a constant number of bits.

J. INR ≥ SNR, SNR ≥ INR SNR, SNR ≥ INR, INR ≥ SNR, (INR + 1)INR ≥ SNR, SNR(1 + SNR/INR) ≤ INR, INR ≤ SNR

In this case, the second transmitter forms two codebooks and sends a superposition of these codebooks. The first codebook has rate R_c = (min(log(1 + INR) − log(1 + SNR), log(1 + INR/(INR + SNR(1 + INR)))) − 1)+, which is transmitted with the remaining power.
The second codebook has a rate of R_p = log(1 + SNR/(1 + INR)) and is transmitted at a power of INR. The third receiver decodes R_c treating the rest as noise, and then decodes the data of the third user treating R_p as noise. The second receiver jointly decodes the three codebooks, two of the second transmitter and one of the first. This achieves the sum capacity within a constant number of bits.

K. INR ≥ SNR, SNR ≥ INR SNR, SNR ≥ INR, INR ≥ SNR, (INR + 1)INR ≥ SNR, SNR(1 + SNR/INR) ≤ INR, INR ≥ SNR

In this case, the second transmitter forms a single codebook of rate log(1 + min(SNR, INR/SNR, SNR + INR − SNR SNR)) and sends it at an appropriately chosen power. The second and the third receivers do joint decoding. This achieves the sum capacity within 1 bit.

REFERENCES
[1] D. Blackwell, L. Breiman, and A. J. Thomasian, "The capacity of a class of channels,"
Ann. Math. Stat., vol. 30, pp. 1229-1241, Dec. 1959.
[2] A. Raja, V. M. Prabhakaran, and P. Viswanath, "The two-user Gaussian compound interference channel," IEEE Transactions on Information Theory, pp. 5100-5120, Nov. 2009.
[3] L. Tassiulas and A. Ephremides, "Stability properties of constrained queueing systems and scheduling policies for maximum throughput in multihop radio networks," IEEE Transactions on Automatic Control, pp. 1936-1948, 1992.
[4] L. Meng, A. Zipf, and S. Winter, Map-based Mobile Services: Design, Interaction, and Usability. Springer, 1st edition, 2008.
[5] M. Kubale, Graph Colorings. American Mathematical Society, 2004.
[6] V. Aggarwal, Y. Liu, and A. Sabharwal, "Message passing in distributed wireless networks," in Proc. IEEE International Symposium on Information Theory, Jun.-Jul. 2009, Seoul, Korea.
[7] V. Aggarwal, Y. Liu, and A. Sabharwal, "Sum-capacity of interference channels with a local view: Impact of distributed decisions," submitted to IEEE Transactions on Information Theory, Oct. 2009, available at arXiv:0910.3494v1.
[8] H. Sato, "The capacity of the Gaussian interference channel using strong interference," IEEE Transactions on Information Theory, vol. IT-27, pp. 786-788, Nov. 1981.
[9] V. Aggarwal, S. Avestimehr, and A. Sabharwal, "Distributed universally optimal strategies for interference channels with partial message passing," in Proc. Allerton Conference on Communication, Control, and Computing, Monticello, IL, Sept.-Oct. 2009.
[10] A. S. Avestimehr, S. N. Diggavi, and D. N. C. Tse, "Wireless network information flow: A deterministic approach," submitted to IEEE Transactions on Information Theory, Aug. 2009, available at arXiv:0906.5394v2.
[11] G. Bresler, A. Parekh, and D. Tse, "The approximate capacity of the many-to-one and one-to-many Gaussian interference channels," submitted to IEEE Transactions on Information Theory, Sept. 2008, available at arXiv:0809.3554v1.
[12] B. Hajek and G. Sasaki, "Link scheduling in polynomial time,"