Outer Bounds for Multiple Access Channels with Feedback using Dependence Balance
Ravi Tandon      Sennur Ulukus
Department of Electrical and Computer Engineering, University of Maryland, College Park, MD 20742
[email protected]   [email protected]
October 29, 2018
Abstract
We use the idea of dependence balance [1] to obtain a new outer bound for the capacity region of the discrete memoryless multiple access channel with noiseless feedback (MAC-FB). We consider a binary additive noisy MAC-FB whose feedback capacity is not known. The binary additive noisy MAC considered in this paper can be viewed as the discrete counterpart of the Gaussian MAC-FB. Ozarow [2] established that the capacity region of the two-user Gaussian MAC-FB is given by the cut-set bound. Our result shows that for the discrete version of the channel considered by Ozarow, this is not the case. Direct evaluation of our outer bound is intractable due to an involved auxiliary random variable whose large cardinality prohibits an exhaustive search. We overcome this difficulty by using functional analysis to explicitly evaluate our outer bound. Our outer bound is strictly less than the cut-set bound at all points on the capacity region where feedback increases capacity. In addition, we explicitly evaluate the Cover-Leung achievable rate region [3] for the binary additive noisy MAC-FB in consideration. Furthermore, using the tools developed for the evaluation of our outer bound, we also explicitly characterize the boundary of the feedback capacity region of the binary erasure MAC, for which the Cover-Leung achievable rate region is known to be tight. This last result confirms that the feedback strategies developed in [4] for the binary erasure MAC are capacity achieving.

∗ This work was supported by NSF Grants CCF 04-47613, CCF 05-14846, CNS 07-16311 and CCF 07-29127.

1 Introduction
Noiseless feedback can increase the capacity region of the discrete memoryless MAC, unlike for the single-user discrete memoryless channel. This was shown by Gaarder and Wolf in [5] for the binary erasure MAC, which is defined as Y = X_1 + X_2. Ozarow showed in [2] that feedback can also increase the capacity region of the two-user Gaussian MAC-FB. A constructive achievability scheme based on the classical Schalkwijk-Kailath [6] feedback scheme was shown to be optimal for the two-user Gaussian MAC-FB. Moreover, the cut-set outer bound was shown to be tight in this case.

Subsequently, Cover and Leung obtained an achievable rate region for the general MAC-FB based on block Markov superposition coding [3]. Even though this region is in general larger than the capacity region of the MAC without feedback, it is not optimal for the two-user Gaussian MAC-FB, as was shown in [2]. Kramer [7] used the notion of directed information to obtain an expression for the capacity region of the discrete memoryless MAC-FB. Unfortunately, this expression is in an incomputable non-single-letter form. Recently, Bross and Lapidoth [8] proposed an achievable rate region for the two-user discrete memoryless MAC-FB and showed that their region includes the Cover-Leung region, the inclusion being strict for some channels.

For a specific class of MAC-FB, Willems [9] developed an outer bound that equals the Cover-Leung achievable rate region. For this class of MAC-FB, each channel input (say X_1) should be expressible as a deterministic function of the other channel input (X_2) and the channel output (Y). The binary erasure MAC considered by Gaarder and Wolf, where Y = X_1 + X_2, falls into this class of channels. Therefore, the Cover-Leung region is the feedback capacity region for the binary erasure MAC.

A general outer bound for the MAC-FB is the cut-set bound. Although the cut-set bound was shown to be tight for the two-user Gaussian MAC-FB, it is in general loose.
An intuitive reason for the cut-set bound to be loose for the general MAC-FB is its permissibility of arbitrary input distributions, some of which yield rates that may not be achievable. For instance, even though the Cover-Leung achievability scheme introduces correlation between X_1 and X_2, it is a limited form of correlation, as the channel inputs are conditionally independent given an auxiliary random variable, whereas the cut-set bound allows all possible correlations.

The idea of dependence balance was introduced by Hekstra and Willems in [1] to obtain an outer bound on the capacity region of the single-output two-way channel. The basic idea behind this outer bound is to restrict the set of allowable input distributions, consequently restricting arbitrary correlation between the channel inputs. The authors also developed a parallel channel extension for the dependence balance bound. The parallel channel extension can be interpreted as follows: the parallel channel output can be considered as genie-aided information which is made available at both transmitters and the receiver, and it also affects the set of allowable input distributions through the dependence balance bound. Depending on the choice of the genie information (which is equivalent to choosing a parallel channel), there is an inherent tradeoff between the set of allowable input distributions and the excess mutual information rate terms which appear in the rate expressions as a consequence of the parallel channel output. We will exploit this tradeoff provided by the parallel channel extension of the dependence balance bound to obtain a strict improvement over the cut-set bound for a particular MAC whose feedback capacity is not known.

To motivate the choice of our MAC, consider the binary erasure MAC used by Gaarder and Wolf, given by Y = X_1 + X_2. If we introduce binary additive noise at the channel output, then the channel becomes Y = X_1 + X_2 + N, where X_1, X_2 and N are all binary and N has a uniform distribution.
This is a non-deterministic noisy MAC which does not fall into any class of channels for which the feedback capacity is known. We should mention that this particular MAC was extensively studied by Kramer in [7, 10], where the first improvement over the Cover-Leung achievable rate region was obtained.

We extend the idea of dependence balance to obtain an outer bound for the entire capacity region of this binary additive noisy MAC-FB. Direct evaluation of the parallel channel based dependence balance bound is intractable due to an involved auxiliary random variable whose large cardinality prohibits an exhaustive search. We use composite functions and their properties to obtain a simple characterization for our bound. Our outer bound strictly improves upon the cut-set bound at all points on the boundary where feedback increases capacity. In addition, we explicitly evaluate the Cover-Leung achievable rate region for our binary additive noisy MAC-FB. We particularly focus on the symmetric-rate point on the feedback capacity region of this channel; the Cover-Leung achievable symmetric-rate for this channel was explicitly computed in [10]. By the symmetric-rate point, we refer to the maximum rate R such that the rate pair (R, R) lies in the capacity region of the MAC-FB.
It was shown in [12] that a binary and uniform auxiliary random variable T is sufficient to attain the sum-rate point on the capacity region of the binary erasure MAC-FB. We show here that this is also the case for any asymmetric rate point on the boundary of the feedback capacity region. This result also complements the work of Kramer [4], where feedback strategies were developed for the binary erasure MAC-FB and it was shown that these strategies achieve all rates yielded by a binary selection of the auxiliary random variable T in the capacity region. Our result hence shows in effect that the feedback strategies developed in [4] for the binary erasure MAC are optimal and capacity achieving.

2 System Model

A discrete memoryless two-user MAC-FB (see Figure 1) is defined by the following: two input alphabets X_1 and X_2, an output alphabet Y, and a probability transition function p(y | x_1, x_2) for all (x_1, x_2, y) ∈ X_1 × X_2 × Y. An (n, M_1, M_2, P_e) code for the MAC-FB consists of two sets of encoding functions f_{1i}, f_{2i} for i = 1, ..., n and a decoding function g:

    f_{1i}: M_1 × Y^{i−1} → X_1,   i = 1, ..., n
    f_{2i}: M_2 × Y^{i−1} → X_2,   i = 1, ..., n
    g: Y^n → M_1 × M_2

The two transmitters produce independent and uniformly distributed messages W_1 ∈ {1, ..., M_1} and W_2 ∈ {1, ..., M_2}, respectively, and transmit them through n channel uses. The average error probability is defined as P_e = Pr(g(Y^n) ≠ (W_1, W_2)). A rate pair (R_1, R_2) is said to be achievable for the MAC-FB if for any ε ≥ 0, there exists a pair of n encoding functions {f_{1i}}_{i=1}^{n}, {f_{2i}}_{i=1}^{n} and a decoding function g such that R_1 ≤ log(M_1)/n, R_2 ≤ log(M_2)/n and P_e ≤ ε for sufficiently large n. The capacity region of the MAC-FB is the closure of the set of all achievable rate pairs (R_1, R_2).

Figure 1: The multiple access channel with noiseless feedback (MAC-FB).

3 The Cut-Set Outer Bound

By applying Theorem 14.10.1 in [13], the cut-set outer bound on the capacity region of the MAC-FB can be obtained as:

    CS = { (R_1, R_2) :
        R_1 ≤ I(X_1; Y | X_2)    (1)
        R_2 ≤ I(X_2; Y | X_1)    (2)
        R_1 + R_2 ≤ I(X_1, X_2; Y) }    (3)

where the random variables (X_1, X_2, Y) have the joint distribution

    p(x_1, x_2, y) = p(x_1, x_2) p(y | x_1, x_2)    (4)

The cut-set outer bound allows all input distributions p(x_1, x_2), which makes it seemingly loose, since an achievable scheme might not attain the arbitrary correlation and rates permitted by the cut-set bound. Our aim is to restrict the set of allowable input distributions by using a dependence balance approach.

4 The Dependence Balance Bound

Hekstra and Willems [1] showed that the capacity region of the MAC-FB is contained within DB, where

    DB = { (R_1, R_2) :
        R_1 ≤ I(X_1; Y | X_2, T)    (5)
        R_2 ≤ I(X_2; Y | X_1, T)    (6)
        R_1 + R_2 ≤ I(X_1, X_2; Y) }    (7)

where the random variables (X_1, X_2, Y, T) have the joint distribution

    p(t, x_1, x_2, y) = p(t) p(x_1, x_2 | t) p(y | x_1, x_2)    (8)

and also satisfy the following dependence balance bound

    I(X_1; X_2 | T) ≤ I(X_1; X_2 | Y, T)    (9)

where T is subject to a cardinality constraint of |T| ≤ |X_1||X_2| + 2. The dependence balance bound restricts the set of input distributions in the sense that it allows only those input distributions p(t, x_1, x_2) which satisfy (9). It should be noted that by ignoring the constraint in (9), one obtains the cut-set bound.

5 Adaptive Parallel Channel Extension of the Dependence Balance Bound
In [1], Hekstra and Willems also developed an adaptive parallel channel extension for the dependence balance bound, which is given as follows. Let ∆(U) denote the set of all distributions on U and ∆(U|V) denote the set of all conditional distributions of U given V. Then for any mapping F: ∆(X_1 × X_2) → ∆(Z | X_1 × X_2 × Y), the capacity region of the MAC-FB is contained in DB_PC, where

    DB_PC = { (R_1, R_2) :
        R_1 ≤ I(X_1; Y, Z | X_2, T)    (10)
        R_2 ≤ I(X_2; Y, Z | X_1, T)    (11)
        R_1 ≤ I(X_1; Y | X_2)    (12)
        R_2 ≤ I(X_2; Y | X_1)    (13)
        R_1 + R_2 ≤ I(X_1, X_2; Y)    (14)
        R_1 + R_2 ≤ I(X_1, X_2; Y, Z | T) }    (15)

where the random variables (X_1, X_2, Y, Z, T) have the joint distribution

    p(t, x_1, x_2, y, z) = p(t) p(x_1, x_2 | t) p(y | x_1, x_2) p^+(z | x_1, x_2, y, t)    (16)

such that for all t

    p^+(z | x_1, x_2, y, t) = F( p_{X_1 X_2 | T}(x_1, x_2 | t) )    (17)

and such that

    I(X_1; X_2 | T) ≤ I(X_1; X_2 | Y, Z, T)    (18)

where T is subject to a cardinality bound of |T| ≤ |X_1||X_2| + 3.

We should remark that the parallel channel (defined by p^+(z | x_1, x_2, y, t)) is selected a priori, and for every choice of the parallel channel, one obtains an outer bound on the capacity region of the MAC-FB, which is in general tighter than the cut-set bound. The set of allowable input distributions p(t, x_1, x_2) are those which satisfy the constraint in (18). Also note that only the right hand side of (18), i.e., only I(X_1; X_2 | Y, Z, T), depends on the choice of the parallel channel. By carefully selecting p^+(z | x_1, x_2, y, t), one can reduce I(X_1; X_2 | Y, Z, T), thereby making the constraint in (18) more stringent, consequently reducing the set of allowable input distributions. To obtain an improvement over the cut-set bound, we need to select a "good" parallel channel such that it restricts the input distributions to a small allowable set and, at the same time, yields small values of I(X_1; Z | Y, X_2, T) and I(X_2; Z | Y, X_1, T).
These two mutual information "leak" terms are the extra terms that appear in (10) and (11) relative to the rates appearing in (5) and (6), respectively.

To motivate the choice of our particular parallel channel, first consider a trivial choice of Z: Z = φ (a constant). For this choice of Z, (18) reduces to (9) and we are not restricting the set of allowable input distributions any more than the DB bound does. Moreover, for a constant selection of Z, (10) and (11) reduce to (5) and (6), respectively. Thus, a constant selection of Z in DB_PC is equivalent to DB itself.

Also note that the smallest value of I(X_1; X_2 | Y, Z, T) is zero. Thus, it follows that if we select a parallel channel such that I(X_1; X_2 | Y, Z, T) = 0 for every input distribution p(t, x_1, x_2), then I(X_1; X_2 | T) = 0 by (18). Hence, the smallest set of input distributions permissible by DB_PC consists of those p(t, x_1, x_2) for which X_1 and X_2 are conditionally independent given T. Furthermore, for a parallel channel such that I(X_1; X_2 | Y, Z, T) = 0, the bound in (15) is redundant. This can be seen from:

    0 = I(X_1; X_2 | T) − I(X_1; X_2 | Y, Z, T)
      = I(X_2; Y, Z | T) − I(X_2; Y, Z | X_1, T)
      = I(X_1, X_2; Y, Z | T) − I(X_1; Y, Z | X_2, T) − I(X_2; Y, Z | X_1, T)    (19)

Using (19), it is clear that the sum of the constraints (10) and (11) is at least as strong as the constraint (15). This shows that (15) is redundant for the class of parallel channels where I(X_1; X_2 | Y, Z, T) = 0.
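The chain of equalities in (19) rests on chain-rule identities that hold for every joint distribution p(t, x_1, x_2, y, z), not only those with vanishing conditional dependence. A quick numerical sanity check of the identity (a sketch with our own helper names, drawing a random joint distribution over binary variables) is:

```python
import itertools
import math
import random

def entropy(p):
    """Entropy (bits) of a probability table given as {outcome: prob}."""
    return -sum(v * math.log2(v) for v in p.values() if v > 0)

def marginal(joint, axes):
    """Marginalize a joint table onto the given coordinate axes."""
    out = {}
    for o, pr in joint.items():
        key = tuple(o[i] for i in axes)
        out[key] = out.get(key, 0.0) + pr
    return out

def cmi(joint, A, B, C):
    """Conditional mutual information I(A; B | C) in bits."""
    return (entropy(marginal(joint, A + C)) + entropy(marginal(joint, B + C))
            - entropy(marginal(joint, A + B + C)) - entropy(marginal(joint, C)))

random.seed(1)
outcomes = list(itertools.product((0, 1), repeat=5))   # (t, x1, x2, y, z)
w = [random.random() for _ in outcomes]
joint = {o: wi / sum(w) for o, wi in zip(outcomes, w)}

T, X1, X2, YZ = [0], [1], [2], [3, 4]
# I(X1;X2|T) - I(X1;X2|Y,Z,T) versus the decomposition used in (19)
lhs = cmi(joint, X1, X2, T) - cmi(joint, X1, X2, YZ + T)
rhs = (cmi(joint, X1 + X2, YZ, T)
       - cmi(joint, X1, YZ, X2 + T)
       - cmi(joint, X2, YZ, X1 + T))
```

Both sides agree to numerical precision; hence, whenever the parallel channel forces I(X_1; X_2 | Y, Z, T) = 0 and thus I(X_1; X_2 | T) = 0, the right hand side of (19) vanishes as well.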
6 The Binary Additive Noisy MAC and the Choice of the Parallel Channel

In this paper, we will consider a binary-input additive noisy MAC given by

    Y = X_1 + X_2 + N    (20)

where N is binary, uniform over {0, 1}, and independent of X_1 and X_2. The channel output Y takes values from the set Y = {0, 1, 2, 3}. This channel does not fall into any class of MACs for which the feedback capacity region is known. This channel was also considered by Kramer in [7, 10], where it was shown that the Cover-Leung achievable rate is strictly sub-optimal for the sum-rate.

We select a parallel channel p^+(z | x_1, x_2, y) such that I(X_1; X_2 | Y, Z, T) = 0. By (18), this will imply I(X_1; X_2 | T) = 0, and hence only distributions of the type p(t, x_1, x_2) = p(t) p(x_1 | t) p(x_2 | t) will be allowed. By doing so, we restrict the set of allowable input distributions to be the smallest permitted by DB_PC, although we pay a penalty due to the positive "leak" terms I(X_1; Z | Y, X_2, T) and I(X_2; Z | Y, X_1, T). Two simple choices of Z which yield I(X_1; X_2 | Y, Z, T) = 0 are Z = X_1 and Z = X_2.
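The transition probabilities of the channel in (20) are easy to tabulate. The following sketch (our own helper names, not the authors' code) builds p(y | x_1, x_2) and confirms that every input pair leaves exactly one bit of noise uncertainty, i.e., H(Y | x_1, x_2) = 1:

```python
import math

def channel_rows():
    """p(y | x1, x2) for Y = X1 + X2 + N, with N uniform on {0, 1}."""
    rows = {}
    for x1 in (0, 1):
        for x2 in (0, 1):
            row = [0.0, 0.0, 0.0, 0.0]
            for n in (0, 1):
                row[x1 + x2 + n] += 0.5   # N is independent of the inputs
            rows[(x1, x2)] = row
    return rows

def h_row(row):
    """Entropy (bits) of one transition row."""
    return -sum(q * math.log2(q) for q in row if q > 0)

P = channel_rows()
noise_bits = [h_row(r) for r in P.values()]   # each equals 1.0
```

This one residual bit per channel use is precisely what makes the channel a noisy, non-deterministic counterpart of the binary erasure MAC.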
For each of these choices, the corresponding outer bounds are

    DB_PC^(1) = { (R_1, R_2) :
        R_1 ≤ I(X_1; Y | X_2, T) + H(X_1 | Y, X_2, T)    (21)
        R_2 ≤ I(X_2; Y | X_1, T)    (22)
        R_1 ≤ I(X_1; Y | X_2)    (23)
        R_1 + R_2 ≤ I(X_1, X_2; Y) }    (24)

and

    DB_PC^(2) = { (R_1, R_2) :
        R_1 ≤ I(X_1; Y | X_2, T)    (25)
        R_2 ≤ I(X_2; Y | X_1, T) + H(X_2 | Y, X_1, T)    (26)
        R_2 ≤ I(X_2; Y | X_1)    (27)
        R_1 + R_2 ≤ I(X_1, X_2; Y) }    (28)

where both DB_PC^(1) and DB_PC^(2) are evaluated over the set of input distributions of the form p(t, x_1, x_2) = p(t) p(x_1 | t) p(x_2 | t).

For the binary additive noisy MAC-FB in consideration, which is given in (20), the following equalities hold for any distribution of the form p(t, x_1, x_2) = p(t) p(x_1 | t) p(x_2 | t):

    H(X_1 | Y, X_2, T) = (1/2) H(X_1 | T)    (29)
    H(X_2 | Y, X_1, T) = (1/2) H(X_2 | T)    (30)

Using (29) and (30), we can simplify DB_PC^(1) and DB_PC^(2) as

    DB_PC^(1) = { (R_1, R_2) :
        R_1 ≤ min( I(X_1; Y | X_2), H(X_1 | T) )    (31)
        R_2 ≤ (1/2) H(X_2 | T)    (32)
        R_1 + R_2 ≤ I(X_1, X_2; Y) }    (33)

and

    DB_PC^(2) = { (R_1, R_2) :
        R_1 ≤ (1/2) H(X_1 | T)    (34)
        R_2 ≤ min( I(X_2; Y | X_1), H(X_2 | T) )    (35)
        R_1 + R_2 ≤ I(X_1, X_2; Y) }    (36)

where both bounds are evaluated over the set of distributions of the form p(t, x_1, x_2) = p(t) p(x_1 | t) p(x_2 | t) and the auxiliary random variable T is subject to a cardinality constraint of |T| ≤ |X_1||X_2| + 3. The evaluation of the above outer bounds is rather cumbersome because for binary inputs, the bound on |T| is |T| ≤ 7. To the best of our knowledge, no one has been able to conduct an exhaustive search over an auxiliary random variable whose cardinality is larger than 4. In Section 8, we will obtain an alternate characterization for our outer bounds using composite functions and their properties. For that purpose, we will first develop some useful properties of composite functions in the next section.

A valid outer bound is given by the intersection of DB_PC^(1) and DB_PC^(2),

    DB_PC = DB_PC^(1) ∩ DB_PC^(2)    (37)

We will show that this outer bound is strictly smaller than the cut-set bound at all points on the capacity region where feedback increases capacity.
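Equality (29), which drives the simplification from (21)-(28) to (31)-(36), can be checked numerically for a randomly drawn distribution of the form p(t) p(x_1 | t) p(x_2 | t). The snippet below is a sketch with our own helper names; K, q1 and q2 are arbitrary choices, not quantities from the paper:

```python
import math
import random

def entropy(p):
    """Entropy (bits) of a probability table given as {outcome: prob}."""
    return -sum(v * math.log2(v) for v in p.values() if v > 0)

def marginal(joint, axes):
    out = {}
    for o, pr in joint.items():
        k = tuple(o[i] for i in axes)
        out[k] = out.get(k, 0.0) + pr
    return out

random.seed(2)
K = 5                                        # |T| = 5, arbitrary
p_t = [random.random() for _ in range(K)]
p_t = [p / sum(p_t) for p in p_t]
q1 = [random.random() for _ in range(K)]     # Pr(X1 = 0 | T = t)
q2 = [random.random() for _ in range(K)]     # Pr(X2 = 0 | T = t)

# Joint distribution over (t, x1, x2, y) with Y = X1 + X2 + N, N uniform.
joint = {}
for t in range(K):
    for x1 in (0, 1):
        for x2 in (0, 1):
            px = ((q1[t] if x1 == 0 else 1 - q1[t])
                  * (q2[t] if x2 == 0 else 1 - q2[t]))
            for n in (0, 1):
                o = (t, x1, x2, x1 + x2 + n)
                joint[o] = joint.get(o, 0.0) + p_t[t] * px * 0.5

T, X1, X2, Y = [0], [1], [2], [3]
# H(X1 | Y, X2, T) versus (1/2) H(X1 | T), cf. (29)
lhs = (entropy(marginal(joint, X1 + Y + X2 + T))
       - entropy(marginal(joint, Y + X2 + T)))
rhs = 0.5 * (entropy(marginal(joint, X1 + T)) - entropy(marginal(joint, T)))
```

Intuitively, given X_2 and T, the observation Y − X_2 = X_1 + N reveals X_1 exactly half the time (when it equals 0 or 2) and reveals nothing otherwise, which is where the factor 1/2 comes from.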
7 Properties of Composite Functions

Before obtaining a characterization of our outer bounds, we will define a composite function and prove two lemmas regarding its properties. These lemmas will be essential in obtaining simple characterizations for our outer bounds and the Cover-Leung achievable rate region. Throughout the paper, we will refer to the entropy function as h^(k)(s_1, ..., s_k), which is defined as

    h^(k)(s_1, ..., s_k) = − Σ_{i=1}^{k} s_i log(s_i)    (38)

for s_i ≥ 0, i = 1, ..., k, and Σ_{i=1}^{k} s_i = 1, where all logarithms are to the base 2. We will denote h^(2)(s, 1 − s) simply as h(s). To characterize our bounds, we will make use of the following function

    φ(s) = (1 − √(1 − 2s))/2,   for 0 ≤ s ≤ 1/2
    φ(s) = (1 − √(2s − 1))/2,   for 1/2 < s ≤ 1    (39)

The function h(φ(s)) is symmetric around s = 1/2 for 0 ≤ s ≤ 1. The functions φ(s) and h(φ(s)) are illustrated in Figure 2.

Figure 2: The functions φ(s) and h(φ(s)).

From the definition of φ(s) in (39), it is clear that for any s ∈ [0, 1], φ(s) satisfies the following property

    φ(2s(1 − s)) = min(s, 1 − s)    (40)

As a consequence, the following holds as well

    h(φ(2s(1 − s))) = h(s)    (41)

For any s ∈ [0, 1], we can therefore recover s from 2s(1 − s) as

    s = φ(2s(1 − s)),       0 ≤ s ≤ 1/2
    s = 1 − φ(2s(1 − s)),   1/2 < s ≤ 1    (42)

For x ∈ [0, 1/2] and y ∈ [0, 1/2], let us define the function

    f(x, y) ≜ φ(x) + φ(y) − 2 φ(x) φ(y)    (43)
            = (1 − √((1 − 2x)(1 − 2y)))/2    (44)

From the above definition, it is clear that the function f(x, y) lies in the range [0, 1/2].

Lemma 1  The variable

    v = s_1 + s_2 − 2 s_1 s_2    (45)

is always lower bounded by f(2s_1(1 − s_1), 2s_2(1 − s_2)) for any s_1 ∈ [0, 1], s_2 ∈ [0, 1].

Proof: We will prove this lemma by considering all four possible cases.

1. If s_1 ∈ [0, 1/2], s_2 ∈ [0, 1/2], then from (42), s_1 = φ(2s_1(1 − s_1)) and s_2 = φ(2s_2(1 − s_2)), and hence

    v = f(2s_1(1 − s_1), 2s_2(1 − s_2))    (46)

2. If s_1 ∈ (1/2, 1], s_2 ∈ (1/2, 1], then from (42), s_1 = 1 − φ(2s_1(1 − s_1)) and s_2 = 1 − φ(2s_2(1 − s_2)), and hence

    v = f(2s_1(1 − s_1), 2s_2(1 − s_2))    (47)

3. If s_1 ∈ [0, 1/2], s_2 ∈ (1/2, 1], then from (42), s_1 = φ(2s_1(1 − s_1)) and s_2 = 1 − φ(2s_2(1 − s_2)), and hence

    v = 1 − f(2s_1(1 − s_1), 2s_2(1 − s_2)) ≥ f(2s_1(1 − s_1), 2s_2(1 − s_2))    (48)

where the inequality follows from the fact that f(2s_1(1 − s_1), 2s_2(1 − s_2)) ≤ 1/2.

4. If s_1 ∈ (1/2, 1], s_2 ∈ [0, 1/2], then from (42), s_1 = 1 − φ(2s_1(1 − s_1)) and s_2 = φ(2s_2(1 − s_2)), and hence

    v = 1 − f(2s_1(1 − s_1), 2s_2(1 − s_2)) ≥ f(2s_1(1 − s_1), 2s_2(1 − s_2))    (49)

where the inequality again follows from the fact that f(2s_1(1 − s_1), 2s_2(1 − s_2)) ≤ 1/2.

Thus, for any pair (s_1, s_2) with s_1 ∈ [0, 1], s_2 ∈ [0, 1], we have v ≥ f(2s_1(1 − s_1), 2s_2(1 − s_2)). ∎
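Property (40) and Lemma 1 can both be spot-checked on a fine grid. The helpers below mirror (39) and (44); this is a sketch of our own, not the authors' code:

```python
import math

def phi(s):
    """The function in (39)."""
    if s <= 0.5:
        return (1 - math.sqrt(1 - 2 * s)) / 2
    return (1 - math.sqrt(2 * s - 1)) / 2

def f(x, y):
    """The function in (43)-(44), for x, y in [0, 1/2]."""
    return (1 - math.sqrt((1 - 2 * x) * (1 - 2 * y))) / 2

grid = [i / 200 for i in range(201)]               # s in [0, 1]
# Property (40): phi(2s(1-s)) = min(s, 1-s)
prop_ok = all(abs(phi(2 * s * (1 - s)) - min(s, 1 - s)) < 1e-9 for s in grid)

# Lemma 1: v = s1 + s2 - 2 s1 s2 >= f(2 s1 (1-s1), 2 s2 (1-s2))
lemma1_ok = True
for s1 in grid:
    for s2 in grid:
        v = s1 + s2 - 2 * s1 * s2
        lb = f(2 * s1 * (1 - s1), 2 * s2 * (1 - s2))
        if v < lb - 1e-9:
            lemma1_ok = False
```

The endpoint values f(0, 0) = 0 and f(1/2, 1/2) = 1/2 confirm the claimed range of f.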
Lemma 2  The function f(x, y) is jointly convex in (x, y) for 0 ≤ x ≤ 1/2, 0 ≤ y ≤ 1/2.

Proof: Showing that the function f(x, y) is jointly convex in (x, y) is equivalent to showing that the Hessian matrix H of f(x, y) is positive semi-definite, which in turn is equivalent to showing that the eigenvalues of H are non-negative. The Hessian matrix of f(x, y) is

    H = (1/2) [ √(1 − 2y)/(1 − 2x)^{3/2}      −1/√((1 − 2x)(1 − 2y)) ]
              [ −1/√((1 − 2x)(1 − 2y))      √(1 − 2x)/(1 − 2y)^{3/2} ]    (50)

The two eigenvalues of H are

    λ_1 = 0
    λ_2 = (1/2) ( √(1 − 2y)/(1 − 2x)^{3/2} + √(1 − 2x)/(1 − 2y)^{3/2} )    (51)

which are non-negative for all 0 ≤ x < 1/2 and 0 ≤ y < 1/2 (and the convexity extends to the closed square by continuity), thus completing the proof. ∎

8 Evaluation of the Dependence Balance Outer Bound
We will now return to the characterization of our outer bounds DB_PC^(1) and DB_PC^(2). Let the cardinality of the auxiliary random variable T be fixed and arbitrary, say |T|. Then, the joint distribution p(t) p(x_1 | t) p(x_2 | t) can be described by the following variables:

    q_{1t} = Pr(X_1 = 0 | T = t),   t = 1, ..., |T|
    q_{2t} = Pr(X_2 = 0 | T = t),   t = 1, ..., |T|
    p_t = Pr(T = t),   t = 1, ..., |T|    (52)

We will characterize our outer bounds in terms of three variables u_1, u_2 and u_3, which are functions of p(t, x_1, x_2) and are defined as

    u_1 = Σ_t p_t q_{1t}(1 − q_{1t}) = Σ_t p_t u_{1t}    (53)
    u_2 = Σ_t p_t q_{2t}(1 − q_{2t}) = Σ_t p_t u_{2t}    (54)
    u_3 = Σ_t p_t (q_{1t} + q_{2t} − 2 q_{1t} q_{2t}) = Σ_t p_t u_{3t}    (55)

where we have defined

    u_{1t} = q_{1t}(1 − q_{1t})    (56)
    u_{2t} = q_{2t}(1 − q_{2t})    (57)
    u_{3t} = q_{1t} + q_{2t} − 2 q_{1t} q_{2t}    (58)

It should be noted that since 0 ≤ q_{jt} ≤ 1 for j = 1, 2 and t = 1, ..., |T|, the variables u_1, u_2, u_{1t} and u_{2t} all lie in the range [0, 1/4]. Our outer bounds DB_PC^(1) and DB_PC^(2) are comprised of the following information theoretic entities:

1. H(X_1 | T), H(X_2 | T)
2. I(X_1; Y | X_2), I(X_2; Y | X_1)
3. I(X_1, X_2; Y).

We will first obtain upper bounds for each one of these entities individually in terms of (u_1, u_2, u_3). We upper bound H(X_1 | T) as follows:

    H(X_1 | T) = Σ_t p_t h(q_{1t})    (59)
               = Σ_t p_t h(φ(2 q_{1t}(1 − q_{1t})))    (60)
               = Σ_t p_t h(φ(2 u_{1t}))    (61)
               ≤ h(φ(2 u_1))    (62)

where (60) follows from (41), (61) follows from (56), and (62) follows from the fact that h(φ(s)) is concave in s and the application of Jensen's inequality [13]. Using a similar set of inequalities for H(X_2 | T), we obtain

    H(X_2 | T) ≤ h(φ(2 u_2))    (63)

We will now upper bound I(X_1; Y | X_2) in terms of the variable u_3. For this purpose, let us first define

    a = P_{X_1 X_2}(0, 0) = Σ_t p_t q_{1t} q_{2t}    (64)
    b = P_{X_1 X_2}(0, 1) = Σ_t p_t q_{1t}(1 − q_{2t})    (65)
    c = P_{X_1 X_2}(1, 0) = Σ_t p_t (1 − q_{1t}) q_{2t}    (66)
    d = P_{X_1 X_2}(1, 1) = 1 − a − b − c    (67)

We now proceed as

    I(X_1; Y | X_2) = H(Y | X_2) − H(Y | X_1, X_2)    (68)
      = (a + c) h^(3)( a/(2(a + c)), 1/2, c/(2(a + c)) )
        + (b + d) h^(3)( b/(2(b + d)), 1/2, d/(2(b + d)) ) − 1    (69)
      = (a + c) h^(3)( a/(2(a + c)), 1/2, c/(2(a + c)) )
        + (b + d) h^(3)( d/(2(b + d)), 1/2, b/(2(b + d)) ) − 1    (70)
      ≤ h^(3)( (a + d)/2, 1/2, (b + c)/2 ) − 1    (71)
      = (1/2) h(b + c)    (72)
      = (1/2) h(u_3)    (73)

where (70) uses the invariance of h^(3) under permutation of its arguments, (71) follows from the concavity of the entropy function and the application of Jensen's inequality [13], and (73) follows since b + c = u_3 by (55) and (65)-(66). Using a similar set of inequalities, we also have

    I(X_2; Y | X_1) ≤ (1/2) h(u_3)    (74)

We will now obtain an upper bound on I(X_1, X_2; Y). First note that

    I(X_1, X_2; Y) = H(Y) − H(Y | X_1, X_2)    (75)
      = h^(4)( P_Y(0), P_Y(1), P_Y(2), P_Y(3) ) − 1    (76)

where

    P_Y(0) = Σ_t p_t q_{1t} q_{2t} / 2    (77)
    P_Y(1) = Σ_t p_t ( q_{1t} + q_{2t} − q_{1t} q_{2t} ) / 2    (78)
    P_Y(2) = Σ_t p_t ( 1 − q_{1t} q_{2t} ) / 2    (79)
    P_Y(3) = Σ_t p_t (1 − q_{1t})(1 − q_{2t}) / 2    (80)

Noting that for any probability vector (α, β, γ, θ),

    h^(4)(α, β, γ, θ) = (1/2) h^(4)(α, β, γ, θ) + (1/2) h^(4)(θ, γ, β, α)    (81)
      ≤ h^(4)( (α + θ)/2, (β + γ)/2, (β + γ)/2, (α + θ)/2 )    (82)
      = h(α + θ) + 1    (83)
      = h(1 − (β + γ)) + 1    (84)

where (82) follows from the concavity of the entropy function and the application of Jensen's inequality [13], we now obtain an upper bound on I(X_1, X_2; Y) by continuing from (76):

    I(X_1, X_2; Y) = h^(4)( P_Y(0), P_Y(1), P_Y(2), P_Y(3) ) − 1    (85)
      ≤ h(1 − (P_Y(1) + P_Y(2))) + 1 − 1    (86)
      = h( (1 − u_3)/2 )    (87)

where (86) follows from (84) and (87) follows from the fact that P_Y(1) + P_Y(2) = (1 + u_3)/2, where u_3 is as defined in (55).

8.1 A Set of Feasible (u_1, u_2, u_3): P

We have obtained upper bounds on the information theoretic entities which comprise our outer bounds in terms of the three variables u_1, u_2 and u_3. We will now give a feasible region for these triples based on the structure of these variables. First, note that for any q_{1t} ∈ [0, 1], u_{1t} = q_{1t}(1 − q_{1t}) ≤ 1/4. Similarly, u_{2t} = q_{2t}(1 − q_{2t}) ≤ 1/4. Hence, we have

    0 ≤ u_1 ≤ 1/4    (88)
    0 ≤ u_2 ≤ 1/4    (89)

We now obtain a lower bound on u_3 as

    u_3 = Σ_t p_t u_{3t}    (90)
      ≥ Σ_t p_t f(2 u_{1t}, 2 u_{2t})    (91)
      ≥ f( 2 Σ_t p_t u_{1t}, 2 Σ_t p_t u_{2t} )    (92)
      = f(2 u_1, 2 u_2)    (93)

where (91) follows from Lemma 1 and (92) follows from Lemma 2 and the application of Jensen's inequality [13]. We now obtain another lower bound on u_3:

    u_3 = Σ_t p_t u_{3t}    (94)
      = Σ_t p_t ( q_{1t} + q_{2t} − 2 q_{1t} q_{2t} )    (95)
      = Σ_t p_t ( q_{1t} − q_{1t}^2 + q_{2t} − q_{2t}^2 + (q_{1t} − q_{2t})^2 )    (96)
      ≥ Σ_t p_t ( q_{1t} − q_{1t}^2 + q_{2t} − q_{2t}^2 )    (97)
      = Σ_t p_t q_{1t}(1 − q_{1t}) + Σ_t p_t q_{2t}(1 − q_{2t})    (98)
      = u_1 + u_2    (99)

Finally, we obtain an upper bound on u_3 in terms of u_1 and u_2:

    u_3 = Σ_t p_t u_{3t}    (100)
      = Σ_t p_t ( q_{1t} + q_{2t} − 2 q_{1t} q_{2t} )    (101)
      = Σ_t p_t ( q_{1t} + q_{2t} − 2 q_{1t} q_{2t} + q_{1t}^2 + (1 − q_{2t})^2 − q_{1t}^2 − (1 − q_{2t})^2 )    (102)
      ≤ Σ_t p_t ( q_{1t} + q_{2t} − 2 q_{1t} q_{2t} + q_{1t}^2 + (1 − q_{2t})^2 − 2 q_{1t}(1 − q_{2t}) )    (103)
      = 1 − (u_1 + u_2)    (104)

where (103) follows from the inequality q_{1t}^2 + (1 − q_{2t})^2 ≥ 2 q_{1t}(1 − q_{2t}).

By noting that

    f(2 u_1, 2 u_2) − (u_1 + u_2) = (1 − √((1 − 4 u_1)(1 − 4 u_2)))/2 − (u_1 + u_2)    (105)
      = ( (1 − 4 u_1) + (1 − 4 u_2) − 2 √((1 − 4 u_1)(1 − 4 u_2)) ) / 4    (106)
      = ( √(1 − 4 u_1) − √(1 − 4 u_2) )^2 / 4    (107)
      ≥ 0    (108)

we see that the lower bound in (93) dominates the one in (99), and we obtain the following characterization of the feasible values of u_3 in terms of u_1 and u_2:

    f(2 u_1, 2 u_2) ≤ u_3 ≤ 1 − (u_1 + u_2)    (109)

Combining (88), (89) and (109), a set of feasible (u_1, u_2, u_3) is given as follows:

    P = { (u_1, u_2, u_3) : 0 ≤ u_1 ≤ 1/4;  0 ≤ u_2 ≤ 1/4;
          f(2 u_1, 2 u_2) ≤ u_3 ≤ 1 − (u_1 + u_2) }    (110)

It should be noted that the set P in (110) may not necessarily be the smallest feasible set of all triples (u_1, u_2, u_3). Since we are interested in a maximization over this set of triples, a possibly larger set P suffices.

8.2 Evaluation of DB_PC^(1) and DB_PC^(2)
Using the upper bounds on H(X_1 | T), H(X_2 | T), I(X_1; Y | X_2), I(X_2; Y | X_1) and I(X_1, X_2; Y) in (62), (63), (73), (74) and (87) in terms of (u_1, u_2, u_3), along with the feasible set of triples P in (110), we obtain the following two outer bounds on the capacity region of the binary additive noisy MAC-FB, starting from (31)-(33) and (34)-(36):

    DB_PC^(1) = ∪_{(u_1, u_2, u_3) ∈ P} { (R_1, R_2) :
        R_1 ≤ min( (1/2) h(u_3), h(φ(2 u_1)) )
        R_2 ≤ (1/2) h(φ(2 u_2))
        R_1 + R_2 ≤ h( (1 − u_3)/2 ) }    (111)

and

    DB_PC^(2) = ∪_{(u_1, u_2, u_3) ∈ P} { (R_1, R_2) :
        R_1 ≤ (1/2) h(φ(2 u_1))
        R_2 ≤ min( (1/2) h(u_3), h(φ(2 u_2)) )
        R_1 + R_2 ≤ h( (1 − u_3)/2 ) }    (112)

We will plot these outer bounds and their intersection in Figure 4. In the next section, we will explicitly characterize our upper bounds for the symmetric-rate point on the capacity region of the binary additive noisy MAC-FB in consideration.

9 The Symmetric-Rate Point

For the binary additive noisy MAC-FB in consideration, Kramer [7] evaluated the symmetric-rate cut-set bound, along with an input distribution p(t) p(x_1 | t) p(x_2 | t) which achieves it. By symmetric-rate we mean a rate R such that the rate pair (R, R) lies in the capacity region of the MAC-FB. For the symmetric rate, both DB_PC^(1) and DB_PC^(2) yield the same upper bound; hence, we will focus on DB_PC^(1). Using (111), we are interested in obtaining the largest R over all (u_1, u_2, u_3) ∈ P such that

    R ≤ min( (1/2) h(u_3), h(φ(2 u_1)) )    (113)
    R ≤ (1/2) h(φ(2 u_2))    (114)
    2R ≤ h( (1 − u_3)/2 )    (115)

We will show that a seemingly weaker version of the above bound improves upon the symmetric-rate cut-set bound. We will also show that the weaker bound is in fact the same as the above bound; its sole purpose is simplicity of evaluation and insight into the input distribution that attains it.
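As a numerical sanity check on (111) and the feasible set (110), one can scan a grid over P and read off the best symmetric rate permitted by the constraints (113)-(115). The sketch below is our own code (coarse grid, our own helper names), not the authors' evaluation:

```python
import math

def h(s):
    """Binary entropy (bits)."""
    return 0.0 if s in (0.0, 1.0) else -s * math.log2(s) - (1 - s) * math.log2(1 - s)

def phi(s):
    """The function in (39); here s <= 1/2 always."""
    return (1 - math.sqrt(max(0.0, 1 - 2 * s))) / 2

def f(x, y):
    """The function in (44)."""
    return (1 - math.sqrt((1 - 2 * x) * (1 - 2 * y))) / 2

best = 0.0
n = 120
for i in range(n + 1):
    u1 = 0.25 * i / n
    for j in range(n + 1):
        u2 = 0.25 * j / n
        lo, hi = f(2 * u1, 2 * u2), 1 - (u1 + u2)   # feasible u3, cf. (110)
        for k in range(21):
            u3 = lo + (hi - lo) * k / 20
            r = min(0.5 * h(u3), h(phi(2 * u1)),     # (113)
                    0.5 * h(phi(2 * u2)),            # (114)
                    0.5 * h((1 - u3) / 2))           # (115)
            best = max(best, r)
# With a fine enough grid, best approaches the symmetric rate derived below.
```

The scan lands close to 0.45 bits per channel use, consistent with the claim that the bound improves strictly on the cut-set bound at the symmetric-rate point.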
We first obtain a weakened version of (113) as

    R ≤ min( (1/2) h(u_3), h(φ(2 u_1)) ) ≤ h(φ(2 u_1))    (116)

Next, consider (115):

    2R ≤ h( (1 − u_3)/2 )    (117)
       ≤ h( (1 − f(2 u_1, 2 u_2))/2 )    (119)

where (119) follows from (93) and the fact that the binary entropy function h(s) is monotonically increasing in s for s ∈ [0, 1/2]. Combining (114), (116) and (119), we are interested in the largest R such that

    R ≤ max_{u_1, u_2 ∈ [0, 1/4]} min( h(φ(2 u_1)), (1/2) h(φ(2 u_2)),
                                       (1/2) h( (1 − f(2 u_1, 2 u_2))/2 ) )    (120)

We note that this upper bound on the symmetric rate depends only on u_1 and u_2, and therefore we replace the feasible set P with u_1, u_2 ∈ [0, 1/4]. We know that h(φ(s)) is concave in s for s ∈ [0, 1]; hence h(φ(2 u_1)) and h(φ(2 u_2)) are concave in u_1 and u_2, respectively, and hence concave in the pair (u_1, u_2). We also have the following lemma.

Lemma 3
The function

    g(u_1, u_2) = (1/2) h( (1 − f(2 u_1, 2 u_2))/2 )    (121)

is monotonically decreasing and jointly concave in the pair (u_1, u_2) for u_1, u_2 ∈ [0, 1/4].

Proof: By symmetry, it suffices to show that for a fixed u_1, the function g(u_1, u_2) is monotonically decreasing in u_2. Substituting the value of f(2 u_1, 2 u_2), we have

    g(u_1, u_2) = (1/2) h( (1 − (φ(2 u_1) + φ(2 u_2) − 2 φ(2 u_1) φ(2 u_2)))/2 )    (122)
                = (1/2) h( (1 − φ(2 u_1))/2 − φ(2 u_2)(1 − 2 φ(2 u_1))/2 )    (123)

Now using the fact that φ(2s) is increasing in s for s ∈ [0, 1/4], we have that for u_2' ≥ u_2, φ(2 u_2') ≥ φ(2 u_2). Moreover, the following holds:

    φ(2 u_2')(1 − 2 φ(2 u_1))/2 ≥ φ(2 u_2)(1 − 2 φ(2 u_1))/2    (124)

since φ(2 u_1) ≤ 1/2. Now using the above inequality along with the fact that the binary entropy function h(s) is increasing for 0 ≤ s ≤ 1/2, we have that for u_2' ≥ u_2,

    (1/2) h( (1 − φ(2 u_1))/2 − φ(2 u_2)(1 − 2 φ(2 u_1))/2 )
        ≥ (1/2) h( (1 − φ(2 u_1))/2 − φ(2 u_2')(1 − 2 φ(2 u_1))/2 )    (125)

This shows that for a fixed u_1, the function g(u_1, u_2) is monotonically decreasing in u_2. As the function is symmetric in u_1 and u_2, the monotonicity of g(u_1, u_2) in (u_1, u_2) follows.

To show the concavity of g(u_1, u_2) in the pair (u_1, u_2), we first note from Lemma 2 that f(2 u_1, 2 u_2) is jointly convex in the pair (u_1, u_2). We define another function

    ξ(u_1, u_2) = (1 − f(2 u_1, 2 u_2))/2    (126)

Note that ξ(u_1, u_2) is jointly concave in the pair (u_1, u_2) and takes values in [1/4, 1/2]. Furthermore, the binary entropy function h(s) is concave and nondecreasing for s ∈ [0, 1/2]. Hence, rewriting the function g(u_1, u_2) as a composition of two functions, we obtain

    g(u_1, u_2) = (1/2) h( ξ(u_1, u_2) )    (127)

From the theory of composite functions [14], we know that a composite function f_1(f_2(s)) is concave in s if f_1(·) is concave and nondecreasing and f_2(s) is concave in s. Identifying f_1(·) with h(·) and f_2(u_1, u_2) with ξ(u_1, u_2), the concavity of g(u_1, u_2) in the pair (u_1, u_2) is established. ∎
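Lemma 3 admits a quick numerical check of both monotonicity and midpoint concavity over the square [0, 1/4]^2 (a sketch with our own helper names; the random segment endpoints are arbitrary):

```python
import math
import random

def h(s):
    return 0.0 if s in (0.0, 1.0) else -s * math.log2(s) - (1 - s) * math.log2(1 - s)

def phi(s):
    return (1 - math.sqrt(max(0.0, 1 - 2 * s))) / 2   # s <= 1/2 here

def f(x, y):
    return (1 - math.sqrt((1 - 2 * x) * (1 - 2 * y))) / 2

def g(u1, u2):
    """The function in (121)."""
    return 0.5 * h((1 - f(2 * u1, 2 * u2)) / 2)

# Monotonicity: g is nonincreasing in u2 for each fixed u1 (and by symmetry in u1).
grid = [0.25 * i / 50 for i in range(51)]
mono_ok = all(g(u1, grid[j]) >= g(u1, grid[j + 1]) - 1e-12
              for u1 in grid for j in range(50))

# Midpoint concavity along random segments inside the square.
random.seed(3)
conc_ok = True
for _ in range(200):
    p = (random.uniform(0, 0.25), random.uniform(0, 0.25))
    q = (random.uniform(0, 0.25), random.uniform(0, 0.25))
    mid = g((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)
    if mid < (g(*p) + g(*q)) / 2 - 1e-12:
        conc_ok = False
```

Since g is smooth, midpoint concavity on a dense set of segments is a reasonable numerical proxy for the joint concavity proved above.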
Therefore, all three functions in the min(·) in (120) are concave in (u_1, u_2). Invoking the fact that the minimum of concave functions is concave, we conclude that the maximum in (120) is unique. We will now show that the unique pair (u_1*, u_2*) that attains this maximum satisfies the property that h(φ(2 u_1*)) = (1/2) h(φ(2 u_2*)) = g(u_1*, u_2*).

For this purpose, we first characterize the pairs (ũ_1, ũ_2) for which the following holds:

    h(φ(2 ũ_1)) = (1/2) h(φ(2 ũ_2)) = g(ũ_1, ũ_2)    (128)

By using (128), we obtain two equations for ũ_1 and ũ_2:

    h(φ(2 ũ_1)) = (1/2) h( (1 − φ(2 ũ_1)) / (3 − 2 φ(2 ũ_1)) )    (129)
    φ(2 ũ_2) = (1 − φ(2 ũ_1)) / (3 − 2 φ(2 ũ_1))    (130)

From (129), one can see that 2 ũ_1 is the unique solution s ∈ [0, 1/2] of the equation

    h(φ(s)) = (1/2) h( (1 − φ(s)) / (3 − 2 φ(s)) )    (131)

Obtaining the optimal ũ_1 from this equation is illustrated in Figure 3. The unique solution (ũ_1, ũ_2) of (129) and (130) is

    ũ_1 ≈ 0.086,   ũ_2 ≈ 0.218    (132)

Figure 3: Characterization of the optimal ũ_1.

We will now show that the pair (ũ_1, ũ_2) yields the maximum in (120). Returning to the maximization problem (120), first denote by S the region of allowable (u_1, u_2),

    S = { (u_1, u_2) : 0 ≤ u_1 ≤ 1/4;  0 ≤ u_2 ≤ 1/4 }    (133)

Also define a subset of this region,

    S̃ = { (u_1, u_2) : u_1 ∈ (ũ_1, 1/4];  u_2 ∈ (ũ_2, 1/4] }    (134)

where (ũ_1, ũ_2) is given by (132). We will now show that the pair (ũ_1, ũ_2) yields the solution of the maximization problem in (120). Consider the following two cases.

1. If (u_1, u_2) ∈ S̃, then by Lemma 3 we have g(u_1, u_2) ≤ g(ũ_1, ũ_2), using which we obtain

    min( h(φ(2 u_1)), (1/2) h(φ(2 u_2)), g(u_1, u_2) ) ≤ g(u_1, u_2) ≤ g(ũ_1, ũ_2)    (135)

2. If (u_1, u_2) ∈ S \ S̃, we either have u_1 ≤ ũ_1 or u_2 ≤ ũ_2 or both. Using this along with the fact that h(φ(2s)) is monotonically increasing in s for s ∈ [0, 1/4], we obtain

    min( h(φ(2 u_1)), (1/2) h(φ(2 u_2)), g(u_1, u_2) ) ≤ h(φ(2 ũ_1))    (136)

The above two cases show the following:

    max_{u_1, u_2 ∈ [0, 1/4]} min( h(φ(2 u_1)), (1/2) h(φ(2 u_2)), g(u_1, u_2) ) = h(φ(2 ũ_1))    (137)
      = (1/2) h(φ(2 ũ_2))    (138)
      = g(ũ_1, ũ_2)    (139)

Thus, the maximum in (120) is attained at (u_1*, u_2*) = (ũ_1, ũ_2). We now obtain a distribution p(t) p(x_1 | t) p(x_2 | t) which attains this symmetric-rate upper bound. Fix T to be binary, and select the involved probabilities as

    p_1 = p_2 = 1/2    (140)
    q_{11} = 1 − q_{12} = φ(2 u_1*)    (141)
    q_{21} = 1 − q_{22} = φ(2 u_2*)    (142)

The reason for constructing such an input distribution is that, at this specific distribution, we have the following exact equalities:

    H(X_1 | T) = h(φ(2 u_1*))    (143)
    (1/2) H(X_2 | T) = (1/2) h(φ(2 u_2*))    (144)
    (1/2) I(X_1, X_2; Y) = g(u_1*, u_2*)    (145)

and we achieve the outer bound we developed with equality. Substituting the values of (u_1*, u_2*), we obtain the distribution

    p_1 = p_2 = 1/2    (146)
    q_{11} = 1 − q_{12} = φ(2 u_1*) ≈ 0.095    (147)
    q_{21} = 1 − q_{22} = φ(2 u_2*) ≈ 0.322    (148)

The value of u_3* corresponding to this distribution is given by

    u_3* = Σ_t p_t ( q_{1t} + q_{2t} − 2 q_{1t} q_{2t} )    (149)
         = f(2 u_1*, 2 u_2*)    (150)
         ≈ 0.356    (151)

where (149) follows from the definition of u_3 for a given p(t, x_1, x_2), and (151) is obtained by substituting the distribution specified in (146)-(148).
Moreover, φ(2u₁*) < u₃* < 1/2, hence we also have that

(1/2)h(u₃*) ≥ (1/2)h(φ(2u₁*)) = (1/2)h(φ(2u₂*))    (152)

This shows that the weakened version of the upper bound obtained in (120) is indeed tight, and that a binary auxiliary random variable T with a uniform distribution over {0, 1} is sufficient to attain this symmetric-rate upper bound.
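The quantities h(φ(2u)) appearing above are easy to evaluate numerically. The following is a minimal sketch, under the assumption that φ has the closed form φ(2u) = (1 − √(1 − 4u))/2, i.e., that φ(2u) inverts the map q ↦ q(1 − q) on [0, 1/2]; the function φ is defined in the earlier sections, which are not reproduced here.

```python
import math

def h(p):
    """Binary entropy in bits, with h(0) = h(1) = 0."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

def phi(s):
    """Assumed closed form: phi(2u) is the root q <= 1/2 of q(1 - q) = u."""
    return (1.0 - math.sqrt(1.0 - 2.0 * s)) / 2.0

# phi(2u) indeed inverts q -> q(1 - q) for u in [0, 1/4]
for u in [0.0, 0.05, 0.1, 0.2, 0.25]:
    q = phi(2.0 * u)
    assert 0.0 <= q <= 0.5
    assert abs(q * (1.0 - q) - u) < 1e-12

# h(phi(2s)) is monotonically increasing in s on [0, 1/4],
# the monotonicity fact used in case 2 above
grid = [h(phi(2.0 * (0.25 * i / 200.0))) for i in range(201)]
assert all(a <= b + 1e-12 for a, b in zip(grid, grid[1:]))
```

Under this assumed φ, the crossing point of the two curves in Figure 3 can then be located by a simple bisection in s.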
10 Evaluation of the Cover-Leung Achievable Rate Region
For completeness, we will also obtain a simple characterization of the Cover-Leung inner bound for our binary additive noisy MAC-FB. For this purpose, we follow a two-step approach. In the first step, we obtain an outer bound on the achievable rate region in terms of two variables (u₁, u₂). In the second step, we specify an input distribution, as a function of (u₁, u₂), which achieves this outer bound. We thereby arrive at an alternate characterization of the Cover-Leung achievable rate region in terms of the variables (u₁, u₂).

The Cover-Leung achievable rate region [3] is given as,

CL = { (R₁, R₂) : R₁ ≤ I(X₁; Y | X₂, T)    (153)
R₂ ≤ I(X₂; Y | X₁, T)    (154)
R₁ + R₂ ≤ I(X₁, X₂; Y) }    (155)

where the random variables (T, X₁, X₂, Y) have the joint distribution,

p(t, x₁, x₂, y) = p(t) p(x₁|t) p(x₂|t) p(y|x₁, x₂)    (156)

and the random variable T is subject to a cardinality constraint of |T| ≤ min(|X₁||X₂| + 1, |Y| + 2). For the binary additive noisy MAC in consideration, the constraints in (153)-(155) become,

R₁ ≤ (1/2)H(X₁|T)    (157)
R₂ ≤ (1/2)H(X₂|T)    (158)
R₁ + R₂ ≤ I(X₁, X₂; Y)    (159)

We will first obtain an outer bound on the region specified by (157)-(159) in terms of two variables (u₁, u₂). For every pair (u₁, u₂), we will then specify an input distribution which attains this outer bound. Note that the three constraints (157)-(159) are of a similar form as in the case of DB_PC^(1) and DB_PC^(2), and we proceed in a similar manner to obtain upper bounds on the three terms above in terms of u₁ and u₂ as,

R₁ ≤ (1/2)h(φ(2u₁))    (160)
R₂ ≤ (1/2)h(φ(2u₂))    (161)
R₁ + R₂ ≤ h( (1 − f(2u₁, 2u₂))/2 )    (162)

where the variables (u₁, u₂) belong to the set S defined in (133). Hence, an outer bound on the rate region specified by (157)-(159) is given as O_CL, where

O_CL = ⋃_{(u₁,u₂) ∈ S} { (R₁, R₂) : R₁ ≤ (1/2)h(φ(2u₁)), R₂ ≤ (1/2)h(φ(2u₂)), R₁ + R₂ ≤ h( (1 − f(2u₁, 2u₂))/2 ) }    (163)

Let (u₁, u₂) be any arbitrary pair which belongs to S.
Consider an input distribution for which |T| = 2, T is uniform over {0, 1}, and

p₀ = p₁ = 1/2    (164)
q₁₀ = 1 − q₁₁ = φ(2u₁)    (165)
q₂₀ = 1 − q₂₁ = φ(2u₂)    (166)

For this input distribution, we obtain the following exact equalities,

H(X₁|T) = h(φ(2u₁))    (167)
H(X₂|T) = h(φ(2u₂))    (168)
I(X₁, X₂; Y) = h( (1 − f(2u₁, 2u₂))/2 )    (169)

We have thus shown that the outer bound we obtained on the achievable rate region in terms of (u₁, u₂) can be attained by a set of input distributions for which the involved auxiliary random variable T is binary and uniform. This in turn implies that a binary and uniform random variable T is sufficient to characterize the entire Cover-Leung achievable rate region for the binary additive noisy MAC-FB. By varying over all such input distributions, or equivalently, by varying (u₁, u₂) over the set S, we obtain the entire Cover-Leung achievable rate region. We should remark here that when evaluating the DB_PC bound in the previous section for Z = X₁ and Z = X₂, it was not necessary to specify the distribution which achieves the bound, since it was an outer bound. On the other hand, when evaluating the Cover-Leung bound, since it is an achievable rate region, it is necessary to exhibit a distribution which achieves the bound.

The dependence balance bounds corresponding to the parallel channel choices Z = X₁ and Z = X₂, along with the cut-set upper bound and the Cover-Leung achievable rate region, are shown in Figure 4. It is interesting to note that our bound improves upon the cut-set bound at all points where the Cover-Leung achievable rate region is strictly larger than the capacity region without feedback. In other words, our bound improves upon the cut-set bound at all points where feedback increases capacity.

We should remark that our choices of parallel channels, namely Z = X₁ and Z = X₂, are the simplest ones which ensure that I(X₁; X₂ | Y, Z, T) = 0, but they yield fixed information leaks.
We believe that with a more elaborate choice of a parallel channel, i.e., by carefully selecting a parameterized parallel channel p⁺(z | x₁, x₂, y, t) such that I(X₁; X₂ | Y, Z, T) = 0, one would still be able to restrict the input distributions to a conditionally independent form, and could then optimize the parameters of the parallel channel to minimize the information leak terms. This approach can potentially improve upon our outer bound.
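The reduction from (153)-(155) to (157)-(159) can be checked by brute force. The sketch below is an illustration under the assumption that the binary additive noisy MAC is Y = X₁ + X₂ + N with N ∼ Bernoulli(1/2) (real addition, quaternary output), i.e., the discrete counterpart of Ozarow's Gaussian channel; for every distribution of the form (156), I(X₁; Y | X₂, T) then collapses to H(X₁|T)/2, which is the content of (157).

```python
import math
from collections import defaultdict
from itertools import product

def H(dist):
    """Entropy in bits of a {outcome: probability} dictionary."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def marginal(joint, keep):
    """Marginalize a joint dict onto the coordinate positions in `keep`."""
    out = defaultdict(float)
    for k, p in joint.items():
        out[tuple(k[i] for i in keep)] += p
    return out

def cl_terms(pt, q1, q2):
    """Cover-Leung terms for the assumed channel Y = X1 + X2 + N, N ~ Bern(1/2),
    with p(t)p(x1|t)p(x2|t); q1[t] = P(X1 = 0 | T = t), q2[t] = P(X2 = 0 | T = t)."""
    joint = defaultdict(float)  # keys are (t, x1, x2, y)
    for t, x1, x2, n in product(range(len(pt)), (0, 1), (0, 1), (0, 1)):
        p1 = q1[t] if x1 == 0 else 1.0 - q1[t]
        p2 = q2[t] if x2 == 0 else 1.0 - q2[t]
        joint[(t, x1, x2, x1 + x2 + n)] += pt[t] * p1 * p2 * 0.5
    Hfull = H(joint)
    # I(A; B | C) = H(A, C) + H(B, C) - H(C) - H(A, B, C)
    I1 = (H(marginal(joint, (0, 1, 2))) + H(marginal(joint, (0, 2, 3)))
          - H(marginal(joint, (0, 2))) - Hfull)
    # I(X1, X2; Y) = H(X1, X2) + H(Y) - H(X1, X2, Y)
    I12 = (H(marginal(joint, (1, 2))) + H(marginal(joint, (3,)))
           - H(marginal(joint, (1, 2, 3))))
    HX1_T = H(marginal(joint, (0, 1))) - H(marginal(joint, (0,)))
    return I1, I12, HX1_T

I1, I12, HX1_T = cl_terms([0.5, 0.5], [0.3, 0.7], [0.2, 0.9])
assert abs(I1 - 0.5 * HX1_T) < 1e-9  # the collapse behind constraint (157)
```

The same brute-force routine also yields the sum-rate term I(X₁, X₂; Y) of (159) directly from the joint distribution.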
11 The Capacity Region of the Binary Erasure MAC-FB
The capacity region of a class of discrete memoryless MAC-FB was characterized in [9] by establishing a converse, and it was shown to be equal to the Cover-Leung achievable rate region. This class of channels satisfies the property that at least one of the channel inputs, say X₂, can be written as a deterministic function of the other channel input X₁ and the channel output Y. The binary erasure MAC, where Y = X₁ + X₂, falls into this class of channels. In addition, the binary erasure MAC-FB is the noiseless version of the binary additive noisy MAC-FB studied in this paper.

Willems showed in [12] that a binary selection of the auxiliary random variable is sufficient to obtain the sum-rate point of the capacity region of the binary erasure MAC-FB. In this section, we will show that by using our results for composite functions, presented in the previous sections, it is possible to obtain all points on the boundary of this capacity region using a binary auxiliary random variable. The feedback capacity region of this channel is given by the Cover-Leung achievable rate region in (153)-(155), which can be simplified for the binary erasure MAC-FB as,

R₁ ≤ H(X₁|T)    (170)
R₂ ≤ H(X₂|T)    (171)
R₁ + R₂ ≤ H(Y)    (172)

We obtain three upper bounds on the expressions appearing in (170)-(172). From (62), we have,

H(X₁|T) ≤ h(φ(2u₁))    (173)

Similarly, we also have

H(X₂|T) ≤ h(φ(2u₂))    (174)

We now obtain an upper bound on H(Y), by first noting that,

H(Y) = h⁽³⁾(P_Y(0), P_Y(1), P_Y(2))    (175)

where

P_Y(0) = Σ_t p_t q₁t q₂t    (176)
P_Y(1) = Σ_t p_t (q₁t + q₂t − 2 q₁t q₂t)    (177)
P_Y(2) = Σ_t p_t (1 − q₁t)(1 − q₂t)    (178)

Now, we use the following inequality established in [12],

h⁽³⁾(a, b, c) = (1/2)h⁽³⁾(a, b, c) + (1/2)h⁽³⁾(c, b, a)    (179)
≤ h⁽³⁾( (a + c)/2, b, (a + c)/2 )    (180)
= h(b) + 1 − b    (181)

where (180) follows by the concavity of the entropy function and the application of Jensen's inequality [13].
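The inequality chain (179)-(181) is straightforward to verify numerically; the following sketch checks it on random points of the probability simplex, together with the equality case a = c of the symmetrized distribution in (180):

```python
import math
import random

def h(p):
    """Binary entropy in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

def h3(a, b, c):
    """Ternary entropy in bits of the distribution (a, b, c)."""
    return sum(-p * math.log2(p) for p in (a, b, c) if p > 0)

random.seed(0)
for _ in range(1000):
    x, y = sorted((random.random(), random.random()))
    a, b, c = x, y - x, 1.0 - y      # a random point on the simplex
    assert h3(a, b, c) <= h(b) + 1.0 - b + 1e-12

# equality holds when a = c, as in the symmetrized distribution of (180)
b = 0.4
assert abs(h3(0.3, b, 0.3) - (h(b) + 1.0 - b)) < 1e-12
```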
Using (181) and continuing from (175), we obtain

H(Y) = h⁽³⁾(P_Y(0), P_Y(1), P_Y(2))    (182)
≤ h(P_Y(1)) + 1 − P_Y(1)    (183)
= h(u₃) + 1 − u₃    (184)

where u₃ is defined in (55). Using (173), (174) and (184), we can write an outer bound O on the capacity region as follows,

O = ⋃_{(u₁,u₂,u₃) ∈ P} O(u₁, u₂, u₃)    (185)

where

O(u₁, u₂, u₃) = { (R₁, R₂) : R₁ ≤ h(φ(2u₁)), R₂ ≤ h(φ(2u₂)), R₁ + R₂ ≤ h(u₃) + 1 − u₃ }    (186)

and the set P is defined in (110). We will now obtain a simpler characterization of O in terms of two variables (u₁, u₂) by showing that O ≡ O′, where

O′ = ⋃_{(u₁,u₂) ∈ S} O′(u₁, u₂)    (187)

where

O′(u₁, u₂) = { (R₁, R₂) : R₁ ≤ h(φ(2u₁)), R₂ ≤ h(φ(2u₂)), R₁ + R₂ ≤ h(f(2u₁, 2u₂)) + 1 − f(2u₁, 2u₂) }    (188)

The inclusion O′ ⊆ O is straightforward, by forcing u₃ = f(2u₁, 2u₂) in O. We will now show that O ⊆ O′. For this purpose, we will need the following lemma.

Lemma 4
The function

μ(s) = h(s) + 1 − s    (189)

is concave in s for s ∈ [0, 1] and takes its maximum value at s = 1/3. Moreover, the function μ(s) is increasing in s for s ∈ [0, 1/3] and decreasing in s for s ∈ [1/3, 1].

The proof of this lemma follows from the fact that both h(s) and 1 − s are concave in s, and from noting that μ′(s) = log₂((1 − s)/s) − 1 vanishes at s = 1/3. Now consider any arbitrary triple (u₁, u₂, u₃) ∈ P. We can classify any such triple into one of the following cases:

1. If f(2u₁, 2u₂) ≤ u₃ ≤ 1/2: for any such (u₁, u₂, u₃), there exists a pair (ū₁, ū₂) such that

u₁ ≤ ū₁ ≤ 1/4    (190)
u₂ ≤ ū₂ ≤ 1/4    (191)
u₃ = f(2ū₁, 2ū₂)    (192)

One such pair (ū₁, ū₂) can be obtained as follows. Using the fact that for a fixed u₂, f(2u₁, 2u₂) is increasing in u₁, we select ū₂ = u₂ and solve for u₁ ≤ ū₁ ≤ 1/4 for which f(2ū₁, 2u₂) = u₃. The required ū₁ is obtained as,

ū₁ = (1/4) ( 1 − (1 − 2u₃)² / (1 − 4u₂) )    (193)

For such a pair (ū₁, ū₂), the following inequalities hold,

h(φ(2u₂)) = h(φ(2ū₂))    (194)
h(φ(2u₁)) ≤ h(φ(2ū₁))    (195)
h(u₃) + 1 − u₃ = h(f(2ū₁, 2ū₂)) + 1 − f(2ū₁, 2ū₂)    (196)

2. If f(2u₁, 2u₂) ≤ 1/2 ≤ u₃ ≤ 1 − (u₁ + u₂), then we have by Lemma 4,

h(u₃) + 1 − u₃ ≤ h(1/2) + 1 − 1/2    (197)
= 3/2    (198)

Now consider the pair (ū₁, ū₂) = (1/4, 1/4), for which we have f(2ū₁, 2ū₂) = 1/2. Hence we have that,

h(φ(2u₁)) ≤ h(φ(2ū₁)) = 1    (199)
h(φ(2u₂)) ≤ h(φ(2ū₂)) = 1    (200)
h(u₃) + 1 − u₃ ≤ h(f(2ū₁, 2ū₂)) + 1 − f(2ū₁, 2ū₂) = 3/2    (201)

We have thus shown that for any triple (u₁, u₂, u₃), there exists a pair (ū₁, ū₂) such that O(u₁, u₂, u₃) ⊆ O′(ū₁, ū₂), which in turn implies that O ⊆ O′, and consequently O ≡ O′. Hence, we have an outer bound on the capacity region as given by O′.

The outer bound O′ is evaluated over the set of pairs (u₁, u₂) such that u₁, u₂ ∈ [0, 1/4]. For any such arbitrary pair (u₁, u₂), an input distribution which achieves the set of rate pairs specified by O′(u₁, u₂) is obtained by selecting |T| = 2, and

p₀ = p₁ = 1/2    (202)
q₁₀ = 1 − q₁₁ = φ(2u₁)    (203)
q₂₀ = 1 − q₂₁ = φ(2u₂)    (204)

The set of rates achievable by the distribution specified in (202)-(204) is obtained as,

R₁ ≤ H(X₁|T) = h(φ(2u₁))    (205)
R₂ ≤ H(X₂|T) = h(φ(2u₂))    (206)
R₁ + R₂ ≤ H(Y) = h(f(2u₁, 2u₂)) + 1 − f(2u₁, 2u₂)    (207)

This shows that the capacity region of the binary erasure MAC-FB can be obtained by a binary and uniform selection of the auxiliary random variable T. The capacity region of the binary erasure MAC with and without feedback, and the cut-set bound, are illustrated in Figure 5. It was shown in [12] that the sum-rate point on the boundary of the capacity region lies strictly below the "total cooperation" line. This is equivalent to saying that the cut-set bound is not tight at the sum-rate point. From our result, it is now clear that the cut-set bound is not tight for asymmetric rate pairs either. In fact, it is not tight at any boundary point where feedback increases capacity.

Moreover, our result also shows that a simple selection of a binary and uniform T is sufficient to evaluate the boundary of the capacity region of the binary erasure MAC-FB. Simple feedback strategies for a class of two-user MAC-FB were developed in [4].
It was shown there that, for the binary erasure MAC, these feedback strategies yield all rate points achievable with a binary selection of the auxiliary random variable T. Thus, our result shows that these feedback strategies are indeed optimal for the binary erasure MAC-FB, and that they yield all rates on the boundary of its feedback capacity region.
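The boundary evaluation above can be reproduced numerically. The sketch below assumes the closed form φ(2u) = (1 − √(1 − 4u))/2, i.e., the inverse of q ↦ q(1 − q) (the function φ is defined in the earlier sections, which are not part of this excerpt). It builds the distribution (202)-(204), checks that the mirrored construction makes P_Y(0) = P_Y(2), so that (207) holds with equality, and checks that the function μ(s) = h(s) + 1 − s of Lemma 4 peaks at s = 1/3 with value log₂3, the "total cooperation" sum rate.

```python
import math

def h(p):
    """Binary entropy in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

def phi(s):
    # assumed closed form: the root q <= 1/2 of q(1 - q) = s/2
    return (1.0 - math.sqrt(1.0 - 2.0 * s)) / 2.0

def boundary_point(u1, u2):
    """Rates (205)-(207) for the binary erasure MAC Y = X1 + X2 under
    T uniform on {0, 1}, q10 = 1 - q11 = phi(2 u1), q20 = 1 - q21 = phi(2 u2)."""
    q1, q2 = phi(2.0 * u1), phi(2.0 * u2)
    PY = [0.0, 0.0, 0.0]
    for a, b in ((q1, q2), (1.0 - q1, 1.0 - q2)):   # P(Xi = 0 | T = t), t = 0, 1
        PY[0] += 0.5 * a * b
        PY[1] += 0.5 * (a * (1.0 - b) + (1.0 - a) * b)
        PY[2] += 0.5 * (1.0 - a) * (1.0 - b)
    HY = sum(-p * math.log2(p) for p in PY if p > 0)
    return h(q1), h(q2), HY, PY

R1, R2, HY, PY = boundary_point(0.2, 0.15)
assert abs(PY[0] - PY[2]) < 1e-12             # mirrored construction is symmetric
f = PY[1]
assert abs(HY - (h(f) + 1.0 - f)) < 1e-12     # (207) holds with equality

mu = lambda s: h(s) + 1.0 - s
assert abs(mu(1.0 / 3.0) - math.log2(3.0)) < 1e-12   # Lemma 4: peak value log2(3)
```

Sweeping (u₁, u₂) over [0, 1/4]² then traces out the full feedback capacity region boundary described by (205)-(207).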
12 Conclusions
In this paper, we obtained a new outer bound on the capacity region of the MAC-FB by using the idea of dependence balance. We considered a binary additive noisy MAC-FB for which it is known that feedback increases capacity, but for which the feedback capacity region is not known. The best known outer bound on the feedback capacity region of this channel was the cut-set bound. We used the dependence balance bound to improve upon the cut-set bound at all points in the capacity region of this channel where feedback increases capacity. Our result is somewhat surprising once it is realized that the channel we considered in this paper is the discrete version of the two-user Gaussian MAC-FB considered by Ozarow in [2], where the cut-set bound was shown to be tight.

Our outer bound is difficult to evaluate due to an involved auxiliary random variable T. For binary inputs, the cardinality bound on T is |T| ≤

References

[1] A. P. Hekstra and F. M. J. Willems. Dependence balance bounds for single output two-way channels. IEEE Trans. on Information Theory, 35(1):44–53, January 1989.
[2] L. Ozarow. The capacity of the white Gaussian multiple access channel with feedback. IEEE Trans. on Information Theory, 30(4):623–629, July 1984.
[3] T. M. Cover and C. S. K. Leung. An achievable rate region for the multiple access channel with feedback. IEEE Trans. on Information Theory, 27(3):292–298, May 1981.
[4] G. Kramer. Feedback strategies for a class of two-user multiple access channels with feedback. IEEE Trans. on Information Theory, 45(6):2054–2059, September 1999.
[5] N. Gaarder and J. Wolf. The capacity region of a multiple-access discrete memoryless channel can increase with feedback. IEEE Trans. on Information Theory, 21(1):100–102, January 1975.
[6] J. P. M. Schalkwijk and T. Kailath. A coding scheme for additive noise channels with feedback-Part I: No bandwidth constraint. IEEE Trans. on Information Theory, 12(2):172–182, April 1966.
[7] G. Kramer. Directed Information for Channels with Feedback. Ph.D. dissertation, Swiss Federal Institute of Technology (ETH), Zurich, Switzerland, 1998.
[8] S. I. Bross and A. Lapidoth. An improved achievable rate region for the discrete memoryless two-user multiple-access channel with noiseless feedback. IEEE Trans. on Information Theory, 51(3):811–833, March 2005.
[9] F. M. J. Willems. The feedback capacity region of a class of discrete memoryless multiple access channels. IEEE Trans. on Information Theory, 28(1):93–95, January 1982.
[10] G. Kramer. Capacity results for the discrete memoryless network. IEEE Trans. on Information Theory, 49(1):4–21, January 2003.
[11] A. J. Vinck, W. L. M. Hoeks, and K. A. Post. On the capacity of the two-user M-ary multiple-access channel with feedback. IEEE Trans. on Information Theory, 31(4):540–543, July 1985.
[12] F. Willems. On multiple access channels with feedback. IEEE Trans. on Information Theory, 30(6):842–845, November 1984.
[13] T. M. Cover and J. A. Thomas. Elements of Information Theory. New York: Wiley, 1991.
[14] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.

[Figure 4.1: Illustration of our bounds for the capacity of the binary additive noisy MAC-FB. Legend: cut-set bound; MAC with no feedback; Cover-Leung bound; DB with Z = X₁; DB with Z = X₂.]
[Figure 4.2: An enlarged illustration of a portion of Figure 4.1.]
[Figure 5.1: Illustration of the capacity region of the binary erasure MAC-FB. Legend: cut-set bound; capacity region with feedback; capacity region without feedback.]
[Figure 5.2: An enlarged illustration of a portion of Figure 5.1.]