Cooperative Strategies for the Half-Duplex Gaussian Parallel Relay Channel: Simultaneous Relaying versus Successive Relaying
Seyed Saeed Changiz Rezaei, Shahab Oveis Gharan, Amir K. Khandani
Coding & Signal Transmission Laboratory, Department of Electrical & Computer Engineering, University of Waterloo, Waterloo, ON, N2L 3G1. Email: sschangi, shahab, [email protected]
Abstract
This paper investigates communication over a network composed of two half-duplex parallel relays with additive white Gaussian noise. Two protocols, namely Simultaneous and Successive relaying, associated with the two possible relay orderings, are proposed. The simultaneous relaying protocol is based on the Dynamic Decode-and-Forward (DDF) scheme. For the successive relaying protocol, (i) a Non-Cooperative scheme based on Dirty Paper Coding (DPC), and (ii) a Cooperative scheme based on Block Markov Encoding (BME) are considered. Furthermore, it is shown that a composite scheme employing BME at one relay and DPC at the other always achieves a better rate than the Cooperative scheme. A "Simultaneous-Successive Relaying scheme based on Dirty paper coding" (SSRD) is also proposed. The optimum ordering of the relays, and hence the capacity of the half-duplex Gaussian parallel relay channel, is derived in the low and high signal-to-noise ratio (SNR) regimes. In the low SNR regime, it is shown that under certain conditions on the channel coefficients, the ratio of the achievable rate of simultaneous relaying based on DDF to the cut-set bound tends to 1. On the other hand, as the SNR goes to infinity, it is proved that successive relaying based on DPC asymptotically achieves the capacity of the network.
I. INTRODUCTION
A. Motivation
The continuous growth of wireless communication has motivated information theorists to extend Shannon's information-theoretic arguments for the single-user channel to scenarios that involve communication among multiple users.
Financial support provided by Nortel, and the corresponding matching funds by the federal government (Natural Sciences and Engineering Research Council of Canada, NSERC) and the Province of Ontario (Ontario Centres of Excellence, OCE), is gratefully acknowledged.
In this regard, cooperative wireless communication has been the focus of attention in recent years. Due to the rapid decrease of transmitted signal power with distance, the idea of multi-hop communication has been proposed. In multi-hop communication, intermediate nodes act as relays to facilitate data transmission from the source to the destination. Using this technique saves battery power and increases the physical coverage area. Moreover, relays, by emulating a distributed transmit antenna, can provide spatial diversity and combat the multi-path fading effect of the wireless medium.

Motivated by practical constraints, half-duplex relays, which cannot transmit and receive at the same time in the same frequency band, are of great importance. Here, our goal is to study and analyze the performance limits of a half-duplex parallel relay channel.
B. History
The relay channel is a three-terminal network that was first introduced by van der Meulen in 1971 [1]. The most important capacity results for the relay channel were reported by Cover and El Gamal [2], where two relaying strategies are proposed. In one strategy, the relay decodes the transmitted message and forwards the re-encoded version to the destination, while in the other, the relay does not decode the message but sends the quantized received values to the destination.

Moreover, several works on multi-relay channels exist in the literature (see [3]–[11], [23], [29]–[36]). Schein in [3], [4] establishes upper and lower bounds on the capacity of a full-duplex parallel relay channel, in which the channel consists of a source, two relays, and a destination, with no direct link between the source and the destination, nor between the two relays. Generally, the best rate reported for the full-duplex Gaussian parallel relay channel is based on the Decode-and-Forward (DF) or Amplify-and-Forward (AF) schemes with time sharing [3], [4]. Xie and Kumar generalize the block Markov encoding scheme of [2] to a network of multiple relays [5]. Gastpar, Kramer, and Gupta extend the compress-and-forward scheme to the multiple relay channel by introducing the concept of antenna polling in [6]–[8]. In [9], Amichai, Shamai, Steinberg, and Kramer consider a parallel relay setup in which a nomadic source sends its information to a remote destination via relays with lossless links to the destination. They investigate the case in which the relays have no decoding capability, so the signals received at the relays must be compressed; the authors also fully characterize the capacity of this case for the Gaussian channel. In [10], Maric and Yates investigate DF and AF schemes in a parallel relay network. Motivated by applications in sensor networks, they assume large bandwidth resources allowing orthogonal transmissions at different nodes.
They characterize the optimum resource allocation for AF and DF and show that the wide-band regime minimizes the energy cost per information bit in DF, while AF should operate in the band-limited regime to achieve the best rate. Razaghi and Yu in [11] propose a parity-forwarding scheme for full-duplex multiple relay networks. They show that parity forwarding can achieve the capacity of a new form of degraded relay networks.

Radios that can receive and transmit simultaneously in the same frequency band require complex and expensive components [18]. Hence, Khojastepour and Aazhang in [13], [14] refer to the half-duplex relay as a "Cheap Relay". Recently, half-duplex relaying has drawn a great deal of attention (see [13]–[19], [23], [29]–[36]). Zahedi and El Gamal consider two different cases of the frequency-division Gaussian relay channel, deriving lower and upper bounds on the capacity [15]. They also derive a single-letter characterization of the capacity of the frequency-division additive white Gaussian noise (AWGN) relay channel with a simple linear relaying scheme [16], [17]. The problem of time-division relaying is also considered by Host-Madsen and Zhang [18]. Considering fading scenarios and assuming channel state information (CSI), they study upper and lower bounds on the outage capacity and the ergodic capacity. In [19], Liang and Veeravalli present a Gaussian orthogonal relay model, in which the relay-to-destination channel is orthogonal to the source-to-relay and source-to-destination channels. They show that when the source-to-relay channel is better than the source-to-destination channel and the signal-to-noise ratio (SNR) of the relay-to-destination channel is less than a given threshold, optimizing the resource allocation causes the lower and upper bounds to coincide.
C. Contributions and Relation to Previous Works
In this paper, we study transmission strategies for a network with a source, a destination, and two half-duplex relays with additive white Gaussian noise, which cooperate with each other to facilitate data transmission from the source to the destination. Furthermore, it is assumed that no direct link exists between the source and the destination.

Half-duplex relaying in multiple relay networks is studied in [23], [29]–[36]. Gastpar in [23] shows that in a Gaussian parallel relay channel with an infinite number of relays, the optimum coding scheme is AF. Rankov and Wittneben in [29], [30] further study the problem of half-duplex relaying in a two-hop communication scenario. In their study, they also consider a parallel relay setup with two relays, where there is no direct link between the source and the destination, while there exists a link between the relays. Their relaying protocols are based on either AF or DF, in which the relays successively forward their messages from the source to the destination. We call this protocol "Successive Relaying" in the sequel. Xue and Sandhu in [31] further study different half-duplex relaying protocols for the Gaussian parallel relay channel. Since they assume that there is no link between the relays, they refer to their parallel channel as a "Diamond Relay Channel".

In this work, our primary objective is to find the best ordering of the relays in the intended setup. We consider two relaying protocols, i.e., simultaneous relaying versus successive relaying, associated with the two possible relay orderings. For simultaneous relaying, each relay exploits "Dynamic DF (DDF)". It should be noted that the DDF scheme considered here is slightly different from the DDF introduced in [34] and [35]. In those works, the DDF scheme is applied to a multiple relay network in which the nodes only have the CSI of their receiving channel. In the DDF scheme described in [34], the source broadcasts the message to all the network nodes during the whole period of transmission, and each relay listens to the transmitted signals of the source and the other relays until it can decode the transmitted message; it then transmits its signal coherently with the source and the other active relays in the remaining time. However, in our setup, all the nodes are assumed to know all the channel coefficients. Therefore, in a fixed pre-assigned portion of the time, the relays receive the signal transmitted from the source, and in the remaining time they jointly transmit the re-encoded version of the decoded message. In other words, the relays operate in a synchronous manner.
For successive relaying, we study a Non-Cooperative scheme based on "Dirty Paper Coding (DPC)" and a Cooperative scheme based on "Block Markov Encoding (BME)". It is worth noting that the authors in [36] also propose a successive relaying protocol for the setup with two parallel relays and direct links between the relays and between the source and the destination. They propose simple repetition coding at the relays and show that their scheme can recover the loss in the multiplexing gain while achieving a diversity gain of 2.

We derive the optimum relay ordering in the low and high SNR regimes. In the low SNR regime, under certain channel conditions, we show that the ratio of the achievable rate of DDF for simultaneous relaying to the cut-set bound tends to one. On the other hand, in the high SNR regime, we prove that the proposed DPC for successive relaying asymptotically achieves the capacity.

After this work was completed, we became aware of [32], which independently proposed an achievable rate based on the combination of superposition coding, BME, and DPC. In their scheme, the intended message $w$ is split into a message $w_r$, which is transmitted to the destination by exploiting cooperation between the relays, and a message $w_d$, which is transmitted to the destination without using any cooperation between the relays. Hence, the signal associated with $w_d$, transmitted by one relay, can be considered as interference at the other relay. $w_r$ is transmitted by using BME, and $w_d$ is transmitted by employing DPC. Therefore, in their general scheme, the signals associated with these two messages are superimposed and transmitted. As the channel between the two relays becomes strong, their proposed scheme reduces to BME. On the other hand, as the channel becomes weak, their proposed scheme reduces to DPC.

Unlike [32], in which the authors only consider successive relaying and propose a combined BME and DPC scheme, as the main result of this paper, the simultaneous and successive relaying protocols are combined, and a "Simultaneous-Successive Relaying scheme based on Dirty paper coding" (SSRD) with a new achievable rate is proposed. It is shown that in the low SNR regime and under certain channel conditions, the SSRD scheme reduces to simultaneous relaying based on DDF, while in the high SNR regime, when the ratios of the relay powers to the source power remain constant, it reduces to successive relaying based on DPC (to achieve the capacity).

Besides this main result, some other results obtained in this paper are as follows:
• Two different types of decoding at the destination, i.e., successive and backward decoding, are proposed for the BME scheme. We prove that the achievable rate of BME with backward decoding is greater than or equal to that of BME with successive decoding, i.e., $C_{BME-back}^{low} \geq C_{BME-succ}^{low}$.
• It is proved that BME with backward decoding leads to a simple strategy in which at most one of the relays is required to cooperate with the other relay in sending the bin index of the other relay's message. Accordingly, in the Gaussian case, the combination of BME at one relay and DPC at the other relay always achieves a better rate than simple BME.
• In the degraded case, where the destination receives a degraded version of the received signals at the relays, BME with backward decoding achieves the successive cut-set bound.

The rest of the paper is organized as follows. In Section II, the system model is introduced. In Section III, the achievable rates and coding schemes for a half-duplex relay network are derived. Optimality results are discussed in Section IV. Simulation results are presented in Section V. Finally, Section VI concludes the paper.
D. Notation
Throughout the paper, the superscript $H$ stands for the matrix operation of conjugate transposition. Lowercase bold letters and regular letters represent vectors and scalars, respectively. For any two functions $f(n)$ and $g(n)$, $f(n) = O(g(n))$ is equivalent to $\lim_{n\to\infty} \left| \frac{f(n)}{g(n)} \right| < \infty$, and $f(n) = \Theta(g(n))$ is equivalent to $\lim_{n\to\infty} \frac{f(n)}{g(n)} = c$, where $0 < c < \infty$. Moreover, $C(x) \triangleq \frac{1}{2}\log(1+x)$. Furthermore, for the sake of brevity, $A_\epsilon^{(n)}$ denotes the set of weakly jointly typical sequences for any intended set of random variables.

II. SYSTEM MODEL
We consider a Gaussian network that consists of a source, two half-duplex relays, and a destination, with no direct link between the source and the destination. Here we define four time slots according to the transmitting and receiving modes of the relays (see Fig. 1), where $t_b$ denotes the duration of time slot $b$ ($\sum_{b=1}^{4} t_b = 1$). Nodes 0, 1, 2, and 3 represent the source, relay 1, relay 2, and the destination, respectively. Moreover, the transmitted and received signals at node $a$ during time slot $b$ are denoted by $\mathbf{x}_a^{(b)}$ and $\mathbf{y}_a^{(b)}$, respectively. Hence, at each node $c \in \{1, 2, 3\}$, we have

$$\mathbf{y}_c^{(b)} = \sum_{a \in \{0,1,2\}} h_{ac}\, \mathbf{x}_a^{(b)} + \mathbf{z}_c^{(b)}, \qquad (1)$$

where the $h_{ac}$'s denote the channel coefficients from node $a$ to node $c$, and $\mathbf{z}_c^{(b)}$ is the AWGN term with zero mean and variance 1 per dimension. Following the transmission strategies in Fig. 1, we have

$$\mathbf{y}_1^{(1)} = h_{01}\mathbf{x}_0^{(1)} + h_{21}\mathbf{x}_2^{(1)} + \mathbf{z}_1^{(1)}, \qquad (2)$$
$$\mathbf{y}_3^{(1)} = h_{23}\mathbf{x}_2^{(1)} + \mathbf{z}_3^{(1)}, \qquad (3)$$
$$\mathbf{y}_2^{(2)} = h_{02}\mathbf{x}_0^{(2)} + h_{12}\mathbf{x}_1^{(2)} + \mathbf{z}_2^{(2)}, \qquad (4)$$
$$\mathbf{y}_3^{(2)} = h_{13}\mathbf{x}_1^{(2)} + \mathbf{z}_3^{(2)}, \qquad (5)$$
$$\mathbf{y}_k^{(3)} = h_{0k}\mathbf{x}_0^{(3)} + \mathbf{z}_k^{(3)}, \quad k \in \{1, 2\}, \qquad (6)$$
$$\mathbf{y}_3^{(4)} = \sum_{k=1}^{2} h_{k3}\mathbf{x}_k^{(4)} + \mathbf{z}_3^{(4)}. \qquad (7)$$

Throughout the paper, we assume that $h_{01} \geq h_{02}$ unless specified otherwise, and from the reciprocity assumption we have $h_{12} = h_{21}$. Furthermore, the power constraints $P_0$, $P_1$, and $P_2$ should be satisfied at the source, the first relay, and the second relay, respectively. Hence, denoting the power consumption of node $a$ in time slot $b$ by $P_a^{(b)} = E\big[\mathbf{x}_a^{(b)H}\mathbf{x}_a^{(b)}\big]$, we have

$$P_0^{(1)} + P_0^{(2)} + P_0^{(3)} = P_0, \qquad (8)$$
$$P_1^{(2)} + P_1^{(4)} = P_1, \qquad P_2^{(1)} + P_2^{(4)} = P_2.$$
Fig. 1. System model. (a) Time slot 1 with duration $t_1$: the source and the second relay transmit $\mathbf{x}_0^{(1)}$ and $\mathbf{x}_2^{(1)}$; the first relay and the destination receive $\mathbf{y}_1^{(1)}$ and $\mathbf{y}_3^{(1)}$, respectively. (b) Time slot 2 with duration $t_2$: the source and the first relay transmit $\mathbf{x}_0^{(2)}$ and $\mathbf{x}_1^{(2)}$; the second relay and the destination receive $\mathbf{y}_2^{(2)}$ and $\mathbf{y}_3^{(2)}$, respectively. (c) Time slot 3 with duration $t_3$: the source transmits $\mathbf{x}_0^{(3)}$; the first and second relays receive $\mathbf{y}_1^{(3)}$ and $\mathbf{y}_2^{(3)}$, respectively. (d) Time slot 4 with duration $t_4$: the relays transmit $\mathbf{x}_1^{(4)}$ and $\mathbf{x}_2^{(4)}$; the destination receives $\mathbf{y}_3^{(4)}$.
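To fix ideas, the signal model of time slots 1 and 4 can be sketched numerically. In the following Python snippet, all channel coefficients, powers, and block lengths are illustrative placeholders (not values from the paper); the subscripts follow the node labels of the system model (0: source, 1 and 2: relays, 3: destination).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative placeholder channel coefficients h_ac
# (node 0: source, nodes 1 and 2: relays, node 3: destination).
h01, h02, h12, h13, h23 = 1.0, 0.8, 0.6, 0.9, 0.7
h21 = h12  # reciprocity of the inter-relay link

n = 1000  # channel uses per slot (illustrative)

def awgn(size):
    # zero-mean AWGN with variance 1 per dimension
    return rng.standard_normal(size)

# Time slot 1: source -> relay 1 while relay 2 -> destination.
x0_1 = rng.standard_normal(n)  # stand-in for the source codeword
x2_1 = rng.standard_normal(n)  # stand-in for relay 2's codeword
y1_1 = h01 * x0_1 + h21 * x2_1 + awgn(n)  # received at relay 1
y3_1 = h23 * x2_1 + awgn(n)               # received at the destination

# Time slot 4: both relays -> destination.
x1_4, x2_4 = rng.standard_normal(n), rng.standard_normal(n)
y3_4 = h13 * x1_4 + h23 * x2_4 + awgn(n)
```

The half-duplex constraint is visible in the bookkeeping: in slot 1, relay 1 only appears on the receive side, while relay 2 only appears on the transmit side.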
III. ACHIEVABLE RATES AND CODING SCHEMES

In this section, we propose two cooperative protocols, i.e., the successive and simultaneous relaying protocols, for a half-duplex Gaussian parallel relay channel.
A. Successive Relaying Protocol

In the successive relaying protocol, the relays are not allowed to receive and transmit simultaneously, i.e., $t_3 = t_4 = 0$, and the relations between the transmitted and received signals at the relays and at the destination follow from (2)-(5). For the successive relaying protocol, we propose a Non-Cooperative and a Cooperative coding scheme in the sequel. In the proposed schemes, time is divided into odd and even slots with durations $t_1$ and $t_2$, respectively. Accordingly, in each odd and even time slot, the source transmits a new message to one of the relays, and the destination receives a new message from the other relay, successively (see Fig. 2).

Fig. 2.
Information flow transfer for successive relaying protocol for two relays.
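As a toy illustration of the information flow in Fig. 2, the sketch below schedules $B$ messages through the two relays in alternating slots; the message labels and buffer bookkeeping are purely illustrative, not part of the coding schemes defined below.

```python
# Toy schedule: in odd slots the source feeds relay 1 while relay 2 forwards
# its previously decoded message; in even slots the roles are swapped, so
# every message reaches the destination one slot after a relay decodes it.
B = 6  # number of source messages (illustrative)
relay_buffer = {1: None, 2: None}
delivered = []

for b in range(1, B + 1):
    rx_relay = 1 if b % 2 == 1 else 2  # relay listening to the source in slot b
    tx_relay = 2 if b % 2 == 1 else 1  # relay transmitting to the destination
    if relay_buffer[tx_relay] is not None:
        delivered.append(relay_buffer[tx_relay])
        relay_buffer[tx_relay] = None
    relay_buffer[rx_relay] = f"w({b})"  # the listening relay decodes the new message

# Two extra slots flush the messages still held by the relays.
delivered += [m for m in relay_buffer.values() if m is not None]
```

The one-slot pipeline delay is why the destination receives messages in order but shifted by a slot, matching the alternating pattern of Fig. 2.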
1) Non-Cooperative Coding: In the Non-Cooperative coding scheme, each relay treats the other relay's signal as interference. Since the source knows each relay's message, it can apply the Gelfand-Pinsker coding scheme to transmit its message to the other relay. The following theorem gives the achievable rate of this scheme.
Fig. 3. Successive relaying protocol based on Non-Cooperative Coding.
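The Non-Cooperative scheme rests on Gelfand-Pinsker coding, whose Gaussian instance is Costa's dirty paper coding: interference known non-causally at the transmitter incurs no rate loss. The following minimal sketch contrasts this standard result with treating the interference as noise; the powers $P$, $N$, $Q$ are illustrative placeholders, not parameters from the paper.

```python
import math

def C(x):
    # Gaussian capacity function, C(x) = (1/2) * log2(1 + x)
    return 0.5 * math.log2(1.0 + x)

P, N = 4.0, 1.0  # transmit power and noise power (illustrative placeholders)

results = {}
for Q in [0.0, 1.0, 10.0, 100.0]:  # power of interference known at the encoder
    rate_as_noise = C(P / (N + Q))  # receiver treats the interference as noise
    rate_dpc = C(P / N)             # Costa: known interference is pre-cancelled
    results[Q] = (rate_as_noise, rate_dpc)

# Costa's auxiliary U = X + alpha * S uses the MMSE scaling alpha = P / (P + N).
alpha = P / (P + N)
```

The DPC rate is flat in $Q$ while the treat-as-noise rate collapses as the interference grows, which is exactly what lets the source pre-cancel one relay's signal at the other relay in this scheme.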
Theorem 1
For the half-duplex parallel relay channel, assuming successive relaying, the following rate $C_{DPC}^{low}$ is achievable:

$$C_{DPC}^{low} = \max_{0 \le t_1, t_2,\ t_1+t_2=1} R^{(1)} + R^{(2)}, \qquad (9)$$

subject to:

$$R^{(1)} \le \min\Big( t_1 \big( I(U_0^{(1)}; Y_1^{(1)}) - I(U_0^{(1)}; X_2^{(1)}) \big),\ t_2\, I(X_1^{(2)}; Y_3^{(2)}) \Big), \qquad (10)$$
$$R^{(2)} \le \min\Big( t_2 \big( I(U_0^{(2)}; Y_2^{(2)}) - I(U_0^{(2)}; X_1^{(2)}) \big),\ t_1\, I(X_2^{(1)}; Y_3^{(1)}) \Big), \qquad (11)$$

with probabilities:

$$p(x_2^{(1)}, u_0^{(1)}, x_0^{(1)}) = p(x_2^{(1)})\, p(u_0^{(1)} \mid x_2^{(1)})\, p(x_0^{(1)} \mid u_0^{(1)}, x_2^{(1)}),$$
$$p(x_1^{(2)}, u_0^{(2)}, x_0^{(2)}) = p(x_1^{(2)})\, p(u_0^{(2)} \mid x_1^{(2)})\, p(x_0^{(2)} \mid u_0^{(2)}, x_1^{(2)}).$$
Proof:
See Appendix A.

From Theorem 1, the achievable rate of the proposed scheme for the Gaussian case can be obtained as follows.

Corollary 1.
For the half-duplex Gaussian parallel relay channel, assuming the successive relaying protocol with power constraints at the source and at each relay, DPC achieves the following rate:

$$C_{DPC}^{low} = \max \big( R^{(1)} + R^{(2)} \big), \qquad (12)$$

subject to:

$$R^{(1)} \le \min\left( t_1 C\!\left( \frac{h_{01}^2 P_0^{(1)}}{t_1} \right),\ t_2 C\!\left( \frac{h_{13}^2 P_1}{t_2} \right) \right),$$
$$R^{(2)} \le \min\left( t_2 C\!\left( \frac{h_{02}^2 P_0^{(2)}}{t_2} \right),\ t_1 C\!\left( \frac{h_{23}^2 P_2}{t_1} \right) \right),$$
$$P_0^{(1)} + P_0^{(2)} = P_0, \quad t_1 + t_2 = 1, \quad 0 \le t_1, t_2, P_0^{(1)}, P_0^{(2)}.$$

Proof:
From Costa’s Dirty Paper Coding [28], by having U (1)0 = X (1)0 + h h P (1)0 h P (1)0 + t X (1)2 , (13) U (2)0 = X (2)0 + h h P (2)0 h P (2)0 + t X (2)1 . (14)where X (1)0 ∼ N (0 , P (1)0 ) , X (2)0 ∼ N (0 , P (2)0 ) , X (1)2 ∼ N (0 , P ) , and X (2)1 ∼ N (0 , P ) , and applying them toTheorem 1, we obtain corollary 1. (ˆ s ( b (cid:0) ; ˆ w ( b (cid:0) ) x (2)1 ( w ( b (cid:0) | s ( b (cid:0) ) ;u (2)1 ( s ( b (cid:0) )(ˆ w ( b (cid:0) ; ˆ w ( b ) ) x (2)0 ( w ( b ) | w ( b (cid:0) ; s ( b (cid:0) )(ˆ w ( b (cid:0) ; ˆ w ( b ) ) x (1)2 ( w ( b (cid:0) | s ( b (cid:0) ) ; u (1)2 ( s ( b (cid:0) )(ˆ s ( b (cid:0) ; ˆ w ( b (cid:0) ) x (1)0 ( w ( b ) | w ( b (cid:0) ;s ( b (cid:0) ) Fig. 4.
Successive relaying protocol based on Cooperative Coding.
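A brute-force numerical sketch of the rate in Corollary 1: grid search over the time split $t_1$ and the source power split. The channel gains and power budgets below are illustrative placeholders, and $C(x) = \frac{1}{2}\log_2(1+x)$ as in the notation section.

```python
import numpy as np

def C(x):
    # Gaussian capacity function, C(x) = (1/2) * log2(1 + x)
    return 0.5 * np.log2(1.0 + x)

# Illustrative placeholder gains (h01: source -> relay 1, h13: relay 1 -> destination, ...)
h01, h02, h13, h23 = 1.0, 0.8, 0.9, 0.7
P0, P1, P2 = 4.0, 2.0, 2.0  # power budgets of the source and the two relays

best = 0.0
for t1 in np.linspace(0.01, 0.99, 99):
    t2 = 1.0 - t1
    for P01 in np.linspace(0.0, P0, 101):
        P02 = P0 - P01
        # Message relayed by relay 1: decoded in slot 1, forwarded in slot 2.
        R1 = min(t1 * C(h01**2 * P01 / t1), t2 * C(h13**2 * P1 / t2))
        # Message relayed by relay 2: decoded in slot 2, forwarded in slot 1.
        R2 = min(t2 * C(h02**2 * P02 / t2), t1 * C(h23**2 * P2 / t1))
        best = max(best, R1 + R2)
```

Thanks to DPC, each source-to-relay term is interference-free; the `min` in each rate captures the bottleneck between the source-to-relay hop and the relay-to-destination hop.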
2) Cooperative Coding:
In this type of coding scheme, we assume that, in each time slot, the receiving relay decodes not only the new message transmitted from the source, but also the previous message transmitted from the transmitting relay (see Figs. 2 and 4). Our proposed coding scheme is based on binning, superposition coding, and Block Markov Encoding. The source sends $B$ messages $w^{(1)}, w^{(2)}, \cdots, w^{(B)}$ in $B+2$ time slots.

Generally, this scheme can be described as follows (see Figs. 4 and 5). In time slot $b$, relay $((b+1) \bmod 2) + 1$ decodes the messages $w^{(b)}$ and $w^{(b-1)}$ transmitted from the source and the other relay, respectively. In time slot $b+1$, it broadcasts $w^{(b)}$ and the bin index $s^{(b-1)}$ of $w^{(b-1)}$ to the destination, using the binning function defined next.

Fig. 5. Decode-and-forward for the successive relaying protocol.

Definition (The Binning Function): The binning function $f_{Bin}^{(((b+1) \bmod 2)+1)}: \mathcal{W} = \{1, 2, \cdots, 2^{nR^{(((b+1)\bmod 2)+1)}}\} \longrightarrow \{1, 2, \ldots, 2^{n r_{Bin}^{(((b+1)\bmod 2)+1)}}\}$ is defined by $f_{Bin}^{(((b+1)\bmod 2)+1)}(w^{(b-1)}) = s^{(b-1)}$, where $f_{Bin}^{(((b+1)\bmod 2)+1)}(\cdot)$ assigns a uniformly distributed random integer between $1$ and $2^{n r_{Bin}^{(((b+1)\bmod 2)+1)}}$ independently to each member of $\mathcal{W}$.

As indicated in Fig. 5, in the first time slot, the source transmits the codeword $x_0^{(1)}(w^{(1)}|1)$ to the first relay, while the second relay transmits a doubly indexed codeword $x_2^{(1)}(1|1)$ and the codeword $u_2^{(1)}(1)$ to the first relay and to the destination. In the second time slot, the source transmits the codeword $x_0^{(2)}(w^{(2)}|w^{(1)}, 1)$ to the second relay, and, having decoded the message $w^{(1)}$, the first relay broadcasts the codewords $x_1^{(2)}(w^{(1)}|1)$ and $u_1^{(2)}(1)$ to the second relay and to the destination. It should be noted that the destination cannot decode the message $w^{(1)}$ at the end of this time slot; however, the second relay decodes the messages $w^{(1)}$ and $w^{(2)}$. Using the binning function, it finds the bin index of $w^{(1)}$ according to $s_1^{(1)} = f_{Bin}^{(1)}(w^{(1)})$. In the third time slot, the source transmits the codeword $x_0^{(1)}(w^{(3)}|w^{(2)}, s_1^{(1)})$ to the first relay, and the second relay broadcasts the codewords $x_2^{(1)}(w^{(2)}|s_1^{(1)})$ and $u_2^{(1)}(s_1^{(1)})$ to the first relay and to the destination.

Two types of decoding can be used at the destination: successive decoding and backward decoding. Successive decoding can be described as follows. At the end of the $b$-th time slot, the destination cannot decode the message $w^{(b-1)}$; however, having decoded the bin index $s^{(b-1)}$ from the received vector of the $b$-th time slot, it can decode the message $w^{(b-1)}$ from $s^{(b-1)}$ and the received vector of the $(b-1)$-th time slot. On the other hand, backward decoding can be explained as follows.
Having received the sequence of the $(B+2)$-th time slot, the destination starts decoding the intended messages. In time slot $B+2$, one of the relays transmits the dummy message "1" along with the bin index of the message $w^{(B)}$ to the destination. Having received this bin index, the destination decodes it and then backwardly decodes the messages $w^{(b)}$, $b = B, B-1, \cdots, 1$, and their bin indices. The following theorem gives the achievable rate of the proposed scheme.

Theorem 2.
For the half-duplex parallel relay channel, assuming successive relaying, the BME scheme achieves
the rates $C_{BME-succ}^{low}$ and $C_{BME-back}^{low}$ using successive and backward decoding, respectively:

$$C_{BME-succ}^{low} = R^{(1)} + R^{(2)} \le \max_{0 \le t_1, t_2,\ t_1+t_2=1} \min\Big( \min\big( t_1 I(X_0^{(1)}; Y_1^{(1)} \mid X_2^{(1)}, U_2^{(1)}),\ t_2 I(X_1^{(2)}; Y_3^{(2)} \mid U_1^{(2)}) + t_1 I(U_2^{(1)}; Y_3^{(1)}) \big)$$
$$\quad + \min\big( t_1 I(X_2^{(1)}; Y_3^{(1)} \mid U_2^{(1)}) + t_2 I(U_1^{(2)}; Y_3^{(2)}),\ t_2 I(X_0^{(2)}; Y_2^{(2)} \mid X_1^{(2)}, U_1^{(2)}) \big),$$
$$\quad t_1 I(X_0^{(1)}, X_2^{(1)}; Y_1^{(1)} \mid U_2^{(1)}),\ t_2 I(X_0^{(2)}, X_1^{(2)}; Y_2^{(2)} \mid U_1^{(2)}) \Big), \qquad (15)$$

with probabilities

$$p(x_0^{(1)}, x_2^{(1)}, u_2^{(1)}) = p(u_2^{(1)})\, p(x_2^{(1)} \mid u_2^{(1)})\, p(x_0^{(1)} \mid x_2^{(1)}, u_2^{(1)}),$$
$$p(x_0^{(2)}, x_1^{(2)}, u_1^{(2)}) = p(u_1^{(2)})\, p(x_1^{(2)} \mid u_1^{(2)})\, p(x_0^{(2)} \mid x_1^{(2)}, u_1^{(2)}),$$
$$p(x_2^{(1)}, u_2^{(1)}) = p(u_2^{(1)})\, p(x_2^{(1)} \mid u_2^{(1)}), \quad p(x_1^{(2)}, u_1^{(2)}) = p(u_1^{(2)})\, p(x_1^{(2)} \mid u_1^{(2)}).$$

$$C_{BME-back}^{low} = R^{(1)} + R^{(2)} \le \max_{0 \le t_1, t_2,\ t_1+t_2=1} \min\Big( t_1 I(X_0^{(1)}, X_2^{(1)}; Y_1^{(1)}),\ t_2 I(X_0^{(2)}, X_1^{(2)}; Y_2^{(2)}),$$
$$\quad t_1 I(X_0^{(1)}; Y_1^{(1)} \mid X_2^{(1)}) + t_2 I(X_0^{(2)}; Y_2^{(2)} \mid X_1^{(2)}),\ t_1 I(X_2^{(1)}; Y_3^{(1)}) + t_2 I(X_1^{(2)}; Y_3^{(2)}) \Big), \qquad (16)$$

with probabilities

$$p(x_0^{(1)}, x_2^{(1)}) = p(x_2^{(1)})\, p(x_0^{(1)} \mid x_2^{(1)}), \quad p(x_0^{(2)}, x_1^{(2)}) = p(x_1^{(2)})\, p(x_0^{(2)} \mid x_1^{(2)}).$$

Proof:
See Appendix B.

The following set of propositions and corollaries investigates the Non-Cooperative and Cooperative schemes and compares them with each other.
Proposition 1
BME with backward decoding achieves a rate at least as large as that of BME with successive decoding, i.e., $C_{BME-back}^{low} \ge C_{BME-succ}^{low}$.

Proof: For the first term of the minimization in (15), we have

$$\min\big( t_1 I(X_0^{(1)}; Y_1^{(1)} \mid X_2^{(1)}, U_2^{(1)}),\ t_2 I(X_1^{(2)}; Y_3^{(2)} \mid U_1^{(2)}) + t_1 I(U_2^{(1)}; Y_3^{(1)}) \big)$$
$$+ \min\big( t_1 I(X_2^{(1)}; Y_3^{(1)} \mid U_2^{(1)}) + t_2 I(U_1^{(2)}; Y_3^{(2)}),\ t_2 I(X_0^{(2)}; Y_2^{(2)} \mid X_1^{(2)}, U_1^{(2)}) \big)$$
$$\le \min\big( t_1 I(X_0^{(1)}; Y_1^{(1)} \mid X_2^{(1)}, U_2^{(1)}) + t_2 I(X_0^{(2)}; Y_2^{(2)} \mid X_1^{(2)}, U_1^{(2)}),\ t_1 I(X_2^{(1)}, U_2^{(1)}; Y_3^{(1)}) + t_2 I(X_1^{(2)}, U_1^{(2)}; Y_3^{(2)}) \big). \qquad (17)$$

Let us focus on $t_1 I(X_0^{(1)}; Y_1^{(1)} \mid X_2^{(1)}, U_2^{(1)}) + t_2 I(X_0^{(2)}; Y_2^{(2)} \mid X_1^{(2)}, U_1^{(2)})$:

$$t_1 I(X_0^{(1)}; Y_1^{(1)} \mid X_2^{(1)}, U_2^{(1)}) + t_2 I(X_0^{(2)}; Y_2^{(2)} \mid X_1^{(2)}, U_1^{(2)})$$
$$\stackrel{(a)}{=} t_1 H(Y_1^{(1)} \mid X_2^{(1)}, U_2^{(1)}) - t_1 H(Y_1^{(1)} \mid X_0^{(1)}, X_2^{(1)}) + t_2 H(Y_2^{(2)} \mid X_1^{(2)}, U_1^{(2)}) - t_2 H(Y_2^{(2)} \mid X_0^{(2)}, X_1^{(2)})$$
$$\stackrel{(b)}{\le} t_1 H(Y_1^{(1)} \mid X_2^{(1)}) - t_1 H(Y_1^{(1)} \mid X_0^{(1)}, X_2^{(1)}) + t_2 H(Y_2^{(2)} \mid X_1^{(2)}) - t_2 H(Y_2^{(2)} \mid X_0^{(2)}, X_1^{(2)})$$
$$\stackrel{(c)}{=} t_1 I(X_0^{(1)}; Y_1^{(1)} \mid X_2^{(1)}) + t_2 I(X_0^{(2)}; Y_2^{(2)} \mid X_1^{(2)}). \qquad (18)$$

Here, $(a)$ and $(c)$ follow from the definition of mutual information and the fact that $U_2^{(1)} \to (X_0^{(1)}, X_2^{(1)}) \to Y_1^{(1)}$ and $U_1^{(2)} \to (X_0^{(2)}, X_1^{(2)}) \to Y_2^{(2)}$ form Markov chains, and $(b)$ follows from the fact that conditioning reduces entropy. Inequality $(b)$ becomes an equality if $p(x_0^{(1)}, x_2^{(1)}, u_2^{(1)}) = p(u_2^{(1)})\, p(x_2^{(1)})\, p(x_0^{(1)} \mid x_2^{(1)})$ and $p(x_0^{(2)}, x_1^{(2)}, u_1^{(2)}) = p(u_1^{(2)})\, p(x_1^{(2)})\, p(x_0^{(2)} \mid x_1^{(2)})$. Using a similar argument for $t_1 I(X_2^{(1)}, U_2^{(1)}; Y_3^{(1)}) + t_2 I(X_1^{(2)}, U_1^{(2)}; Y_3^{(2)})$, $t_1 I(X_0^{(1)}, X_2^{(1)}; Y_1^{(1)} \mid U_2^{(1)})$, and $t_2 I(X_0^{(2)}, X_1^{(2)}; Y_2^{(2)} \mid U_1^{(2)})$ in (15) and (17), the fact that $U_2^{(1)} \to X_2^{(1)} \to Y_3^{(1)}$, $U_1^{(2)} \to X_1^{(2)} \to Y_3^{(2)}$, $U_2^{(1)} \to (X_0^{(1)}, X_2^{(1)}) \to Y_1^{(1)}$, and $U_1^{(2)} \to (X_0^{(2)}, X_1^{(2)}) \to Y_2^{(2)}$ form Markov chains, and Appendix B, along with comparing $C_{BME-succ}^{low}$ and $C_{BME-back}^{low}$ in Theorem 2, we have $C_{BME-back}^{low} \ge C_{BME-succ}^{low}$.

From Theorem 2, we have the following corollary for the Gaussian case.
Corollary 2. For the half-duplex Gaussian parallel relay channel, assuming the successive relaying protocol with power constraints at the source and at each relay, BME achieves the following rates:

$$C_{BME-succ}^{low} = \max \min\left( C_1^{BME} + C_2^{BME},\ t_1 C\!\left( \frac{h_{01}^2 P_0^{(1)} + h_{21}^2 \theta_1 P_2 + 2 h_{01} h_{21} \sqrt{\bar{\alpha}_1 \theta_1 P_0^{(1)} P_2}}{t_1} \right),\ t_2 C\!\left( \frac{h_{02}^2 P_0^{(2)} + h_{12}^2 \theta_2 P_1 + 2 h_{02} h_{12} \sqrt{\bar{\alpha}_2 \theta_2 P_0^{(2)} P_1}}{t_2} \right) \right). \qquad (19)$$

$$C_{BME-back}^{low} = \max \min\left( t_1 C\!\left( \frac{h_{01}^2 P_0^{(1)} + h_{21}^2 P_2 + 2 h_{01} h_{21} \sqrt{\bar{\beta}_1 P_0^{(1)} P_2}}{t_1} \right),\ t_2 C\!\left( \frac{h_{02}^2 P_0^{(2)} + h_{12}^2 P_1 + 2 h_{02} h_{12} \sqrt{\bar{\beta}_2 P_0^{(2)} P_1}}{t_2} \right),\right.$$
$$\left.\ t_1 C\!\left( \frac{h_{01}^2 \beta_1 P_0^{(1)}}{t_1} \right) + t_2 C\!\left( \frac{h_{02}^2 \beta_2 P_0^{(2)}}{t_2} \right),\ t_1 C\!\left( \frac{h_{23}^2 P_2}{t_1} \right) + t_2 C\!\left( \frac{h_{13}^2 P_1}{t_2} \right) \right). \qquad (20)$$

subject to:

$$C_1^{BME} = \min\left( t_1 C\!\left( \frac{h_{01}^2 \alpha_1 P_0^{(1)}}{t_1} \right),\ t_1 C\!\left( \frac{h_{23}^2 \bar{\theta}_1 P_2}{h_{23}^2 \theta_1 P_2 + t_1} \right) + t_2 C\!\left( \frac{h_{13}^2 \theta_2 P_1}{t_2} \right) \right), \qquad (21)$$
$$C_2^{BME} = \min\left( t_2 C\!\left( \frac{h_{02}^2 \alpha_2 P_0^{(2)}}{t_2} \right),\ t_2 C\!\left( \frac{h_{13}^2 \bar{\theta}_2 P_1}{h_{13}^2 \theta_2 P_1 + t_2} \right) + t_1 C\!\left( \frac{h_{23}^2 \theta_1 P_2}{t_1} \right) \right), \qquad (22)$$
$$P_0^{(1)} + P_0^{(2)} = P_0, \quad t_1 + t_2 = 1, \quad 0 \le \alpha_1, \alpha_2 \le 1, \quad 0 \le \beta_1, \beta_2 \le 1, \quad 0 \le \theta_1, \theta_2 \le 1,$$

where $\bar{\theta}_i = 1 - \theta_i$, $\bar{\alpha}_i = 1 - \alpha_i$, and $\bar{\beta}_i = 1 - \beta_i$ for $i = 1, 2$.

Proof: Let $V_0^{(1)} \sim \mathcal{N}(0, \alpha_1 P_0^{(1)})$, $V_0^{(2)} \sim \mathcal{N}(0, \alpha_2 P_0^{(2)})$, $V_2^{(1)} \sim \mathcal{N}(0, \theta_1 P_2)$, $V_1^{(2)} \sim \mathcal{N}(0, \theta_2 P_1)$, $U_2^{(1)} \sim \mathcal{N}(0, \bar{\theta}_1 P_2)$, and $U_1^{(2)} \sim \mathcal{N}(0, \bar{\theta}_2 P_1)$, which are independent of each other. Letting

$$X_0^{(1)} = V_0^{(1)} + \sqrt{\frac{\bar{\alpha}_1 P_0^{(1)}}{\theta_1 P_2}}\, V_2^{(1)}, \quad X_0^{(2)} = V_0^{(2)} + \sqrt{\frac{\bar{\alpha}_2 P_0^{(2)}}{\theta_2 P_1}}\, V_1^{(2)}, \quad X_2^{(1)} = V_2^{(1)} + U_2^{(1)}, \quad X_1^{(2)} = V_1^{(2)} + U_1^{(2)},$$

and using these in the achievable-rate expression of Theorem 2, we obtain $C_{BME-succ}^{low}$ for the Gaussian case, as given in [32] and in (19), (21), and (22).

For backward decoding, let $V_0^{(1)} \sim \mathcal{N}(0, \beta_1 P_0^{(1)})$, $V_0^{(2)} \sim \mathcal{N}(0, \beta_2 P_0^{(2)})$, $X_2^{(1)} \sim \mathcal{N}(0, P_2)$, and $X_1^{(2)} \sim \mathcal{N}(0, P_1)$, which are independent of each other. By setting

$$X_0^{(1)} = V_0^{(1)} + \sqrt{\frac{\bar{\beta}_1 P_0^{(1)}}{P_2}}\, X_2^{(1)}, \quad X_0^{(2)} = V_0^{(2)} + \sqrt{\frac{\bar{\beta}_2 P_0^{(2)}}{P_1}}\, X_1^{(2)},$$

and using these in the achievable-rate expression of Theorem 2, we obtain $C_{BME-back}^{low}$ for the Gaussian case, as given in (20).

Proposition 2.
In symmetric scenarios, where $h_{01} = h_{02}$, $h_{13} = h_{23}$, and $P_1 = P_2$, the Non-Cooperative DPC scheme outperforms the Cooperative BME scheme, i.e., $C_{BME-back}^{low} \le C_{DPC}^{low}$.

Proof: Due to the symmetry assumption, we have $t_1 = t_2 = \frac{1}{2}$, $P_0^{(1)} = P_0^{(2)} = \frac{P_0}{2}$, and $\beta_1 = \beta_2$. Hence, from (20), we have

$$C_{BME-back}^{low} \le \min\Big( C(h_{01}^2 P_0),\ C(2 h_{13}^2 P_1) \Big). \qquad (23)$$

Also, $C_{DPC}^{low}$ in (12) becomes

$$C_{DPC}^{low} = \min\Big( C(h_{01}^2 P_0),\ \tfrac{1}{2} C(h_{01}^2 P_0) + \tfrac{1}{2} C(2 h_{13}^2 P_1),\ C(2 h_{13}^2 P_1) \Big). \qquad (24)$$

Comparing (23) and (24), we have $C_{BME-back}^{low} \le C_{DPC}^{low}$.

According to the discussion in Appendix B, $r_{Bin}^{(1)} = 0$ or $r_{Bin}^{(2)} = 0$. In other words, in the Cooperative BME scheme based on backward decoding, at most one relay needs to use the binning function for the message it receives from the other, and the other relay does not need to cooperate with it. Therefore, we propose a composite BME-DPC scheme. In this scheme, one of the relays decodes the other relay's message; having decoded it, it then uses the binning function to cooperate with the other relay. On the other hand, using the Gelfand-Pinsker result, the source cancels the interference caused by one relay on the other. Hence, we have the following theorem.
Theorem 3
The composite BME-DPC scheme achieves the following rate:

$$C_{BME-DPC}^{low} = \max_{0 \le t_1, t_2,\ t_1+t_2=1} \min\Big( t_1 I(X_0^{(1)}, X_2^{(1)}; Y_1^{(1)}),\ t_1 I(X_0^{(1)}; Y_1^{(1)} \mid X_2^{(1)}) + t_2 \big( I(U_0^{(2)}; Y_2^{(2)}) - I(U_0^{(2)}; X_1^{(2)}) \big),$$
$$\quad t_1 I(X_2^{(1)}; Y_3^{(1)}) + t_2 I(X_1^{(2)}; Y_3^{(2)}),\ t_2 \big( I(U_0^{(2)}; Y_2^{(2)}) - I(U_0^{(2)}; X_1^{(2)}) \big) + t_2 I(X_1^{(2)}; Y_3^{(2)}) \Big). \qquad (25)$$

Proof:
Assuming $r_{Bin}^{(1)} = 0$ and using Theorem 1 and Theorem 2, along with an argument similar to that in Appendix B, Theorem 3 is immediate.

Corollary 3.
For the Gaussian case, the composite BME-DPC scheme achieves the following rate $C_{BME-DPC}^{low}$. Furthermore, $C_{BME-DPC}^{low} \ge C_{BME-back}^{low}$; in other words, the composite BME-DPC scheme always achieves a better rate than the BME scheme in the Gaussian scenario.

$$C_{BME-DPC}^{low} = R^{(1)} + R^{(2)} \le \max \min\left( t_1 C\!\left( \frac{h_{01}^2 P_0^{(1)} + h_{21}^2 P_2 + 2 h_{01} h_{21} \sqrt{\bar{\alpha} P_0^{(1)} P_2}}{t_1} \right),\ t_1 C\!\left( \frac{h_{01}^2 \alpha P_0^{(1)}}{t_1} \right) + t_2 C\!\left( \frac{h_{02}^2 P_0^{(2)}}{t_2} \right),\right.$$
$$\left.\ t_1 C\!\left( \frac{h_{23}^2 P_2}{t_1} \right) + t_2 C\!\left( \frac{h_{13}^2 P_1}{t_2} \right),\ t_2 C\!\left( \frac{h_{02}^2 P_0^{(2)}}{t_2} \right) + t_2 C\!\left( \frac{h_{13}^2 P_1}{t_2} \right) \right), \qquad (26)$$

subject to:

$$P_0^{(1)} + P_0^{(2)} = P_0, \quad t_1 + t_2 = 1, \quad 0 \le t_1, t_2, P_0^{(1)}, P_0^{(2)}, \quad 0 \le \alpha \le 1,$$

where $\bar{\alpha} = 1 - \alpha$.

Proof: As in Theorem 3, we assume that $r_{Bin}^{(1)} = 0$. We now show that every rate pair $(R^{(1)}, R^{(2)})$ satisfying (101)-(107) satisfies (26). After specializing (101)-(107) to the Gaussian case and comparing with (26), one observes that the second term in the minimization (101) does not exist. Substituting $r_{Bin}^{(1)} = 0$ in (102)-(107),
one can obtain the other three corresponding terms. Comparing those terms with (26), it can readily be seen that $C^{low}_{BME\text{-}DPC} \ge C^{low}_{BME,back}$.

Fig. 6. Simultaneous relaying protocol for two relays.

Remark 1
Assuming $r^{(1)}_{Bin} = 0$, as in Theorem 3 and Corollary 3, the destination jointly decodes the current message and the bin index of the next message at the end of the even time slots, and it can then decode the next message at the end of the odd time slots. Therefore, backward decoding is not necessary in the BME-DPC scheme.

B. Simultaneous Relaying Protocol
Figure 6 shows the simultaneous relaying protocol. In simultaneous relaying, in one time slot of duration $t_3$ the source transmits its signal simultaneously to the two relays. In the next time slot of duration $t_4$, the two relays transmit their signals coherently to the destination. Hence, in this protocol, $t_1 = t_2 = 0$ and our system model follows from (6) and (7).
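As a rough numeric illustration of this two-slot protocol, the sketch below grid-searches the first-slot duration for a plain decode-and-forward strategy. It is a hedged sketch, not the paper's scheme: the capacity function $C(x) = \frac{1}{2}\log_2(1+x)$ per dimension with unit noise is assumed, the gain labels `h01, h02, h13, h23` and all power values are made-up, and both relays are required to decode the full message before retransmitting it coherently.

```python
import math

def C(x):
    # Gaussian capacity function, 0.5*log2(1+x) bits per dimension.
    return 0.5 * math.log2(1 + x)

# Assumed (illustrative) channel gains and powers, unit noise.
h01, h02 = 1.0, 1.0      # source -> relay 1, source -> relay 2
h13, h23 = 1.0, 1.0      # relay 1 -> destination, relay 2 -> destination
P0, P1, P2 = 1.0, 1.0, 1.0

def df_rate(t3):
    # First slot of duration t3: the weaker source-relay link limits decoding.
    t4 = 1 - t3
    r_bc = t3 * C(min(h01, h02) ** 2 * P0 / t3)
    # Second slot: both relays transmit the same codeword coherently.
    r_mac = t4 * C((h13 * math.sqrt(P1) + h23 * math.sqrt(P2)) ** 2 / t4)
    return min(r_bc, r_mac)

best_t3, best_rate = max(
    ((t / 1000, df_rate(t / 1000)) for t in range(1, 1000)),
    key=lambda p: p[1],
)
print(f"best t3 ~ {best_t3:.3f}, rate ~ {best_rate:.3f} bit/dim")
```

The per-slot power boost (dividing the SNR term by the slot duration) mirrors the paper's convention of writing $h^2 P / t$ inside $C(\cdot)$.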
1) Dynamic Decode-and-Forward (DDF):
In the DDF scheme, each relay decodes the message transmitted by the source in time slot $t_3$ (the Broadcast (BC) state) and forwards its re-encoded version in time slot $t_4$ (the Multiple Access (MAC) state). The following Theorem gives the achievable rate of the DDF scheme for general discrete memoryless channels.

Theorem 4
For the half-duplex parallel relay channel, assuming simultaneous relaying and that what the second relay receives is a degraded version of what the first relay receives, the following rate $C^{low}_{DDF}$ is achievable:
$$C^{low}_{DDF} = \max_{0 \le t_3, t_4,\; t_3 + t_4 = 1} R_p + R_c, \quad (27)$$
subject to:
$$R_p \le \min\Big(t_3 I\big(X^{(3)}_0; Y^{(3)}_1 \big| U^{(3)}_0\big),\; t_4 I\big(X^{(4)}_1; Y^{(4)}_3 \big| X^{(4)}_2\big)\Big), \quad (28)$$
$$R_c \le t_3 I\big(U^{(3)}_0; Y^{(3)}_2\big), \quad (29)$$
$$R_p + R_c \le t_4 I\big(X^{(4)}_1, X^{(4)}_2; Y^{(4)}_3\big), \quad (30)$$
with probabilities
$$p(u^{(3)}_0, x^{(3)}_0) = p(u^{(3)}_0)\, p(x^{(3)}_0 | u^{(3)}_0), \qquad p(x^{(4)}_1, x^{(4)}_2) = p(x^{(4)}_1)\, p(x^{(4)}_2 | x^{(4)}_1).$$

Proof:
The achievable rate of DDF is $C^{low}_{DDF} = R_p + R_c$, where $(R_p, R_c)$ should lie both in the capacity region of the BC (corresponding to the BC state) and of the MAC (corresponding to the MAC state). Applying the superposition coding of the degraded BC [12], the following rates are achievable for the first hop:
$$R_p \le t_3 I\big(X^{(3)}_0; Y^{(3)}_1 \big| U^{(3)}_0\big), \qquad R_c \le t_3 I\big(U^{(3)}_0; Y^{(3)}_2\big), \quad (31)$$
with probability $p(u^{(3)}_0, x^{(3)}_0) = p(u^{(3)}_0)\, p(x^{(3)}_0 | u^{(3)}_0)$. Using the superposition coding of the extended MAC (see [25], [26]), the following rates are achievable for the second hop:
$$R_p \le t_4 I\big(X^{(4)}_1; Y^{(4)}_3 \big| X^{(4)}_2\big), \qquad R_p + R_c \le t_4 I\big(X^{(4)}_1, X^{(4)}_2; Y^{(4)}_3\big), \quad (32)$$
with probability $p(x^{(4)}_1, x^{(4)}_2) = p(x^{(4)}_1)\, p(x^{(4)}_2 | x^{(4)}_1)$.

In the Gaussian case (assuming $h_{01} \ge h_{02}$), the source splits its total available power $P_0$ into $P^{(3)}_{0,p}$ and $P^{(3)}_{0,c}$, associated with the "Private" and the "Common" messages, respectively. Letting $X^{(3)}_0 \sim \mathcal{N}(0, P_0)$, $U^{(3)}_0 \sim \mathcal{N}\big(0, P^{(3)}_{0,c}\big)$, and $X^{(4)}_1 \sim \mathcal{N}(0, P_1)$, assuming that relay 1 and relay 2 transmit their codewords associated with the common message with $\mathcal{N}\big(0, P^{(4)}_{1,c}\big)$ and $\mathcal{N}(0, P_2)$, and using (31) and (32), we have the following corollary.

Corollary 4
For the half-duplex Gaussian parallel relay channel, assuming the simultaneous relaying protocol with power constraints at the source and at each relay, DDF achieves the following rate:
$$C^{low}_{DDF} = R_p + R_c, \quad (33)$$
subject to:
$$R_p \le \min\Bigg( t_3\, C\Bigg(\frac{h_{01}^2 P^{(3)}_{0,p}}{t_3}\Bigg),\; t_4\, C\Bigg(\frac{h_{13}^2 P^{(4)}_{1,p}}{t_4}\Bigg)\Bigg),$$
$$R_c \le t_3\, C\Bigg(\frac{h_{02}^2 P^{(3)}_{0,c}}{t_3 + h_{02}^2 P^{(3)}_{0,p}}\Bigg),$$
$$R_p + R_c \le t_4\, C\Bigg(\frac{h_{13}^2 P^{(4)}_{1,p} + \Big(h_{13}\sqrt{P^{(4)}_{1,c}} + h_{23}\sqrt{P_2}\Big)^2}{t_4}\Bigg),$$
$$P^{(3)}_{0,p} + P^{(3)}_{0,c} = P_0, \quad P^{(4)}_{1,p} + P^{(4)}_{1,c} = P_1, \quad t_3 + t_4 = 1, \quad 0 \le t_3, t_4, P^{(3)}_{0,p}, P^{(3)}_{0,c}, P^{(4)}_{1,p}, P^{(4)}_{1,c}.$$
Interestingly, successive decoding at the destination does not degrade the performance of the DDF scheme in the Gaussian scenario, as shown in the following Proposition.
Proposition 3
The rate of the DDF scheme is achievable by successive decoding of the common and private messages at the destination.

Proof:
Consider the sum rate of the common and private messages for the extended multiple access channel from the relays to the destination,
$$R_p + R_c \le t_4\, C\Bigg(\frac{h_{13}^2 P^{(4)}_{1,p} + \Big(h_{13}\sqrt{P^{(4)}_{1,c}} + h_{23}\sqrt{P_2}\Big)^2}{t_4}\Bigg). \quad (34)$$
It can be readily verified that, subject to the constraint $P^{(4)}_{1,p} + P^{(4)}_{1,c} = P_1$, the right-hand side of (34) is a decreasing function of $P^{(4)}_{1,p}$, or equivalently an increasing function of $P^{(4)}_{1,c}$. Now, let us equate $R_p$ in (34) with the private rate $\acute{R}_p$ of another MAC, which is achieved by successive decoding of the common and private messages. Therefore, we have
$$R_p = \acute{R}_p = t_4\, C\Bigg(\frac{h_{13}^2 \acute{P}^{(4)}_{1,p}}{t_4}\Bigg) \le t_4\, C\Bigg(\frac{h_{13}^2 P^{(4)}_{1,p}}{t_4}\Bigg). \quad (35)$$
According to (35), we have (see Fig. 7)
$$\acute{P}^{(4)}_{1,p} \le P^{(4)}_{1,p} \implies R_p + R_c \le \acute{R}_p + \acute{R}_c, \qquad R_c \le \acute{R}_c.$$
Hence, $(R_p, R_c)$ lies at the corner point of the extended MAC with parameters $\big(\acute{P}^{(4)}_{1,p}, \acute{P}^{(4)}_{1,c}\big)$, i.e., successive decoding of the common and private messages achieves the DDF rate.
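The corner-point argument can be sanity-checked numerically: for a Gaussian MAC, decoding the coherent common part first (with the private part treated as noise) and then the private part attains exactly the joint-decoding sum rate, via the identity $C(a/(1+b)) + C(b) = C(a+b)$. The gain and power values below are made-up for illustration.

```python
import math

def C(x):
    # Gaussian capacity function, 0.5*log2(1+x).
    return 0.5 * math.log2(1 + x)

# Assumed illustrative values: private power at relay 1, coherent common term.
h13, h23 = 1.5, 0.8
P1p, P1c, P2 = 0.4, 0.6, 1.0

b = h13 ** 2 * P1p                                     # private SNR term
a = (h13 * math.sqrt(P1c) + h23 * math.sqrt(P2)) ** 2  # coherent common term

joint_sum = C(a + b)      # joint decoding sum rate
Rc_succ = C(a / (1 + b))  # common decoded first, private treated as noise
Rp_succ = C(b)            # private decoded next, interference removed
assert abs((Rc_succ + Rp_succ) - joint_sum) < 1e-12
print("successive decoding attains the joint sum rate")
```

The identity holds because $\log(1 + \frac{a}{1+b}) + \log(1+b) = \log(1+a+b)$, which is exactly the chain-rule decomposition behind the corner point of the MAC region.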
Fig. 7. The order of decoding the "Common" and "Private" messages (axes: private rate versus common rate, showing the corner point with $\acute{R}_p = R_p$).
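The rate splitting of Corollary 4 can be explored numerically. The sketch below is a hedged illustration only, under assumptions: $C(x) = \frac{1}{2}\log_2(1+x)$ with unit noise, made-up gains with `h01 >= h02` (so relay 2 sees the degraded signal), and a coarse grid search over the BC-slot duration and the private/common power splits.

```python
import math

def C(x):
    return 0.5 * math.log2(1 + x)

# Assumed illustrative gains (h01 >= h02, relay 2 degraded) and powers.
h01, h02, h13, h23 = 2.0, 1.0, 1.0, 1.0
P0 = P1 = P2 = 1.0

def ddf_rate(t3, a, b):
    """t3: BC-slot duration; a: source private-power fraction;
    b: relay-1 private-power fraction."""
    t4 = 1 - t3
    P0p, P0c = a * P0, (1 - a) * P0  # source private / common split
    P1p, P1c = b * P1, (1 - b) * P1  # relay-1 private / common split
    # Private message: decoded only by relay 1, then sent by relay 1 alone.
    Rp = min(t3 * C(h01 ** 2 * P0p / t3),
             t4 * C(h13 ** 2 * P1p / t4))
    # Common message: relay 2 decodes it, treating the private part as noise.
    Rc = t3 * C(h02 ** 2 * P0c / (t3 + h02 ** 2 * P0p))
    # Sum constraint from the coherent multiple-access slot.
    Rsum = t4 * C((h13 ** 2 * P1p
                   + (h13 * math.sqrt(P1c) + h23 * math.sqrt(P2)) ** 2) / t4)
    return min(Rp + Rc, Rsum)

grid = [i / 20 for i in range(1, 20)]
best = max(ddf_rate(t3, a, b) for t3 in grid for a in grid for b in grid)
print(f"DDF rate (coarse grid) ~ {best:.3f} bit/dim")
```

Since $R_p \le A$, $R_c \le B$, and $R_p + R_c \le S$ can always be met by reducing one of the two rates, the maximum sum is $\min(A + B, S)$, which is what `ddf_rate` evaluates.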
C. Simultaneous-Successive Relaying Protocol based on Dirty paper coding (SSRD)
Fig. 8. SSRD scheme for the half-duplex parallel relay channel: a) time slot 1 with duration $t_1$; b) time slot 2 with duration $t_2$; c) time slot 3 with duration $t_3$; d) time slot 4 with duration $t_4$.
In this section, we propose an achievable rate for the half-duplex parallel relay channel. Our achievable scheme is based on the combination of the successive relaying protocol based on the DPC scheme and the simultaneous relaying protocol based on DDF (the SSRD scheme). Hence, we have the following Theorem.
Theorem 5
Considering Fig. 8, for the half-duplex parallel relay channel, SSRD scheme achieves the following
rate C lowSSRD : C lowSSRD =min ( R + R + R + R , R + R + R + R + R ) , (36) subject to: R ≤ R , R + R ≤ R + R , R ≤ R + R . (37) Proof:
SSRD scheme is illustrated in Fig. 8. As indicated in the figure, transmission is performed in 4 timeslots. Relay 1 transmits its private message which was received in time slots t and t (corresponding to rates R and R ) in time slots t and t (corresponding to rates R and R ). On the other hand, relay 2 transmits its privatemessage which has been received in time slot t (corresponding to rate R ) in time slots t and t (correspondingto rates R and R ). Furthermore, the two relays send the common message they have already received in time slot t (corresponding to rate R ) coherently in time slot t (corresponding to rate R ). As observed, here we considerthe private rate for both relays in the MAC state, i.e. time slot t . This is due to the reason that relay 2 also receivesthe private message in time slot t . Hence, from the above description and Fig. 8, we have C lowSSRD =min ( R + R + R + R , R + R + R + R + R ) , (38)subject to: R ≤ R , R + R ≤ R + R , R ≤ R + R . (39)Using corollaries 1, 4, and Proposition 3, for the Gaussian case we have DRAFT9 C lowSSRD =min t C h P (1)0 t ! + t C h P (2)0 t ! + t C h P (3)0 ,p t ! + t C h P (3)0 ,c t + h P (3)0 ,p ! ,t C h P (1)2 t ! + t C h P (2)1 t ! + t C h P (4)1 ,p + h P (4)2 ,p t ! + t C (cid:18) h q P (4)1 ,c + h q P (4)2 ,c (cid:19) t + h P (4)1 ,p + h P (4)2 ,p , (40)subject to: t C (cid:18) h q P (4)1 ,c + h q P (4)2 ,c (cid:19) t + h P (4)1 ,p + h P (4)2 ,p ≤ t C h P (3)0 ,c t + h P (3)0 ,p ! ,t C h P (1)0 t ! + t C h P (3)0 ,p t ! ≤ t C h P (2)1 t ! + t C h P (4)1 ,p t ! ,t C h P (2)0 t ! ≤ t C h P (1)2 t ! + t C h P (4)2 ,p t ! ,P (1)0 + P (2)0 + P (3)0 ,p + P (3)0 ,c = P ,P (2)1 + P (4)1 ,p + P (4)1 ,c = P ,P (1)2 + P (4)2 ,p + P (4)2 ,c = P ,t + t + t + t = 1 , ≤ t , t , t , t , P (1)0 , P (2)0 , P (3)0 ,p , P (3)0 ,c , P (2)1 , P (4)1 ,p , P (4)1 ,c , P (1)2 , P (4)2 ,p , P (4)2 ,c . According to corollary 3, another combined simultaneous-successive relaying protocol based on BME is notnecessary. 
However, a "Simultaneous-Successive Relaying protocol based on BME-DPC" can easily be derived. Assuming the first relay decodes the second relay's message, the achievable rate of this new scheme would be the same as $C^{low}_{SSRD}$. However, since the messages intended for the second relay are common, R in the expression of the achievable rate is zero. Furthermore, the following constraints should be satisfied instead of (39): R ≤ R + R , R + R ≤ R + R , and R + R ≤ $t_1 I\big(X^{(1)}_0, X^{(1)}_2; Y^{(1)}_1\big)$. (41)

IV. Optimality Results
In this section, an upper bound for the half-duplex parallel relay channel is derived and investigated. The authors in [27] proposed upper bounds on the achievable rate for general half-duplex multi-terminal networks. Here, we briefly explain their results and apply them to our half-duplex parallel relay network. The authors in [27] define the concept of a state for a half-duplex network with $N$ nodes. A state of the network is a valid partitioning of its nodes into a set of "sender nodes" and a set of "receiver nodes" such that no active link arrives at a sender node, and $\hat{t}_m$ is the portion of the time that the network is used in state $m$, where $m \in \{1, 2, \ldots, M\}$. The following Theorem, which upper-bounds the information flow from a subset $S_1$ to a disjoint subset $S_2$ of the nodes, is proved in [27].

Theorem 6
For a general half-duplex network with $N$ nodes and a finite number of states $M$, the maximum achievable information rates $\{R_{ij}\}$ from a node set $S_1$ to a disjoint node set $S_2$, $S_1, S_2 \subset \{0, 1, \ldots, N-1\}$, are bounded by
$$\sum_{i \in S_1,\, j \in S_2} R_{ij} \le \sup_{p(x^{(m)}_0, \ldots, x^{(m)}_{N-1}),\; \hat{t}_m} \; \min_{S} \; \sum_{m=1}^{M} \hat{t}_m\, I\big(X^{(m)}_S; Y^{(m)}_{S^c} \big| X^{(m)}_{S^c}\big), \quad (42)$$
for some joint probability distribution $p(x^{(m)}_0, \ldots, x^{(m)}_{N-1})$, where the minimization is over all sets $S \subset \{0, 1, \ldots, N-1\}$ subject to $S \cap S_1 = S_1$ and $S \cap S_2 = \emptyset$, and the supremum is over all non-negative $\hat{t}_m$ subject to $\sum_{m=1}^{M} \hat{t}_m = 1$. Here, $x^{(m)}_S$, $y^{(m)}_S$, and $x^{(m)}_{S^c}$ denote the signals transmitted and received by the nodes in set $S$, and transmitted by the nodes in set $S^c$, during state $m$, respectively.

From Theorem 6, the maximum achievable rate $C^{low}$ is upper bounded as
$$C^{low} \le C^{up} \triangleq \min\Big( \hat{t}_1 I\big(X^{(1)}_0; Y^{(1)}_1 \big| X^{(1)}_2\big) + \hat{t}_2 I\big(X^{(2)}_0; Y^{(2)}_2 \big| X^{(2)}_1\big) + \hat{t}_3 I\big(X^{(3)}_0; Y^{(3)}_1, Y^{(3)}_2\big),$$
$$\hat{t}_2 I\big(X^{(2)}_0, X^{(2)}_1; Y^{(2)}_2, Y^{(2)}_3\big) + \hat{t}_3 I\big(X^{(3)}_0; Y^{(3)}_2\big) + \hat{t}_4 I\big(X^{(4)}_1; Y^{(4)}_3 \big| X^{(4)}_2\big),$$
$$\hat{t}_1 I\big(X^{(1)}_0, X^{(1)}_2; Y^{(1)}_1, Y^{(1)}_3\big) + \hat{t}_3 I\big(X^{(3)}_0; Y^{(3)}_1\big) + \hat{t}_4 I\big(X^{(4)}_2; Y^{(4)}_3 \big| X^{(4)}_1\big),$$
$$\hat{t}_1 I\big(X^{(1)}_2; Y^{(1)}_3\big) + \hat{t}_2 I\big(X^{(2)}_1; Y^{(2)}_3\big) + \hat{t}_4 I\big(X^{(4)}_1, X^{(4)}_2; Y^{(4)}_3\big)\Big), \quad (43)$$
subject to $\hat{t}_1 + \hat{t}_2 + \hat{t}_3 + \hat{t}_4 = 1$. By setting $\hat{t}_3 = \hat{t}_4 = 0$ in (43), we obtain an upper bound on the successive relaying protocol, which we call the successive cut-set bound in the sequel.

Theorem 7
In a degraded half-duplex parallel relay channel, where the destination receives a degraded version of the signals received at the relays, i.e., $X^{(1)}_2 \to Y^{(1)}_1 \to Y^{(1)}_3$ and $X^{(2)}_1 \to Y^{(2)}_2 \to Y^{(2)}_3$, BME based on backward decoding achieves the successive cut-set bound.

Proof: Setting $\hat{t}_3 = \hat{t}_4 = 0$ in (43) and comparing the result with (16), the Theorem is proved. In high SNR scenarios, we have the following Theorem.

Theorem 8
In high SNR scenarios, assuming non-zero source-relay and relay-destination links, when the power available at the source and at each relay tends to infinity, the time slots $\hat{t}_3$ and $\hat{t}_4$ in (43) tend to zero as $O\big(\tfrac{1}{\ln P_0}\big)$. Furthermore, the upper bound on the capacity of the half-duplex parallel relay channel in high SNR scenarios is
$$C^{up} = C^{low}_{DPC} + O\Big(\frac{1}{\ln P_0}\Big).$$
In other words, DPC achieves the capacity of the half-duplex Gaussian parallel relay channel as SNR goes to infinity.

Proof:
Throughout the proof, we assume the power of the relays goes to infinity as P = γ P , P = γ P where γ , γ are constants independent of the SNR. Substituting X (1)0 ∼ N (0 , ˆ P (1)0 ) , X (2)0 ∼ N (0 , ˆ P (2)0 ) , X (3)0 ∼N (0 , ˆ P (3)0 ) , X (2)1 ∼ N (0 , ˆ P (2)1 ) , X (4)1 ∼ N (0 , ˆ P (4)1 ) , X (1)2 ∼ N (0 , ˆ P (1)2 ) , and X (4)2 ∼ N (0 , ˆ P (4)2 ) in (43), andassuming complete cooperation between the transmitting and receiving nodes for each cut in (43), we have C up ≤ min ˆ t C h ˆ P (1)0 ˆ t ! + ˆ t C h ˆ P (2)0 ˆ t ! + ˆ t C ( h + h ) ˆ P (3)0 ˆ t ! , ˆ t C h ˆ P (2)0 ˆ t + ( h + h ) ˆ P (2)1 ˆ t + 2 h h q ˆ P (2)0 ˆ P (2)1 ˆ t + h h ˆ P (2)0 ˆ P (2)1 ˆ t +ˆ t C h ˆ P (3)0 ˆ t ! + ˆ t C h ˆ P (4)1 ˆ t ! , ˆ t C h ˆ P (1)0 ˆ t + ( h + h ) ˆ P (1)2 ˆ t + 2 h h q ˆ P (1)0 ˆ P (1)2 ˆ t + h h ˆ P (1)0 ˆ P (1)2 ˆ t +ˆ t C h ˆ P (3)0 ˆ t ! + ˆ t C h ˆ P (4)2 ˆ t ! , ˆ t C h ˆ P (1)2 ˆ t ! + ˆ t C h ˆ P (2)1 ˆ t ! +ˆ t C h ˆ P (4)1 + h ˆ P (4)2 + 2 h h q ˆ P (4)1 ˆ P (4)2 ˆ t . (44)subject to: ˆ P (1)0 + ˆ P (2)0 + ˆ P (3)0 = P , ˆ P (2)1 + ˆ P (4)1 = P , ˆ P (1)2 + ˆ P (4)2 = P , ˆ t + ˆ t + ˆ t + ˆ t = 1 , ≤ ˆ t , ˆ t , ˆ t , ˆ t , ˆ P (1)0 , ˆ P (2)0 , ˆ P (3)0 , ˆ P (2)1 , ˆ P (4)1 , ˆ P (1)2 , ˆ P (4)2 . Furthermore, from corollary 1, the achievable rate of the DPC scheme can be expressed as C lowDP C = min t C h P (1)0 t ! + t C h P (2)0 t ! ,t C h P (2)0 t ! + t C (cid:18) h P t (cid:19) ,t C h P (1)0 t ! + t C (cid:18) h P t (cid:19) ,t C (cid:18) h P t (cid:19) + t C (cid:18) h P t (cid:19)(cid:19) . (45) DRAFT2
By setting $P^{(1)}_0 = P^{(2)}_0 = \frac{P_0}{2}$ and $t_1 = t_2 = 0.5$ in (45), expression (45) can be simplified as
$$C^{low}_{DPC} \ge \frac{1}{2}\ln P_0 + c, \quad (46)$$
where $c$ is some constant which depends on the channel coefficients. Knowing that the term corresponding to each cut-set in (44), for the optimum values of $\hat{t}_1, \cdots, \hat{t}_4$, is indeed an upper bound for $C^{low}_{DPC}$, and by setting $\hat{P}^{(1)}_0 = \hat{P}^{(2)}_0 = \hat{P}^{(3)}_0 = P_0$ in (44), we have the following inequality between (46) and the first cut of (44):
12 ln P + c ≤ ˆ t (cid:18) h P ˆ t (cid:19) + ˆ t (cid:18) h P ˆ t (cid:19) + ˆ t (cid:18) ( h + h ) P ˆ t (cid:19) +ˆ t h P + ˆ t h P + ˆ t h + h ) P = (cid:0) − ˆ t (cid:1) P + ˆ t h + ˆ t h + ˆ t (cid:0) h + h (cid:1) − ˆ t t − ˆ t t − ˆ t t + ˆ t h P + ˆ t h P + ˆ t h + h ) P . (47)Note that in deriving (46) and (47), the following inequality is applied to lower/upper-bound the correspondingterms: ln( x ) ≤ ln(1 + x ) ≤ ln( x ) + 1 x , ∀ x > . (48)Consequently, we have ˆ t ≤ P (cid:0) c + ˆ t ln h + ˆ t ln h + ˆ t ln (cid:0) h + h (cid:1) − ˆ t ln ˆ t − ˆ t ln ˆ t − ˆ t ln ˆ t (cid:1) + 1ln P (cid:18) ˆ t h P + ˆ t h P + ˆ t ( h + h ) P (cid:19) . Hence, we can bound the optimum value of ˆ t in (44) as ≤ ˆ t ≤ O (cid:18) P (cid:19) . (49)Similarly, by considering the fourth cut in (44), we can derive another bound on the optimum value of ˆ t as follows: ≤ ˆ t ≤ O (cid:18) P (cid:19) . (50)Applying the inequality between (46) and the term corresponding to the second cut in (44), knowing (from (49)and (50)) the fact that ˆ t ≤ c ln P , and ˆ t ≤ c ln P (where c and c are constants), and using inequalities (48), and ln(1 + x ) ≤ x, ∀ x ≥ , (51) DRAFT3 we obtain
12 ln P + c ≤ ˆ t h h γ P ˆ t t γ h P + ˆ t (cid:0) h + h (cid:1) h h P + ˆ t h h h √ γ P !! +ˆ t (cid:18) h P ˆ t (cid:19) + ˆ t (cid:18) h γ P ˆ t (cid:19) +ˆ t (cid:0) ˆ t h P + ˆ t γ ( h + h ) P + 2ˆ t h h √ γ P + h h γ P (cid:1) +ˆ t h P + ˆ t γ h P ≤ ˆ t ln P + ˆ t (cid:18) h h γ ˆ t (cid:19) + ˆ t γ h P + ˆ t (cid:0) h + h (cid:1) h h P + ˆ t h h h √ γ P + c P ln h − c P ln ˆ t + c c P ln γ h − c P ln ˆ t + c t (cid:0) ˆ t h P + ˆ t γ ( h + h ) P + 2ˆ t h h √ γ P + h h γ P (cid:1) +ˆ t h P + ˆ t γ h P Therefore, we have
$$\frac{1}{2}\ln P_0 + c \le \hat{t}_2 \ln P_0 + \acute{c} + O\Big(\frac{1}{\ln P_0}\Big).$$
Hence,
$$\frac{1}{2} - \frac{c}{\log P_0} \le \hat{t}_2. \quad (52)$$
Similarly, from the third cut of (44), for $\hat{t}_1$ we have
$$\frac{1}{2} - \frac{c}{\log P_0} \le \hat{t}_1. \quad (53)$$
From (52) and (53), and also the fact that $\hat{t}_1 + \hat{t}_2 + \hat{t}_3 + \hat{t}_4 = 1$, we obtain
$$\frac{1}{2} - \frac{c}{\log P_0} \le \hat{t}_1 \le \frac{1}{2} + \frac{c}{\log P_0}, \quad (54)$$
$$\frac{1}{2} - \frac{c}{\log P_0} \le \hat{t}_2 \le \frac{1}{2} + \frac{c}{\log P_0}. \quad (55)$$
Hence, from (49), (50), (54), and (55), as $P_0 \to \infty$, $\hat{t}_3, \hat{t}_4 \to 0$ and $\hat{t}_1, \hat{t}_2 \to 0.5$. This proves the first part of the Theorem. Moreover, knowing that each term corresponding to the four cuts in (44) is greater than $\frac{1}{2}\ln(P_0) + c$, and as $\hat{t}_1, \hat{t}_2$ are strictly above zero (approaching 0.5), we can easily conclude that
$$\hat{P}^{(1)}_0,\; \hat{P}^{(2)}_0,\; \hat{P}^{(2)}_1,\; \hat{P}^{(1)}_2 \sim \Theta(P_0). \quad (56)$$
Now, we prove that the DPC scheme with the parameters t = ˆ t + ˆ t +ˆ t , t = ˆ t + ˆ t +ˆ t , P (1)0 = ˆ P (1)0 and P (2)0 = ˆ P (2)0 , where ˆ t , · · · , ˆ t , ˆ P (1)0 , ˆ P (2)0 are the parameters corresponding to the maximum value of (44), achievesthe capacity with a gap no more than O (cid:16) P (cid:17) . To prove this, we show that each of the four terms in (45) is nomore than O (cid:16) P (cid:17) below the corresponding term (from the same cut) in (44). To show this, for the first cut wehave ˆ t C h ˆ P (1)0 ˆ t ! + ˆ t C h ˆ P (2)0 ˆ t ! + ˆ t C ( h + h ) ˆ P (3)0 ˆ t ! − t C h P (1)0 t ! − t C h P (2)0 t ! ( a ) ≤ ˆ t h ˆ P (1)0 ˆ t ! + ˆ t h ˆ P (2)0 ˆ t ! + ˆ t C ( h + h ) ˆ P (3)0 ˆ t ! − (cid:18) ˆ t t + ˆ t (cid:19) ln h ˆ P (1)0 t ! − (cid:18) ˆ t t + ˆ t (cid:19) ln h ˆ P (2)0 t ! + ˆ t h ˆ P (1)0 + ˆ t h ˆ P (2)0 ( b ) . ˆ t h ˆ P (1)0 ˆ t ! + ˆ t h ˆ P (2)0 ˆ t ! + ˆ t (cid:18) ( h + h ) P ˆ t + ˆ t (cid:19) − (cid:18) ˆ t t + ˆ t (cid:19) ln h ˆ P (1)0 t ! − (cid:18) ˆ t t + ˆ t (cid:19) ln h ˆ P (2)0 t ! + O (cid:18) P (cid:19) ( c ) . ˆ t P q ˆ P (1)0 ˆ P (2)0 − ˆ t (cid:16) ˆ P (1)0 ˆ P (2)0 (cid:17) + O (cid:18) P (cid:19) ( d ) . O (cid:18) P (cid:19) . (57)Here, ( a ) follows from (48), noting the function ˆ t ln( P − x − y ) + ˆ t ln( y ) + ˆ t ln (cid:0) ˆ t + (cid:0) h + h (cid:1) x (cid:1) takes itsmaximum value at x ≤ ˆ t ˆ t +ˆ t P and hence substituting ˆ P (3)0 = ˆ t ˆ t +ˆ t P and finally noting ˆ P (1)0 , ˆ P (2)0 ∼ Θ( P ) result in ( b ) , ( c ) follows from ˆ t , ˆ t ∼ O (cid:16) P (cid:17) and ln (cid:16) t ˆ t (cid:17) ∼ O (cid:16) P (cid:17) , and finally ( d ) follows from ˆ P (1)0 , ˆ P (2)0 ∼ Θ( P ) . DRAFT5
Next, we bound the difference between the terms in the fourth cut of (44) and the fourth term in C lowDP C ˆ t C h ˆ P (1)2 ˆ t ! + ˆ t C h ˆ P (2)1 ˆ t ! + ˆ t C h ˆ P (4)1 + h ˆ P (4)2 + 2 h h q ˆ P (4)1 ˆ P (4)2 ˆ t − t C (cid:18) h P t (cid:19) − t C (cid:18) h P t (cid:19) ( a ) . ˆ t h ˆ P (1)2 ˆ t ! + ˆ t h ˆ P (2)1 ˆ t ! + ˆ t C h ˆ P (4)1 + h ˆ P (4)2 + 2 h h q ˆ P (4)1 ˆ P (4)2 ˆ t − (cid:18) ˆ t t + ˆ t (cid:19) ln (cid:18) h P t (cid:19) − (cid:18) ˆ t t + ˆ t (cid:19) ln (cid:18) h P t (cid:19) + O (cid:18) P (cid:19) ( b ) . ˆ t (cid:18) h P ˆ t (cid:19) + ˆ t (cid:18) h P ˆ t (cid:19) + ˆ t ln h s P ˆ t + ˆ t + h s P ˆ t + ˆ t ! − (cid:18) ˆ t t + ˆ t (cid:19) ln (cid:18) h P t (cid:19) − (cid:18) ˆ t t + ˆ t (cid:19) ln (cid:18) h P t (cid:19) + O (cid:18) P (cid:19) ( c ) . ˆ t q (ˆ t + ˆ t )(ˆ t + ˆ t ) + h h (ˆ t + ˆ t ) r P P + h (ˆ t + ˆ t ) h r P P − ˆ t P P ) + O (cid:18) P (cid:19) ( d ) . O (cid:18) P (cid:19) . (58)Here, ( a ) follows from (48) and noting ˆ P (2)1 , ˆ P (1)2 ∼ Θ( P ) , noting the function ˆ t ln( P − y ) + ˆ t ln( P − x ) +ˆ t ln (cid:16) ˆ t + (cid:0) h √ x + h √ y (cid:1) (cid:17) takes its maximum value at x ≤ ˆ t ˆ t +ˆ t P , y ≤ ˆ t ˆ t +ˆ t P and hence substituting ˆ P (4)1 = ˆ t ˆ t +ˆ t P and ˆ P (4)2 = ˆ t ˆ t +ˆ t P result in ( b ) , ( c ) follows from ˆ t , ˆ t ∼ O (cid:16) P (cid:17) and ˆ t , ˆ t ∼ . O (cid:16) P (cid:17) , and finally ( d ) follows from the facts that P P ∼ Θ(1) , ˆ t + ˆ t , ˆ t + ˆ t ∼ Θ(1) , and ˆ t ∼ O ( P ) .Next, we bound the difference between the terms in the second cut of (44) and the second term in C lowDP C ˆ t C h ˆ P (2)0 ˆ t + ( h + h ) ˆ P (2)1 ˆ t + 2 h h q ˆ P (2)0 ˆ P (2)1 ˆ t + h h ˆ P (2)0 ˆ P (2)1 ˆ t + ˆ t C h ˆ P (3)0 ˆ t ! +ˆ t C h ˆ P (4)1 ˆ t ! − t C h P (2)0 t ! − t C (cid:18) h P t (cid:19) ( a ) . ˆ t h h ˆ P (2)0 ˆ P (2)1 ˆ t ! + ˆ t C h ˆ P (3)0 ˆ t ! + ˆ t C h ˆ P (4)1 ˆ t ! − (cid:18) ˆ t t + ˆ t (cid:19) ln h h ˆ P (2)0 P t ! + O (cid:18) P (cid:19) ( b ) . 
ˆ t h h ˆ P (2)0 P ˆ t ! + ˆ t (cid:18) h P ˆ t + ˆ t (cid:19) + ˆ t (cid:18) h P ˆ t + ˆ t (cid:19) − (cid:18) ˆ t t + ˆ t (cid:19) ln h ˆ P (2)0 t ! − (cid:18) ˆ t t + ˆ t (cid:19) ln (cid:18) h P t (cid:19) + O (cid:18) P (cid:19) ( c ) . ˆ t P ˆ P (2)0 P ! + ˆ t P ˆ P (2)0 ! + O (cid:18) P (cid:19) ( d ) . O (cid:18) P (cid:19) . (59)Here, ( a ) follows from (48), the fact that P (2)0 = ˆ P (2)0 ∼ Θ ( P ) and upper-bounding ˆ P (3)0 ≤ P , ˆ P (4)1 ≤ P ,noting the facts that ˆ P (2)0 + ˆ P (3)0 ≤ P and ˆ P (2)1 + ˆ P (4)1 = P , the functions ˆ t ln( P − x ) + ˆ t ln (cid:0) ˆ t + h x (cid:1) and DRAFT6 ˆ t ln( P − y )+ˆ t ln (cid:0) ˆ t + h y (cid:1) are maximized at x ≤ ˆ t ˆ t +ˆ t P and y ≤ ˆ t ˆ t +ˆ t P , hence, substituting ˆ P (3)0 = ˆ t ˆ t +ˆ t P and ˆ P (4)1 = ˆ t ˆ t +ˆ t P upper-bounds the expression which results in ( b ) , ( c ) follows from ˆ t , ˆ t ∼ O (cid:16) P (cid:17) , ˆ t , ˆ t ∼ . O (cid:16) P (cid:17) , and finally ( d ) follows from the fact that ˆ P (2)0 , P ∼ Θ ( P ) and also ˆ t , ˆ t ∼ O (cid:16) P (cid:17) .Noting that the second and the third cuts are the same, and using the same argument as in (59), we can boundthe difference between the terms in the third cut of (44) and the third term in C lowDP C as ˆ t C h ˆ P (1)0 ˆ t + ( h + h ) ˆ P (1)2 ˆ t + 2 h h q ˆ P (1)0 ˆ P (1)2 ˆ t + h h ˆ P (1)0 ˆ P (1)2 ˆ t +ˆ t C h ˆ P (3)0 ˆ t ! + ˆ t C h ˆ P (4)2 ˆ t ! − t C h P (1)0 t ! − t C (cid:18) h P t (cid:19) ≤ O (cid:18) P (cid:19) . (60)Observing (57), (58), (59) and (60), completes the proof of the Theorem. Theorem 9
In low SNR scenarios, assuming $P_1 = \gamma_1 P_0$ and $P_2 = \gamma_2 P_0$, with $\gamma_1, \gamma_2$ constants independent of the SNR, when the power available at the source and at each relay tends to zero and $\big(h_{13}\sqrt{\gamma_1} + h_{23}\sqrt{\gamma_2}\big)^2 \le \min\big(h_{01}^2, h_{02}^2\big)$, the ratio of the achievable rate of the simultaneous relaying protocol based on DDF to the cut-set upper bound goes to 1. In this scenario $t_3 = t_4 = \frac{1}{2}$, and no private messages should be transmitted.

Proof: By the same argument as in Theorem 8, and considering only the fourth cut, we obtain another upper bound on the capacity. By the inequality
$$\ln(1 + x) \le x, \quad (61)$$
we can bound the upper bound on the capacity as
$$C^{up} \le \frac{\big(h_{13}\sqrt{\gamma_1} + h_{23}\sqrt{\gamma_2}\big)^2 P_0}{2}. \quad (62)$$
Now, assuming $t_1 = t_2 = 0$, $t_3 = t_4 = \frac{1}{2}$, and transmitting just the common message, we can achieve the following rate $C^{low}_{DDF}$:
$$C^{low}_{DDF} = \min\Big( \tfrac{1}{2}\, C\big(2 h_{02}^2 P_0\big),\; \tfrac{1}{2}\, C\big(2 \big(h_{13}\sqrt{\gamma_1} + h_{23}\sqrt{\gamma_2}\big)^2 P_0\big)\Big). \quad (63)$$
According to the Taylor expansion of $\ln(1+x)$ at $x = 0$, we have
$$x - \frac{x^2}{2} \le \ln(1 + x). \quad (64)$$
Hence,
$$\min\Bigg( \frac{h_{02}^2 P_0}{2} - \frac{\big(h_{02}^2 P_0\big)^2}{2},\; \frac{\big(h_{13}\sqrt{\gamma_1} + h_{23}\sqrt{\gamma_2}\big)^2 P_0}{2} - \frac{\Big(\big(h_{13}\sqrt{\gamma_1} + h_{23}\sqrt{\gamma_2}\big)^2 P_0\Big)^2}{2}\Bigg) \le C^{low}_{DDF}. \quad (65)$$
By (62), (65), and $\big(h_{13}\sqrt{\gamma_1} + h_{23}\sqrt{\gamma_2}\big)^2 \le \min\big(h_{01}^2, h_{02}^2\big)$, we have
$$\lim_{P_0 \to 0} \frac{C^{low}_{DDF}}{C^{up}} = 1. \quad (66)$$
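The elementary logarithm bounds invoked in these proofs — (48), (51)/(61), and the Taylor bound (64) — can be spot-checked numerically (a quick sanity sketch, not part of the proofs):

```python
import math

def check_bounds(x):
    # (48): ln(x) <= ln(1+x) <= ln(x) + 1/x, valid for x > 0.
    assert math.log(x) <= math.log1p(x) <= math.log(x) + 1 / x
    # (51)/(61): ln(1+x) <= x for x >= 0.
    assert math.log1p(x) <= x
    # (64): Taylor lower bound x - x^2/2 <= ln(1+x) for x >= 0.
    assert x - x * x / 2 <= math.log1p(x)

for x in [1e-6, 1e-3, 0.1, 1.0, 10.0, 1e6]:
    check_bounds(x)
print("all logarithm bounds verified on the sample grid")
```

`math.log1p` is used instead of `math.log(1 + x)` so the small-`x` cases are evaluated accurately.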
V. Simulation Results
In this section, the achievable rates of the different proposed schemes, i.e., SSRD, DPC, BME, and BME-DPC, are compared with each other and with the upper bound under different channel conditions.

Figure 9 compares the achievable rate of the SSRD scheme with that of the DPC scheme for the successive relaying protocol and the DDF scheme for the simultaneous relaying protocol. Here the symmetric scenario in which $P_1 = P_2$ and $h_{01} = h_{02} = h_{12} = h_{13} = h_{23} = 1$ is considered. The upper bound is also included in the figure. In order to satisfy the condition of Theorem 9, i.e., $\big(h_{13}\sqrt{\gamma_1} + h_{23}\sqrt{\gamma_2}\big)^2 \le \min\big(h_{01}^2, h_{02}^2\big)$, in Figs. 9a and 9b we also assume $P_0 = P_1 + 10$ (dB) $= P_2 + 10$ (dB) and $P_0 = P_1 + 5$ (dB) $= P_2 + 5$ (dB), respectively. As Figs. 9a and 9b show, the SSRD achievable rate almost coincides with the upper bound over the whole range of SNR. As proved in the previous section, in the high SNR scenario the SSRD scheme coincides with DPC and the successive relaying protocol becomes optimum, while in the low SNR scenario it coincides with DDF and the simultaneous relaying protocol is optimum.

On the other hand, in Figs. 9c and 9d we assume that $P_0 = P_1 = P_2$ and $P_0 = P_1 - 5$ (dB) $= P_2 - 5$ (dB), respectively. In this situation, the condition of Theorem 9 is no longer satisfied. Therefore, as these figures show, the ratio of the achievable rate of the SSRD scheme to the cut-set bound, i.e., $C^{low}_{SSRD} / C^{up}$, does not tend to one. Furthermore, the achievable rates of the SSRD, DPC, and DDF schemes coincide with each other.

Figure 10 compares the achievable rates of the different successive schemes with each other and with the successive cut-set bound. It shows that as the inter-relay channel becomes stronger, the BME scheme approaches the successive cut-set bound, while the achievable rate of DPC is independent of that channel. Furthermore, this figure indicates that BME-DPC achieves a better rate than BME with successive decoding, which was proposed in [32].

VI. Conclusion
In this paper, we investigated cooperative strategies for the half-duplex parallel relay channel with two relays. We derived the optimum relay ordering, and hence the asymptotic capacity of the half-duplex Gaussian parallel relay channel, in the low and high SNR scenarios.
Simultaneous and Successive relaying protocols, associated with the two possible relay orderings, were proposed. For simultaneous relaying, each relay employs DDF. On the other hand, for successive relaying, we proposed a Non-Cooperative Coding scheme based on DPC and a Cooperative Coding scheme based on BME. Moreover, a coding scheme based on the combination of DPC and BME, in which one of the relays uses DPC while the other employs BME, was proposed. We showed that, in the Gaussian case, this composite scheme achieves a better rate than cooperative coding based on BME with backward or successive decoding. We also proposed the SSRD scheme as a combination of the simultaneous and successive protocols based on DPC. In high SNR scenarios, we proved that our Non-Cooperative Coding scheme based on DPC asymptotically achieves the capacity; hence, in the high SNR scenario, the optimum relay ordering is Successive. On the other hand, in low SNR scenarios where $\big(h_{13}\sqrt{\gamma_1} + h_{23}\sqrt{\gamma_2}\big)^2 \le \min\big(h_{01}^2, h_{02}^2\big)$, DDF achieves the capacity; hence, in the low SNR scenario, and under this condition on the channel coefficients, the optimum relay ordering is Simultaneous.

Fig. 9.
Rate versus relay power (rate in bits per dimension; curves: cut-set bound, SSRD, DPC, DDF). Panels: a) $P_0 = P_1 + 10$ (dB) $= P_2 + 10$ (dB); b) $P_0 = P_1 + 5$ (dB) $= P_2 + 5$ (dB); c) $P_0 = P_1 = P_2$; d) $P_0 = P_1 - 5$ (dB) $= P_2 - 5$ (dB).

Appendix A
Proof of Theorem 1

Codebook Construction:
Let us divide the time slots $b$, $b = 1, 2, \cdots, B + 1$, into odd and even ones. At the odd and even time slots, the source generates $2^{n r^{(1)}_{AUX}}$ and $2^{n r^{(2)}_{AUX}}$ sequences $u^{(1)}_0(q)$ and $u^{(2)}_0(q)$ according to $\prod_{i=1}^{t_1 n} p\big(u^{(1)}_{0,i}\big)$ and $\prod_{i=1}^{t_2 n} p\big(u^{(2)}_{0,i}\big)$, respectively. Then, the source throws the $u^{(1)}_0$ and $u^{(2)}_0$ sequences uniformly into $2^{n R^{(1)}}$ and $2^{n R^{(2)}}$ bins, respectively. Let us denote by $B\big(w^{(b)}\big)$ the set of sequences at the odd or even time slot that belong to the $w^{(b)}$'th bin (for odd time slots, $w^{(b)} \le 2^{n R^{(1)}}$, and for even time slots, $w^{(b)} \le 2^{n R^{(2)}}$).

Relay 1 and relay 2 generate $2^{n R^{(1)}}$ and $2^{n R^{(2)}}$ i.i.d. $x^{(2)}_1$ and $x^{(1)}_2$ sequences according to the probabilities $\prod_{i=1}^{t_2 n} p\big(x^{(2)}_{1,i}\big)$ and $\prod_{i=1}^{t_1 n} p\big(x^{(1)}_{2,i}\big)$. Furthermore, for all $q_1$ and $q_2$, the source generates the double-indexed codebooks $x^{(1)}_0\big(w^{(b)} \big| w^{(b-1)}, q_1\big)$ and $x^{(2)}_0\big(w^{(b)} \big| w^{(b-1)}, q_2\big)$ according to $\prod_{i=1}^{t_1 n} p\big(x^{(1)}_{0,i} \big| x^{(1)}_{2,i}, u^{(1)}_{0,i}\big)$ and $\prod_{i=1}^{t_2 n} p\big(x^{(2)}_{0,i} \big| x^{(2)}_{1,i}, u^{(2)}_{0,i}\big)$, respectively.

Encoding:

Encoding at the source:
Fig. 10. Rate versus inter-relay gain, for $P_0 = P_1 = P_2 = 10$ (dB), h = h = 1 (dB), h = h = 10 (dB) (curves: successive cut-set bound, BME with successive decoding, DPC, BME-DPC).
At the odd time slot $b$, the source intends to send the message $w^{(b)}$ to the first relay. To do so, since the source knows what it has transmitted to the second relay during the last time slot, it chooses a codeword $u^{(1)}_0(q)$ such that $u^{(1)}_0(q) \in B\big(w^{(b)}\big)$ and $\big(u^{(1)}_0(q), x^{(1)}_2\big(w^{(b-1)}\big)\big) \in A^{(n)}_\epsilon$. Such a task can be done almost surely if $r^{(1)}_{AUX} - R^{(1)} \ge t_1 I\big(U^{(1)}_0; X^{(1)}_2\big)$ (see [12]). Following that, it sends $x^{(1)}_0\big(u^{(1)}_0, x^{(1)}_2\big)$.

At the even time slot $b$, the source sends the message $w^{(b)}$ to the second relay in a similar manner. Such a task can be done almost surely if $r^{(2)}_{AUX} - R^{(2)} \ge t_2 I\big(U^{(2)}_0; X^{(2)}_1\big)$.

Encoding at relay 1:
At the even time slot $b$, relay 1 encodes $w^{(b-1)} \in \{1, \cdots, 2^{nR^{(1)}}\}$ to $x_1^{(2)}\big(w^{(b-1)}\big)$.

Encoding at relay 2:
At the odd time slot $b$, relay 2 encodes $w^{(b-1)} \in \{1, \cdots, 2^{nR^{(2)}}\}$ to $x_2^{(1)}\big(w^{(b-1)}\big)$.

Decoding:

Decoding at relay 1:
At the odd time slot $b$, relay 1 declares $\hat{w}^{(b)} = w^{(b)}$ iff all the sequences $u_0^{(1)}(q)$ which are jointly typical with $y_1^{(1)}$ belong to a unique bin $\mathcal{B}(\hat{w}^{(b)})$. Therefore, in order to make the probability of error zero, from [12], we have

$$ r_{\mathrm{AUX}}^{(1)} \le t_1 I\left(U_0^{(1)}; Y_1^{(1)}\right). \quad (67) $$

According to (67) and the encoding condition at the source, we have

$$ R^{(1)} \le t_1 \left( I(U_0^{(1)}; Y_1^{(1)}) - I(U_0^{(1)}; X_2^{(1)}) \right). \quad (68) $$

Decoding at relay 2:
At the even time slot $b$, relay 2 declares $\hat{w}^{(b)} = w^{(b)}$ iff all the sequences $u_0^{(2)}(q)$ which are jointly typical with $y_2^{(2)}$ belong to a unique bin $\mathcal{B}(\hat{w}^{(b)})$. Therefore, in order to make the probability of error zero, from [12], we have

$$ r_{\mathrm{AUX}}^{(2)} \le t_2 I\left(U_0^{(2)}; Y_2^{(2)}\right). \quad (69) $$

According to (69) and the encoding condition at the source, we have

$$ R^{(2)} \le t_2 \left( I(U_0^{(2)}; Y_2^{(2)}) - I(U_0^{(2)}; X_1^{(2)}) \right). \quad (70) $$

Decoding at the final destination:
At the odd time slot $b$, the destination declares $\hat{w}^{(b-1)} = w^{(b-1)}$ iff $\left(x_2^{(1)}\big(\hat{w}^{(b-1)}\big), y_3^{(1)}\right) \in A_\epsilon^{(n)}$. Since $w^{(b-1)}$ was sent by the source in the even time slot $b-1$, in order to make the probability of error zero, from [12], we have

$$ R^{(2)} \le t_1 I\left(X_2^{(1)}; Y_3^{(1)}\right). \quad (71) $$

Similarly, at the even time slot $b$, we have

$$ R^{(1)} \le t_2 I\left(X_1^{(2)}; Y_3^{(2)}\right). \quad (72) $$

From the encoding at the source and (67)-(72), we obtain (9)-(11).

APPENDIX B

Proof of Theorem 2

Codebook Construction:
Let us divide the time slots $b$, $b = 1, 2, \cdots, B+2$, into odd and even time slots. The source generates two codebooks $x_0^{(1)}\left(w^{(b)} \mid w^{(b-1)}, s^{(b-2)}\right)$ and $x_0^{(2)}\left(w^{(b)} \mid w^{(b-1)}, s^{(b-2)}\right)$ of sizes $2^{nR^{(1)}}$ and $2^{nR^{(2)}}$, corresponding to the odd and even time slots, respectively. The first codebook is generated according to the probability $p(x_0^{(1)}, x_2^{(1)}, u_2^{(1)}) = \prod_{i=1}^{t_1 n} p(u_{2,i}^{(1)}) \, p(x_{2,i}^{(1)} \mid u_{2,i}^{(1)}) \, p(x_{0,i}^{(1)} \mid x_{2,i}^{(1)}, u_{2,i}^{(1)})$, and the second codebook is generated according to the probability $p(x_0^{(2)}, x_1^{(2)}, u_1^{(2)}) = \prod_{i=1}^{t_2 n} p(u_{1,i}^{(2)}) \, p(x_{1,i}^{(2)} \mid u_{1,i}^{(2)}) \, p(x_{0,i}^{(2)} \mid x_{1,i}^{(2)}, u_{1,i}^{(2)})$.

On the other hand, relay 2 generates $2^{n r_{\mathrm{Bin}}^{(1)}}$ i.i.d. codewords $u_2^{(1)}$ and $2^{nR^{(2)}}$ i.i.d. codewords $x_2^{(1)}$ according to the probabilities $p(u_2^{(1)}) = \prod_{i=1}^{t_1 n} p(u_{2,i}^{(1)})$ and $p(x_2^{(1)} \mid u_2^{(1)}) = \prod_{i=1}^{t_1 n} p(x_{2,i}^{(1)} \mid u_{2,i}^{(1)})$ at each odd time slot, and relay 1 generates $2^{n r_{\mathrm{Bin}}^{(2)}}$ i.i.d. codewords $u_1^{(2)}$ and $2^{nR^{(1)}}$ i.i.d. codewords $x_1^{(2)}$ according to the probabilities $p(u_1^{(2)}) = \prod_{i=1}^{t_2 n} p(u_{1,i}^{(2)})$ and $p(x_1^{(2)} \mid u_1^{(2)}) = \prod_{i=1}^{t_2 n} p(x_{1,i}^{(2)} \mid u_{1,i}^{(2)})$ at each even time slot, respectively.
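To make the codebook structure concrete, the following Python sketch generates a relay's superposition codebook and the random binning map. It is an illustration only: Gaussian codebooks, the toy block length, rates, and the power split `a` are all assumed values, not quantities taken from the construction above.

```python
import numpy as np

# Illustrative sketch of relay 2's odd-slot codebook: first draw cloud
# centers u2(s) i.i.d. N(0, a*P2), then superimpose a satellite codeword
# for each message, x2(w | s) = u2(s) + v(w), v ~ N(0, (1-a)*P2), so that
# x2 is generated "according to p(x2 | u2)".
rng = np.random.default_rng(0)

n = 8             # (toy) block length t1*n -- assumed value
r_bin = 0.5       # bin-index rate r_Bin^(1) -- assumed value
R2 = 1.0          # message rate R^(2)       -- assumed value
P2, a = 1.0, 0.3  # relay power and (assumed) superposition split

num_bins = 2 ** int(np.ceil(n * r_bin))
num_msgs = 2 ** int(np.ceil(n * R2))

u2 = rng.normal(0.0, np.sqrt(a * P2), size=(num_bins, n))        # u2(s)
v = rng.normal(0.0, np.sqrt((1 - a) * P2), size=(num_msgs, n))   # satellites

def x2(w, s):
    """Superposition codeword x2(w | s) = u2(s) + v(w)."""
    return u2[s] + v[w]

# Random binning: each message index is thrown uniformly into one of the
# 2^(n*r_bin) bins, defining the bin-index map s = f_bin(w).
f_bin = rng.integers(0, num_bins, size=num_msgs)
```

The same template (with the roles of the relays and the slot parities swapped) gives relay 1's even-slot codebook.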
Encoding:

Encoding at the source:
At the odd time slot $b$, the source encodes $w^{(b)} \in \{1, \cdots, 2^{nR^{(1)}}\}$ to $x_0^{(1)}\left(w^{(b)} \mid w^{(b-1)}, s^{(b-2)}\right)$, and at the even time slot $b$, it encodes $w^{(b)} \in \{1, \cdots, 2^{nR^{(2)}}\}$ to $x_0^{(2)}\left(w^{(b)} \mid w^{(b-1)}, s^{(b-2)}\right)$, sending them in the odd and even time slots, respectively.

Encoding at relay 1:
At the even time slot $b$, relay 1 encodes the bin index $s^{(b-2)}$ of the message $w^{(b-2)}$ it has received from relay 2 in the previous time slot to $u_1^{(2)}\left(s^{(b-2)}\right)$. Following that, it encodes $w^{(b-1)}$, which was received from the source in time slot $b-1$, to $x_1^{(2)}\left(w^{(b-1)} \mid s^{(b-2)}\right)$ and sends it.

Encoding at relay 2:
At the odd time slot $b$, relay 2 encodes the bin index $s^{(b-2)}$ of the message $w^{(b-2)}$ it has received from relay 1 in the previous time slot to $u_2^{(1)}\left(s^{(b-2)}\right)$. Following that, it encodes $w^{(b-1)}$, which was received from the source in time slot $b-1$, to $x_2^{(1)}\left(w^{(b-1)} \mid s^{(b-2)}\right)$ and sends it.

Decoding:

Decoding at relay 1:
Knowing $w^{(b-2)}$ and consequently $s^{(b-2)}$, at time slot $b$, relay 1 declares $(\hat{w}^{(b-1)}, \hat{w}^{(b)}) = (w^{(b-1)}, w^{(b)})$ iff there exists a unique pair $(\hat{w}^{(b-1)}, \hat{w}^{(b)})$ such that $\left(x_0^{(1)}\big(\hat{w}^{(b)} \mid \hat{w}^{(b-1)}, s^{(b-2)}\big), x_2^{(1)}\big(\hat{w}^{(b-1)} \mid s^{(b-2)}\big), u_2^{(1)}\big(s^{(b-2)}\big), y_1^{(1)}\right) \in A_\epsilon^{(n)}$. Hence, in order to make the probability of error zero, from the extended MAC capacity region (see [12], [24], [25], and [26]), we have

$$ R^{(1)} \le t_1 I\left(X_0^{(1)}; Y_1^{(1)} \mid X_2^{(1)}, U_2^{(1)}\right), \quad (73) $$
$$ R^{(1)} + R^{(2)} \le t_1 I\left(X_0^{(1)}, X_2^{(1)}; Y_1^{(1)} \mid U_2^{(1)}\right). \quad (74) $$

Decoding at relay 2:
Knowing $w^{(b-2)}$ and consequently $s^{(b-2)}$, at time slot $b$, relay 2 declares $(\hat{w}^{(b-1)}, \hat{w}^{(b)}) = (w^{(b-1)}, w^{(b)})$ iff there exists a unique pair $(\hat{w}^{(b-1)}, \hat{w}^{(b)})$ such that $\left(x_0^{(2)}\big(\hat{w}^{(b)} \mid \hat{w}^{(b-1)}, s^{(b-2)}\big), x_1^{(2)}\big(\hat{w}^{(b-1)} \mid s^{(b-2)}\big), u_1^{(2)}\big(s^{(b-2)}\big), y_2^{(2)}\right) \in A_\epsilon^{(n)}$. Hence, in order to make the probability of error zero, from the extended MAC capacity region (see [12], [24], [25], and [26]), we have

$$ R^{(2)} \le t_2 I\left(X_0^{(2)}; Y_2^{(2)} \mid X_1^{(2)}, U_1^{(2)}\right), \quad (75) $$
$$ R^{(1)} + R^{(2)} \le t_2 I\left(X_0^{(2)}, X_1^{(2)}; Y_2^{(2)} \mid U_1^{(2)}\right). \quad (76) $$

Decoding at the final destination:
Decoding at the final destination can be done either successively or backwardly, as follows.
1) Successive Decoding:
At the end of the odd time slot $b$, the destination first declares the bin index $\hat{s}^{(b-2)} = s^{(b-2)}$ of the message $w^{(b-2)}$ iff there exists a unique $\hat{s}^{(b-2)}$ such that $\left(u_2^{(1)}\big(\hat{s}^{(b-2)}\big), y_3^{(1)}\right) \in A_\epsilon^{(n)}$. Hence, in order to make the probability of error zero, from [12] we have

$$ r_{\mathrm{Bin}}^{(1)} \le t_1 I\left(U_2^{(1)}; Y_3^{(1)}\right). \quad (77) $$

Having decoded the bin index $s^{(b-2)}$ of the message $w^{(b-2)}$, the destination can resolve its uncertainty about the message $w^{(b-2)}$ and declares $\hat{w}^{(b-2)} = w^{(b-2)}$ iff there exists a unique $\hat{w}^{(b-2)}$ such that $\left(x_1^{(2)}\big(\hat{w}^{(b-2)} \mid s^{(b-3)}\big), u_1^{(2)}\big(s^{(b-3)}\big), y_3^{(2)}\right) \in A_\epsilon^{(n)}$. Hence, in order to make the probability of error zero, from [12] we have

$$ R^{(1)} - r_{\mathrm{Bin}}^{(1)} \le t_2 I\left(X_1^{(2)}; Y_3^{(2)} \mid U_1^{(2)}\right). \quad (78) $$

Using the same argument for the even time slot $b$, we have

$$ r_{\mathrm{Bin}}^{(2)} \le t_2 I\left(U_1^{(2)}; Y_3^{(2)}\right), \quad (79) $$
$$ R^{(2)} - r_{\mathrm{Bin}}^{(2)} \le t_1 I\left(X_2^{(1)}; Y_3^{(1)} \mid U_2^{(1)}\right). \quad (80) $$

From (77), (78), (79), and (80), $R^{(1)}$ and $R^{(2)}$ are bounded as follows:

$$ R^{(1)} \le t_2 I\left(X_1^{(2)}; Y_3^{(2)} \mid U_1^{(2)}\right) + t_1 I\left(U_2^{(1)}; Y_3^{(1)}\right), \quad (81) $$
$$ R^{(2)} \le t_1 I\left(X_2^{(1)}; Y_3^{(1)} \mid U_2^{(1)}\right) + t_2 I\left(U_1^{(2)}; Y_3^{(2)}\right). \quad (82) $$

From (73)-(76), (81), and (82), the achievable rate of the BME scheme based on successive decoding is equal to

$$ C_{\mathrm{BME\text{-}succ}}^{\mathrm{low}} = R^{(1)} + R^{(2)} \le \max_{0 \le t_1, t_2,\; t_1 + t_2 = 1} \min\Big( \min\big(t_1 I(X_0^{(1)}; Y_1^{(1)} \mid X_2^{(1)}, U_2^{(1)}),\; t_2 I(X_1^{(2)}; Y_3^{(2)} \mid U_1^{(2)}) + t_1 I(U_2^{(1)}; Y_3^{(1)})\big) + \min\big(t_1 I(X_2^{(1)}; Y_3^{(1)} \mid U_2^{(1)}) + t_2 I(U_1^{(2)}; Y_3^{(2)}),\; t_2 I(X_0^{(2)}; Y_2^{(2)} \mid X_1^{(2)}, U_1^{(2)})\big),\; t_1 I(X_0^{(1)}, X_2^{(1)}; Y_1^{(1)} \mid U_2^{(1)}),\; t_2 I(X_0^{(2)}, X_1^{(2)}; Y_2^{(2)} \mid U_1^{(2)}) \Big). \quad (83) $$
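Once the mutual-information terms in (83) are fixed, the maximization is one-dimensional in the time-sharing parameter. The sketch below carries it out by grid search; the eight constants are placeholders (assumed values, not derived from any channel model).

```python
import numpy as np

# Placeholder mutual-information values (per channel use) for the eight
# terms appearing in (83); these are assumed numbers for illustration.
A1 = 2.0  # I(X0(1); Y1(1) | X2(1), U2(1))
B1 = 1.2  # I(X1(2); Y3(2) | U1(2))
C1 = 0.8  # I(U2(1); Y3(1))
D1 = 1.1  # I(X2(1); Y3(1) | U2(1))
E1 = 0.7  # I(U1(2); Y3(2))
F1 = 1.9  # I(X0(2); Y2(2) | X1(2), U1(2))
S1 = 2.5  # I(X0(1), X2(1); Y1(1) | U2(1))
S2 = 2.4  # I(X0(2), X1(2); Y2(2) | U1(2))

def rate_bme_succ(t1):
    """The inner min of (83) for a given time-sharing fraction t1."""
    t2 = 1.0 - t1
    r1 = min(t1 * A1, t2 * B1 + t1 * C1)   # bound on R(1): (73) and (81)
    r2 = min(t1 * D1 + t2 * E1, t2 * F1)   # bound on R(2): (82) and (75)
    return min(r1 + r2, t1 * S1, t2 * S2)  # sum constraints (74), (76)

# Maximize over 0 <= t1 <= 1 by a fine grid search.
grid = np.linspace(0.0, 1.0, 10001)
best = max(rate_bme_succ(t) for t in grid)
```

For these numbers the sum constraints $t_1 S_1$ and $t_2 S_2$ are the binding ones, so the optimum sits near where they intersect; with other constants a different term of the min can dominate.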
2) Backward Decoding:
After receiving the sequence corresponding to the $(B+2)$'th time slot, the destination starts decoding the messages in a backward manner, i.e., from $w^{(B)}$ back to $w^{(1)}$. At the end of the odd time slot $b$, knowing the value $s^{(b-1)}$ from the received signal in time slot $b+1$, the destination declares $\left(\hat{w}^{(b-1)}, \hat{s}^{(b-2)}\right) = \left(w^{(b-1)}, s^{(b-2)}\right)$ iff there exists a unique pair $\left(\hat{w}^{(b-1)}, \hat{s}^{(b-2)}\right)$ such that $f_{\mathrm{Bin}}^{(2)}\left(\hat{w}^{(b-1)}\right) = s^{(b-1)}$ and $\left(x_2^{(1)}\big(\hat{w}^{(b-1)} \mid \hat{s}^{(b-2)}\big), u_2^{(1)}\big(\hat{s}^{(b-2)}\big), y_3^{(1)}\right) \in A_\epsilon^{(n)}$. Similarly, at the end of the even time slot $b$, knowing the value $s^{(b-1)}$ from the received signal in time slot $b+1$, the destination declares $\left(\hat{w}^{(b-1)}, \hat{s}^{(b-2)}\right) = \left(w^{(b-1)}, s^{(b-2)}\right)$ iff there exists a unique pair $\left(\hat{w}^{(b-1)}, \hat{s}^{(b-2)}\right)$ such that $f_{\mathrm{Bin}}^{(1)}\left(\hat{w}^{(b-1)}\right) = s^{(b-1)}$ and $\left(x_1^{(2)}\big(\hat{w}^{(b-1)} \mid \hat{s}^{(b-2)}\big), u_1^{(2)}\big(\hat{s}^{(b-2)}\big), y_3^{(2)}\right) \in A_\epsilon^{(n)}$. Hence, in order to make the probability of error zero, from [12] we have

$$ r_{\mathrm{Bin}}^{(1)} \le R^{(1)}, \quad (84) $$
$$ r_{\mathrm{Bin}}^{(2)} \le R^{(2)}, \quad (85) $$
$$ R^{(2)} - r_{\mathrm{Bin}}^{(2)} \le t_1 I\left(X_2^{(1)}; Y_3^{(1)} \mid U_2^{(1)}\right), \quad (86) $$
$$ R^{(2)} - r_{\mathrm{Bin}}^{(2)} + r_{\mathrm{Bin}}^{(1)} \le t_1 I\left(X_2^{(1)}, U_2^{(1)}; Y_3^{(1)}\right), \quad (87) $$
$$ R^{(1)} - r_{\mathrm{Bin}}^{(1)} \le t_2 I\left(X_1^{(2)}; Y_3^{(2)} \mid U_1^{(2)}\right), \quad (88) $$
$$ R^{(1)} - r_{\mathrm{Bin}}^{(1)} + r_{\mathrm{Bin}}^{(2)} \le t_2 I\left(X_1^{(2)}, U_1^{(2)}; Y_3^{(2)}\right). \quad (89) $$

Hence, by employing BME and backward decoding, the following rate is achievable subject to the constraints (73)-(76) and (84)-(89):

$$ C_{\mathrm{BME\text{-}back}}^{\mathrm{low}} = R^{(1)} + R^{(2)}. \quad (90) $$

Optimum input distributions
Now, we prove that there exist input probability distributions $p(x_0^{(1)}, x_2^{(1)}, u_2^{(1)})$ and $p(x_0^{(2)}, x_1^{(2)}, u_1^{(2)})$ which maximize (90) and have the following property: $u_2^{(1)}$ is independent of $(x_0^{(1)}, x_2^{(1)})$ and $u_1^{(2)}$ is independent of $(x_0^{(2)}, x_1^{(2)})$. To prove this, consider $p(x_0^{(1)}, x_2^{(1)}, u_2^{(1)})$ and $p(x_0^{(2)}, x_1^{(2)}, u_1^{(2)})$, along with $t_1, t_2$, which maximize (90) subject to the required constraints. Let us define $\hat{p}(x_0^{(1)}, x_2^{(1)}, u_2^{(1)})$ and $\hat{p}(x_0^{(2)}, x_1^{(2)}, u_1^{(2)})$ as

$$ \hat{p}(x_0^{(1)}, x_2^{(1)}, u_2^{(1)}) = p(u_2^{(1)}) \, p(x_0^{(1)}, x_2^{(1)}), \quad (91) $$
$$ \hat{p}(x_0^{(2)}, x_1^{(2)}, u_1^{(2)}) = p(u_1^{(2)}) \, p(x_0^{(2)}, x_1^{(2)}). \quad (92) $$

Now, we show that $\hat{p}(x_0^{(1)}, x_2^{(1)}, u_2^{(1)})$ and $\hat{p}(x_0^{(2)}, x_1^{(2)}, u_1^{(2)})$, along with $t_1, t_2$, achieve at least the same rate as the optimum one. Let us denote the values of mutual information and entropy with respect to the input distributions $p, \hat{p}$ by $I_p, H_p$ and $I_{\hat{p}}, H_{\hat{p}}$, respectively. The right-hand sides of (86)-(89) with respect to $p$ can be upper-bounded by the ones corresponding to $\hat{p}$ as follows:

$$ t_1 I_p\left(X_2^{(1)}; Y_3^{(1)} \mid U_2^{(1)}\right) \overset{(a)}{\le} t_1 I_p\left(X_2^{(1)}; Y_3^{(1)}\right) = t_1 I_{\hat{p}}\left(X_2^{(1)}; Y_3^{(1)}\right), \quad (93) $$
$$ t_1 I_p\left(X_2^{(1)}, U_2^{(1)}; Y_3^{(1)}\right) \overset{(a)}{=} t_1 I_p\left(X_2^{(1)}; Y_3^{(1)}\right) = t_1 I_{\hat{p}}\left(X_2^{(1)}; Y_3^{(1)}\right), \quad (94) $$
$$ t_2 I_p\left(X_1^{(2)}; Y_3^{(2)} \mid U_1^{(2)}\right) \overset{(b)}{\le} t_2 I_p\left(X_1^{(2)}; Y_3^{(2)}\right) = t_2 I_{\hat{p}}\left(X_1^{(2)}; Y_3^{(2)}\right), \quad (95) $$
$$ t_2 I_p\left(X_1^{(2)}, U_1^{(2)}; Y_3^{(2)}\right) \overset{(b)}{=} t_2 I_p\left(X_1^{(2)}; Y_3^{(2)}\right) = t_2 I_{\hat{p}}\left(X_1^{(2)}; Y_3^{(2)}\right), \quad (96) $$

where $(a)$ follows from the fact that $U_2^{(1)} \rightarrow X_2^{(1)} \rightarrow Y_3^{(1)}$ forms a Markov chain and $(b)$ follows from the fact that $U_1^{(2)} \rightarrow X_1^{(2)} \rightarrow Y_3^{(2)}$ forms a Markov chain.
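Steps $(a)$ and $(b)$ can be checked numerically. The sketch below (a randomly generated finite-alphabet chain, purely for illustration; the alphabet sizes and pmfs are arbitrary choices) builds a joint pmf satisfying $U \rightarrow X \rightarrow Y$ and verifies that $I(X;Y \mid U) \le I(X;Y)$ and $I(X,U;Y) = I(X;Y)$:

```python
import numpy as np

def H(p):
    """Entropy (bits) of a pmf given as an array."""
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(1)

# Random finite-alphabet Markov chain U -> X -> Y with |U| = |X| = |Y| = 3:
# joint p(u, x, y) = p(u) p(x | u) p(y | x).
pu = rng.dirichlet(np.ones(3))
px_u = rng.dirichlet(np.ones(3), size=3)   # row u: p(x | u)
py_x = rng.dirichlet(np.ones(3), size=3)   # row x: p(y | x)
p = pu[:, None, None] * px_u[:, :, None] * py_x[None, :, :]  # p[u, x, y]

# Marginals needed for the mutual-information identities.
pxy = p.sum(axis=0)
puy = p.sum(axis=1)
pux = p.sum(axis=2)
px, py = pxy.sum(axis=1), pxy.sum(axis=0)

I_XY = H(px) + H(py) - H(pxy.ravel())                                  # I(X;Y)
I_XY_given_U = H(pux.ravel()) + H(puy.ravel()) - H(pu) - H(p.ravel())  # I(X;Y|U)
I_XU_Y = H(pux.ravel()) + H(py) - H(p.ravel())                         # I(X,U;Y)
```

Indeed, under the chain $U \rightarrow X \rightarrow Y$ one has $I(X;Y \mid U) = I(X;Y) - I(U;Y)$ and $I(X,U;Y) = I(X;Y)$, which is what the two checks confirm.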
Moreover, since under the distribution $\hat{p}$, $u_2^{(1)}$ and $u_1^{(2)}$ are independent of $(x_0^{(1)}, x_2^{(1)})$ and $(x_0^{(2)}, x_1^{(2)})$, respectively, it can easily be verified that the right-hand sides of (93)-(96) are equal to the right-hand sides of (86)-(89) with the input distribution $\hat{p}$, respectively. Hence, by utilizing $\hat{p}$ instead of $p$, the region that satisfies (86)-(89) is enlarged. Now, let us consider the right-hand sides of (73)-(76):

$$ t_1 I_p\left(X_0^{(1)}; Y_1^{(1)} \mid X_2^{(1)}, U_2^{(1)}\right) \overset{(a)}{\le} t_1 I_p\left(X_0^{(1)}; Y_1^{(1)} \mid X_2^{(1)}\right) = t_1 I_{\hat{p}}\left(X_0^{(1)}; Y_1^{(1)} \mid X_2^{(1)}\right), \quad (97) $$
$$ t_1 I_p\left(X_0^{(1)}, X_2^{(1)}; Y_1^{(1)} \mid U_2^{(1)}\right) \overset{(a)}{\le} t_1 I_p\left(X_0^{(1)}, X_2^{(1)}; Y_1^{(1)}\right) = t_1 I_{\hat{p}}\left(X_0^{(1)}, X_2^{(1)}; Y_1^{(1)}\right), \quad (98) $$
$$ t_2 I_p\left(X_0^{(2)}; Y_2^{(2)} \mid X_1^{(2)}, U_1^{(2)}\right) \overset{(b)}{\le} t_2 I_p\left(X_0^{(2)}; Y_2^{(2)} \mid X_1^{(2)}\right) = t_2 I_{\hat{p}}\left(X_0^{(2)}; Y_2^{(2)} \mid X_1^{(2)}\right), \quad (99) $$
$$ t_2 I_p\left(X_0^{(2)}, X_1^{(2)}; Y_2^{(2)} \mid U_1^{(2)}\right) \overset{(b)}{\le} t_2 I_p\left(X_0^{(2)}, X_1^{(2)}; Y_2^{(2)}\right) = t_2 I_{\hat{p}}\left(X_0^{(2)}, X_1^{(2)}; Y_2^{(2)}\right), \quad (100) $$

where $(a)$ follows from the fact that $U_2^{(1)} \rightarrow (X_2^{(1)}, X_0^{(1)}) \rightarrow Y_1^{(1)}$ forms a Markov chain and $(b)$ follows from the fact that $U_1^{(2)} \rightarrow (X_1^{(2)}, X_0^{(2)}) \rightarrow Y_2^{(2)}$ forms a Markov chain. Similarly, we observe that the right-hand sides of (97)-(100) represent the right-hand sides of inequalities (73)-(76) with the input distribution $\hat{p}$. Hence, the region of $(R^{(1)}, R^{(2)})$ that satisfies (73)-(76) and (84)-(89) is enlarged by utilizing the input distribution $\hat{p}$ instead of $p$. This proves that, in the optimum distribution, $u_2^{(1)}$ and $u_1^{(2)}$ can be taken independent of the channel inputs.

Simplifying the achievable rate
As we can assume that the input distributions are of the form (91) and (92), the achievable rate can be simplified as follows:

$$ C_{\mathrm{BME\text{-}back}}^{\mathrm{low}} = R^{(1)} + R^{(2)} \le \max_{0 \le t_1, t_2,\; t_1+t_2=1} \min\left( t_1 I\left(X_0^{(1)}, X_2^{(1)}; Y_1^{(1)}\right),\; t_2 I\left(X_0^{(2)}, X_1^{(2)}; Y_2^{(2)}\right) \right), \quad (101) $$

subject to

$$ r_{\mathrm{Bin}}^{(1)} \le R^{(1)}, \quad (102) $$
$$ r_{\mathrm{Bin}}^{(2)} \le R^{(2)}, \quad (103) $$
$$ R^{(1)} \le t_1 I\left(X_0^{(1)}; Y_1^{(1)} \mid X_2^{(1)}\right), \quad (104) $$
$$ R^{(2)} \le t_2 I\left(X_0^{(2)}; Y_2^{(2)} \mid X_1^{(2)}\right), \quad (105) $$
$$ R^{(2)} - r_{\mathrm{Bin}}^{(2)} + r_{\mathrm{Bin}}^{(1)} \le t_1 I\left(X_2^{(1)}; Y_3^{(1)}\right), \quad (106) $$
$$ R^{(1)} - r_{\mathrm{Bin}}^{(1)} + r_{\mathrm{Bin}}^{(2)} \le t_2 I\left(X_1^{(2)}; Y_3^{(2)}\right), \quad (107) $$

with the input distributions

$$ p(x_0^{(1)}, x_2^{(1)}) = p(x_2^{(1)}) \, p(x_0^{(1)} \mid x_2^{(1)}), \qquad p(x_0^{(2)}, x_1^{(2)}) = p(x_1^{(2)}) \, p(x_0^{(2)} \mid x_1^{(2)}). $$
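The constraint set (101)-(107) can be explored numerically. In the sketch below, the six mutual-information terms (already scaled by $t_1, t_2$) are placeholder constants, assumed purely for illustration; a brute-force grid search over the rate quadruple is compared against the single-letter minimum that appears in (108):

```python
import numpy as np

# Placeholder values (assumed, not from any channel model) for the
# t-scaled mutual-information terms in (101)-(107).
Isum1 = 1.3   # t1 * I(X0(1), X2(1); Y1(1))
Isum2 = 1.2   # t2 * I(X0(2), X1(2); Y2(2))
Ia1 = 0.9     # t1 * I(X0(1); Y1(1) | X2(1))
Ia2 = 0.8     # t2 * I(X0(2); Y2(2) | X1(2))
Ic1 = 0.5     # t1 * I(X2(1); Y3(1))
Ic2 = 0.45    # t2 * I(X1(2); Y3(2))

EPS = 1e-9    # numerical slack for the grid comparisons

def max_sum_rate(step=0.05, top=1.5):
    """Brute-force maximization of R(1) + R(2) over a grid of quadruples
    (R1, R2, rb1, rb2) subject to (101)-(107)."""
    g = np.arange(0.0, top + step / 2, step)
    best = 0.0
    for R1 in g:
        if R1 > Ia1 + EPS:                                   # (104)
            break
        for R2 in g:
            if R2 > Ia2 + EPS or R1 + R2 > min(Isum1, Isum2) + EPS:
                break                                        # (105), (101)
            for rb1 in g:
                if rb1 > R1 + EPS:                           # (102)
                    break
                for rb2 in g:
                    if rb2 > R2 + EPS:                       # (103)
                        break
                    if (R2 - rb2 + rb1 <= Ic1 + EPS          # (106)
                            and R1 - rb1 + rb2 <= Ic2 + EPS):  # (107)
                        best = max(best, R1 + R2)
    return best

best = max_sum_rate()
closed_form = min(Isum1, Isum2, Ia1 + Ia2, Ic1 + Ic2)  # the min in (108)
```

For these constants the grid maximum coincides with the closed-form minimum, illustrating numerically the equivalence established next.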
Now, we show that (101)-(107) is equivalent to

$$ C_{\mathrm{BME\text{-}back}}^{\mathrm{low}} \le \max_{0 \le t_1, t_2,\; t_1+t_2=1} \min\Big( t_1 I\big(X_0^{(1)}, X_2^{(1)}; Y_1^{(1)}\big),\; t_2 I\big(X_0^{(2)}, X_1^{(2)}; Y_2^{(2)}\big),\; t_1 I\big(X_0^{(1)}; Y_1^{(1)} \mid X_2^{(1)}\big) + t_2 I\big(X_0^{(2)}; Y_2^{(2)} \mid X_1^{(2)}\big),\; t_1 I\big(X_2^{(1)}; Y_3^{(1)}\big) + t_2 I\big(X_1^{(2)}; Y_3^{(2)}\big) \Big). \quad (108) $$

First, it is easy to verify that (101)-(107) imply (108). Now, in order to prove that the converse is also true, we show that for every possible rate $r$ satisfying (108), there exists a quadruple $\left(R^{(1)}, R^{(2)}, r_{\mathrm{Bin}}^{(1)}, r_{\mathrm{Bin}}^{(2)}\right)$ such that $R^{(1)} + R^{(2)} = r$, the quadruple $\left(R^{(1)}, R^{(2)}, r_{\mathrm{Bin}}^{(1)}, r_{\mathrm{Bin}}^{(2)}\right)$ satisfies (101)-(107), and moreover at least one of the bin rates is equal to zero, i.e., $r_{\mathrm{Bin}}^{(1)} = 0$ or $r_{\mathrm{Bin}}^{(2)} = 0$.

Let us define $R^{(1)} \triangleq \min\left(r,\; t_1 I\big(X_0^{(1)}; Y_1^{(1)} \mid X_2^{(1)}\big)\right)$ and $R^{(2)} \triangleq r - R^{(1)}$. As $r$ satisfies (108), we conclude that $(R^{(1)}, R^{(2)})$ satisfies (101), (104), and (105). Furthermore, as $R^{(1)} + R^{(2)} = r \le t_1 I\big(X_2^{(1)}; Y_3^{(1)}\big) + t_2 I\big(X_1^{(2)}; Y_3^{(2)}\big)$, we conclude that either $R^{(1)} \le t_2 I\big(X_1^{(2)}; Y_3^{(2)}\big)$ or $R^{(2)} \le t_1 I\big(X_2^{(1)}; Y_3^{(1)}\big)$. For the sake of symmetry, let us assume that the first case has occurred, i.e., $R^{(1)} \le t_2 I\big(X_1^{(2)}; Y_3^{(2)}\big)$. Now, we define $r_{\mathrm{Bin}}^{(1)} \triangleq 0$ and $r_{\mathrm{Bin}}^{(2)} \triangleq \max\left(0,\; R^{(2)} - t_1 I\big(X_2^{(1)}; Y_3^{(1)}\big)\right)$. Obviously, (102), (103), and (106) are valid. Considering (107), we have

$$ R^{(1)} - r_{\mathrm{Bin}}^{(1)} + r_{\mathrm{Bin}}^{(2)} = R^{(1)} + \max\left(0,\; r - R^{(1)} - t_1 I\big(X_2^{(1)}; Y_3^{(1)}\big)\right) \overset{(a)}{\le} t_2 I\big(X_1^{(2)}; Y_3^{(2)}\big), \quad (109) $$

where $(a)$ follows from the facts that $r \le t_1 I\big(X_2^{(1)}; Y_3^{(1)}\big) + t_2 I\big(X_1^{(2)}; Y_3^{(2)}\big)$ and $R^{(1)} \le t_2 I\big(X_1^{(2)}; Y_3^{(2)}\big)$. Hence, (107) is also valid. The second case, in which $R^{(2)} \le t_1 I\big(X_2^{(1)}; Y_3^{(1)}\big)$, can be dealt with in a similar manner.

Hence, from the above argument, the achievable rate of the BME scheme with backward decoding can be simplified as follows:

$$ C_{\mathrm{BME\text{-}back}}^{\mathrm{low}} \le \max_{0 \le t_1, t_2,\; t_1+t_2=1} \min\Big( t_1 I\big(X_0^{(1)}, X_2^{(1)}; Y_1^{(1)}\big),\; t_2 I\big(X_0^{(2)}, X_1^{(2)}; Y_2^{(2)}\big),\; t_1 I\big(X_0^{(1)}; Y_1^{(1)} \mid X_2^{(1)}\big) + t_2 I\big(X_0^{(2)}; Y_2^{(2)} \mid X_1^{(2)}\big),\; t_1 I\big(X_2^{(1)}; Y_3^{(1)}\big) + t_2 I\big(X_1^{(2)}; Y_3^{(2)}\big) \Big), \quad (110) $$

with the input distributions

$$ p(x_0^{(1)}, x_2^{(1)}) = p(x_2^{(1)}) \, p(x_0^{(1)} \mid x_2^{(1)}), \qquad p(x_0^{(2)}, x_1^{(2)}) = p(x_1^{(2)}) \, p(x_0^{(2)} \mid x_1^{(2)}). $$

REFERENCES

[1] E. C.
van der Meulen, “Three-terminal communication channels,” Adv. Appl. Prob., vol. 3, pp. 120-154, 1971.
[2] T. M. Cover and A. El Gamal, “Capacity theorems for the relay channel,” IEEE Transactions on Information Theory, vol. 25, no. 5, pp. 572-584, Sep. 1979.
[3] B. Schein and R. G. Gallager, “The Gaussian parallel relay network,” in Proc. IEEE Int. Symp. Information Theory, Sorrento, Italy, Jun. 2000, p. 22.
[4] B. E. Schein, “Distributed coordination in network information theory,” Ph.D. thesis, Massachusetts Institute of Technology, Sep. 2001.
[5] L. L. Xie and P. R. Kumar, “An achievable rate for the multiple-level relay channel,” IEEE Transactions on Information Theory, vol. 51, no. 4, pp. 1348-1358, Apr. 2005.
[6] M. Gastpar, G. Kramer, and P. Gupta, “The multiple-relay channel: Coding and antenna-clustering capacity,” in Proc. Int. Symp. Information Theory (ISIT'02), June 2002.
[7] G. Kramer, M. Gastpar, and P. Gupta, “Cooperative strategies and capacity theorems for relay networks,” IEEE Transactions on Information Theory, vol. 51, no. 9, pp. 3037-3063, Sep. 2005.
[8] G. Kramer, M. Gastpar, and P. Gupta, “Capacity theorems for wireless relay channels,” in Proc. Allerton Conf. Communications, Control and Computing, Monticello, IL, Oct. 2003.
[9] A. Sanderovich, S. Shamai, Y. Steinberg, and G. Kramer, “Communication via decentralized processing,” in Proc. Int. Symp. Information Theory (ISIT'05), Sep. 2005.
[10] I. Maric and R. D. Yates, “Forwarding strategies for Gaussian parallel-relay networks,” in Proc. 38th Annu. Conf. Information Sciences and Systems (CISS'04), Princeton, NJ, Mar. 2004.
[11] P. Razaghi and W. Yu, “Parity forwarding for multiple-relay networks,” in Proc. Int. Symp. Information Theory (ISIT'06), July 2006, pp. 1678-1682.
[12] T. Cover and J. Thomas, Elements of Information Theory, Wiley, New York, first edition, 1991.
[13] M. A. Khojastepour, A. Sabharwal, and B. Aazhang, “On capacity of Gaussian ‘cheap’ relay channel,” in Proc. IEEE Global Communications Conference (GLOBECOM 2003), San Francisco, CA, Dec. 1-5, 2003.
[14] M. A. Khojastepour and B. Aazhang, “Cheap relay channels: A unifying approach to time and frequency division relaying,” in Proc. 42nd Annu. Allerton Conf. Communication, Control, and Computing, Sep. 29-Oct. 1, 2004.
[15] S. Zahedi and A. El Gamal, “Minimum energy communication over a relay channel,” in Proc. Int. Symp. Information Theory (ISIT'03), p. 344, June 29-July 4, 2003.
[16] S. Zahedi, M. Mohseni, and A. El Gamal, “On the capacity of AWGN relay channels with linear relaying functions,” in Proc. Int. Symp. Information Theory (ISIT'04), June 27-July 2, 2004.
[17] A. El Gamal and S. Zahedi, “Capacity of a class of relay channels with orthogonal components,” IEEE Transactions on Information Theory, vol. 51, no. 5, pp. 1815-1817, May 2005.
[18] A. Host-Madsen and J. Zhang, “Capacity bounds and power allocation for wireless relay channels,” IEEE Transactions on Information Theory, vol. 51, no. 6, pp. 2020-2040, June 2005.
[19] Y. Liang and V. V. Veeravalli, “Gaussian orthogonal relay channels: Optimal resource allocation and capacity,” IEEE Transactions on Information Theory, vol. 51, no. 9, pp. 3284-3289, Sep. 2005.
[20] L. L. Xie and P. R. Kumar, “A network information theory for wireless communication: Scaling laws and optimal operation,” IEEE Transactions on Information Theory, vol. 50, no. 5, pp. 748-767, May 2004.
[21] P. Gupta and P. R. Kumar, “Towards an information theory of large networks: An achievable rate region,” IEEE Transactions on Information Theory, vol. 49, no. 8, pp. 1877-1894, Aug. 2003.
[22] M. Gastpar, “To code or not to code,” Ph.D. thesis, Swiss Federal Institute of Technology (EPFL), Lausanne, Switzerland, Nov. 2002.
[23] M. Gastpar and M. Vetterli, “On the asymptotic capacity of Gaussian relay networks,” in Proc. Int. Symp. Information Theory (ISIT'02), June 2002.
[24] D. Slepian and J. K. Wolf, “A coding theorem for multiple access channels with correlated sources,” Bell Syst. Tech. J., vol. 52, pp. 1037-1076, 1973.
[25] T. S. Han, “The capacity region of a general multiple-access channel with certain correlated sources,” Inform. Contr., vol. 40, no. 1, pp. 37-60, 1979.
[26] V. Prelov, “Transmission over a multiple-access channel with a special source hierarchy,” Problemy Peredachi Informatsii, vol. 20, pp. 3-10, 1984; English translation, pp. 233-239, 1985.
[27] M. A. Khojastepour, A. Sabharwal, and B. Aazhang, “Bounds on achievable rates for general multi-terminal networks with practical constraints,” in Proc. Information Processing in Sensor Networks: Second International Workshop (IPSN 2003), Palo Alto, CA, USA, Apr. 22-23, 2003.
[28] M. H. M. Costa, “Writing on dirty paper,” IEEE Transactions on Information Theory, vol. 29, no. 3, pp. 439-441, May 1983.
[29] B. Rankov and A. Wittneben, “Spectral efficient protocols for nonregenerative half-duplex relaying,” in Proc. 43rd Allerton Conf. Communication, Control and Computing, Monticello, IL, USA, Oct. 2005.
[30] B. Rankov and A. Wittneben, “Spectral efficient signaling for half-duplex relay channels,” in Proc. Asilomar Conf. Signals, Systems, and Computers, Pacific Grove, CA, Nov. 2005.
[31] F. Xue and S. Sandhu, “Cooperation in a half-duplex Gaussian diamond relay channel,” IEEE Transactions on Information Theory, vol. 53, no. 10, pp. 3806-3814, Oct. 2007.
[32] W. Chang, S.-Y. Chung, and Y. H. Lee, “Capacity bounds for alternating two-path relay channels,” in Proc. Allerton Conf. Communications, Control and Computing, Monticello, IL, Oct. 2007.
[33] S. Yang and J.-C. Belfiore, “Towards the optimal amplify-and-forward cooperative diversity scheme,” IEEE Transactions on Information Theory, vol. 53, no. 9, pp. 3114-3126, Sep. 2007.
[34] K. Azarian, H. El Gamal, and P. Schniter, “On the achievable diversity-multiplexing tradeoff in half-duplex cooperative channels,” IEEE Transactions on Information Theory, vol. 51, no. 12, pp. 4152-4172, Dec. 2005.
[35] P. Mitran, H. Ochiai, and V. Tarokh, “Space-time diversity enhancements using collaborative communications,” IEEE Transactions on Information Theory, vol. 51, no. 6, pp. 2041-2057, June 2005.
[36] Y. Fan, C. Wang, J. S. Thompson, and H. V. Poor, “Recovering multiplexing loss through successive relaying using simple repetition coding,” IEEE Transactions on Wireless Communications,