The Gaussian Many-to-One Interference Channel with Confidential Messages
Xiang He Aylin Yener
Wireless Communications and Networking Laboratory
Electrical Engineering Department
The Pennsylvania State University, University Park, PA 16802
[email protected] [email protected]
Abstract—We investigate the K-user many-to-one interference channel with confidential messages, in which the K-th user experiences interference from all other K − 1 users and is at the same time treated as an eavesdropper with respect to all the messages of these users. We derive achievable rates and an upper bound on the sum rate for this channel, and show that the gap between the achievable sum rate and its upper bound is log(K − 1) bits per channel use under very strong interference, when the interfering users have equal power constraints and interfering link channel gains. The main contributions of this work are: (i) nested lattice codes are shown to provide secrecy when interference is present, (ii) a secrecy sum rate upper bound is found for the strong interference regime, and (iii) it is proved that under very strong interference and a symmetric setting, the gap between the achievable sum rate and the upper bound is constant with respect to the transmission powers.

I. INTRODUCTION
In a wireless environment, interference is always present. Traditionally, interference is viewed as a harmful physical phenomenon that should be avoided. Yet, from the secrecy perspective, if interference is more harmful to an eavesdropper, it can be a resource that protects confidential messages. To fully appreciate and evaluate the potential benefit of interference to secrecy, the fundamental model to study is the interference channel with confidential messages. Indeed, this model with two users has been studied extensively to date, e.g., [1]–[6]. The case with more than two users, by comparison, is not well explored. Difficulties in solving the K-user case, K ≥ 3, exist in both the achievability and the converse. For achievability, there is no known scheme for the strong interference regime. The strong interference scenario is usually dismissed for the two-user case, since the achievable secrecy rates are much smaller than those achievable in the weak interference regime [2]. In contrast, the K-user strong interference case is quite different, because the K − 1 interfering users can in fact protect each other in the strong interference regime, and a substantial secrecy rate can be achieved. Conventional wisdom says that under strong interference, the receiver should remove the interference before decoding the intended message. Yet, in secrecy problems, we face the question of how to remove the interference when the receiver is not supposed to decode it. This problem is addressed in [7] for the case where all links experience i.i.d. fading under a continuous distribution, where interference alignment in the temporal domain leads to achievable rates. If the channel is static, however, this method is not applicable and new methods are needed.

In this respect, progress on the interference channel without secrecy constraints points to the use of lattice codes, which is essentially interference alignment in signal space.
This approach allows decoding the sum of the interference without knowing each of its components. Notable results include [8], where lattice codes are used for interference alignment over a many-to-one Gaussian interference channel. The same idea also applies to a fully connected interference channel [9], [10]. In this work, we focus on the Gaussian many-to-one interference channel first studied in [8], in an effort to investigate the effect of interference in the context of secrecy. We use lattice codes to achieve secrecy for this model and use the tool first introduced in [11], which computes secrecy rates when the lattice code has a nested structure [12]. Notably, the structure of the lattice we use differs from that used for interference channels without secrecy constraints [8]–[10], and accordingly so does its error probability analysis [12].

For the converse, known results are limited to the case where the eavesdropper observes a weaker channel than the legitimate receiver [3], [4]. The upper bound from [1] is general, yet difficult to evaluate for the Gaussian case due to the presence of auxiliary random variables. While the upper bound in [2] is applicable to the strong interference case, it is shown therein to be quite loose for strong interference, mainly because the genie information used in deriving the bound provides too much information to the legitimate receiver. Another contribution of this work is a good sum rate upper bound for the many-to-one interference channel under strong interference.

Under very strong interference, we show that the gap between our upper bound and our achievable sum rate is log(K − 1) bits under certain uniform interference conditions. We observe that in this setting, for fixed transmission power P, the cost of the secrecy constraints per user diminishes as the number of users K → ∞.
This means that as the number of users gets large, the secrecy constraints induce a negligible rate penalty for each user, i.e., secrecy comes for free.

The following notation is used throughout the paper: C(x) = 0.5 log(1 + x). A_{1,...,K} represents the set {A_1, A_2, ..., A_K}. ⊕ Σ_{i=1}^n A_i is used as shorthand for A_1 ⊕ ... ⊕ A_n, and R_sum for Σ_{i=1}^K R_i.

II. PRELIMINARIES
In this section, we provide the preliminaries on nested lattice codes, which will be useful in deriving the achievable rates in Section IV. Let Λ denote a lattice in R^N [12], i.e., a set of points that forms a group under real vector addition. The modulus operation x mod Λ is defined as x mod Λ = x − arg min_{y ∈ Λ} d(x, y), where d(x, y) is the Euclidean distance between x and y. The fundamental region V(Λ) of a lattice is defined as the set {x : x mod Λ = x}.

Let t_1, t_2, ..., t_K be K points taken from V(Λ). Then we have the following representation theorem:

Theorem 1: Σ_{k=1}^K t_k is uniquely determined by {T, (Σ_{k=1}^K t_k) mod Λ}, where T is an integer such that 0 ≤ T ≤ K^N.

Remark 1:
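To make the representation theorem concrete, here is a small numerical sketch (an illustration, not from the paper; the integer lattice Z^N and all numbers are assumptions): the sum of K points of the fundamental region is recovered exactly from its mod-Λ residue plus an integer lattice shift whose coordinates take at most K values each, which is what the index T enumerates.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 4, 3  # dimensions, number of points

def mod_lattice(x):
    """x mod Z^N: subtract the nearest integer lattice point."""
    return x - np.round(x)

t = rng.uniform(-0.5, 0.5, size=(K, N))  # K points in the fundamental region
s = t.sum(axis=0)                        # true sum
residue = mod_lattice(s)                 # s mod Lambda
shift = np.round(s).astype(int)          # lattice part removed by the modulus

# Each coordinate of `shift` takes at most K values, so the pair
# (shift, residue), i.e. (T, s mod Lambda), determines s exactly.
s_rec = residue + shift
assert np.allclose(s_rec, s)
assert np.all(np.abs(shift) <= K // 2)
print("reconstructed sum matches:", np.allclose(s_rec, s))
```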
The theorem is a purely algebraic result and does not rely on the statistics of t_{1,...,K}. The case K = 2 was proved in [11]. The proof here is similar and is omitted due to the space limit. For K = 2, Theorem 1 implies that the modulus operation loses at most one bit per dimension of information if t_1, t_2 ∈ V(Λ).

III. SYSTEM MODEL

Fig. 1. Many-to-one Gaussian interference channel (number of users K = 3).
We consider the many-to-one Gaussian interference channel [8] shown in Figure 1. The average power constraint for node S_i is P_i. Z_i, i = 1, ..., K, are independent Gaussian random variables with zero mean and unit variance. The channel gain of the link between S_i and D_i is unity. The channel gain between S_i and D_K is √a_i.

Node S_i sends a message W_i to node D_i, while keeping it secret from the other receivers. Hence, for W_{1,...,K−1}, node D_K is viewed as an eavesdropper. Let the signal received by D_K over n channel uses be Y_K^n. The corresponding secrecy constraint is given by:

lim_{n→∞} (1/n) H(W_{1,...,K−1} | Y_K^n) = lim_{n→∞} (1/n) H(W_{1,...,K−1})   (1)

IV. ACHIEVABLE RATES
Without loss of generality, we assume there is a j such that

a_j P_j ≤ a_i P_i, ∀i   (2)

Theorem 2:
Let K ≥ 3. Define P_min = min{P_1, ..., P_{K−1}}. If

a_j > max{ ((P_K + 1)/P_j) ((K−2)/(K−1) + P_min), (P_K + 1)/P_j }   (3)

then the following sum secrecy rate is achievable:

R_sum = [(K−2) R_min − log(K−1)]^+ + R_K   (4)

where R_min = C(P_min) and R_K = C(P_K).

Proof:
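As a sanity check on Theorem 2, the very-strong-interference condition (3) and the sum rate (4) can be evaluated numerically in a symmetric configuration (an illustrative sketch; all parameter values below are assumptions, and the constants follow the statement of the theorem as given above):

```python
import numpy as np

def C(x):
    """C(x) = 0.5 log2(1 + x), in bits per channel use."""
    return 0.5 * np.log2(1.0 + x)

K = 10
P = np.full(K, 100.0)       # symmetric power constraints (example)
a = np.full(K - 1, 500.0)   # interfering link gains (example; deliberately large)
P_min = P[:K-1].min()
j = np.argmin(a * P[:K-1])  # user attaining the minimum a_i P_i, as in (2)

# condition (3): very strong interference
thresh = max((P[K-1] + 1) / P[j] * ((K - 2) / (K - 1) + P_min),
             (P[K-1] + 1) / P[j])
assert a[j] > thresh

# achievable sum secrecy rate (4)
R_sum = max((K - 2) * C(P_min) - np.log2(K - 1), 0.0) + C(P[K-1])
print(round(R_sum, 2))
```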
Let (Λ, Λ_c) denote a nested lattice structure in R^N, where Λ_c is the coarse lattice. Node S_i, i = 1, ..., K, constructs its input to the channel over N channel uses, X_i^N, as follows: the codebook has rate R_i and is composed of points t_i ∈ Λ_i ∩ V(Λ_{c,i}). The first K − 1 users use the same lattice; hence we require R_i ≡ R, Λ_i ≡ Λ, Λ_{c,i} ≡ Λ_c for i = 1, ..., K − 1. Let d_i be the dithering noise, which is uniformly distributed over V(Λ_{c,i}). We assume the lattice is scaled properly such that

(1/(N ∫_{V(Λ_{c,i})} dx)) ∫_{V(Λ_{c,i})} ||x||² dx = 1   (5)

Let P = a_j P_j, where j is defined in (2). Define x ⊕ y as x ⊕ y = (x + y) mod Λ_c. Further, define U_i^N and X_i^N as:

U_i^N = t_i^N ⊕ d_i^N, i = 1, ..., K − 1   (6)
U_K^N = (t_K^N + d_K^N) mod Λ_{c,K}   (7)
X_i^N = (√P/√a_i) U_i^N, i = 1, ..., K − 1,  X_K^N = √P_K U_K^N   (8)

In order for D_i, i = 1, ..., K − 1, to correctly decode t_i, based on [12, Theorem 5], the probability of decoding error goes to zero as N → ∞ if

R ≤ C(P_i), i = 1, ..., K − 1   (9)

The signal received by D_K over N channel uses is given by

Y_K^N = √P (Σ_{i=1}^{K−1} U_i^N) + √P_K U_K^N + Z_K^N   (10)

Node D_K decodes the interference first: it selects a constant α and computes Ŷ_K^N as shown below [12]. Let γ = √(P_K/P) and Z'^N_K = Z_K^N/√P. Then

Ŷ_K^N = ((α/√P) Y_K^N − Σ_{i=1}^{K−1} d_i^N) mod Λ_c   (11)
      = (α(Σ_{i=1}^{K−1} U_i^N + γ U_K^N + Z'^N_K) − Σ_{i=1}^{K−1} d_i^N) mod Λ_c   (12)
      = (Σ_{i=1}^{K−1} t_i^N + (α − 1) Σ_{i=1}^{K−1} U_i^N + α(γ U_K^N + Z'^N_K)) mod Λ_c   (13)

α is chosen so that the variance per dimension of the effective noise term

Z_eff^N = (α − 1)(Σ_{i=1}^{K−1} U_i^N) + α(γ U_K^N + Z'^N_K)   (14)

is minimized. Under the optimal α, the effective noise variance is P_X P_N/(P_X + P_N), where P_X = γ² + 1/P = (P_K + 1)/P and P_N = K − 1.

Clearly the effective noise Z_eff^N is not Gaussian. However, U_i^N can be approximated with a Gaussian distribution as shown below [12, (200)]:

f_{U_i^N}(x) ≤ e^{N ε(Λ_{c,i})} f_{O_i^N}(x)   (15)

where O_i ~ N(0, σ_i² I), i = 1, ..., K, and σ_i² is the average power per dimension of a random variable uniformly distributed over the smallest ball covering V(Λ_{c,i}). ε(Λ_{c,i}) is defined as [12, (67)]:

ε(Λ_{c,i}) = log(R_{u,i}/R_{l,i}) + (1/2) log(2πe G*_N) + 1/N   (16)

where R_{u,i} and R_{l,i} are the covering radius and the effective radius of Λ_{c,i}, respectively. G*_N is the normalized second moment of an N-dimensional sphere and converges to 1/(2πe) as N → ∞. The lattice is designed to be good for covering; hence R_{u,i}/R_{l,i} → 1 as N → ∞. σ_i² is bounded as [12, Lemma 6]:

N/(N + 2) ≤ σ_i² ≤ (R_{u,i}/R_{l,i})²   (17)

Note that the approximation property in (15) is invariant under scaling. This means that for any c > 0 we have

f_{c U_i^N}(x^N) ≤ e^{N ε(Λ_{c,i})} f_{c O_i^N}(x^N)   (18)

In addition, for any two independent random variables U_1^N, U_2^N that have the approximation property given by (15), the probability density of their sum can be approximated as

f_{U_1^N + U_2^N}(x^N) ≤ e^{N ε(Λ_{c,1}) + N ε(Λ_{c,2})} f_{O_1^N + O_2^N}(x^N)   (19)

Define Z̃^N as

Z̃^N = (1 − α)(Σ_{i=1}^{K−1} O_i^N) + α(γ O_K^N + Z'^N_K)   (20)

Based on the two properties described above, the effective noise can be approximated by Z̃^N as follows:

f_{Z_eff^N}(x) ≤ e^{(K−1) N ε(Λ_c) + N ε(Λ_{c,K})} f_{Z̃^N}(x)   (21)

Node D_K attempts to decode ⊕ Σ_{i=1}^{K−1} t_i. The approximation in (21) enables us to apply the analysis in [12, Theorem 5]: the probability of decoding error goes to 0 as N → ∞ when

R ≤ 0.5 log( (P_X P_N/(P_X + P_N))^{-1} ) = 0.5 log( 1/(K−1) + P/(P_K + 1) )   (22)

Here the noise variance in [12, Lemma 6] corresponds to P_X P_N/(P_X + P_N), while the signal power corresponds to the average power per dimension of U_i, which is one.

After subtracting the decoded interference, the remainder of the received signal is

(γ U_K^N + Z'^N_K) mod Λ_c = γ U_K^N ⊕ Z'^N_K   (23)

We next show that if

P_K + 1 < P   (24)

then this signal can be approximated by

γ U_K^N + Z'^N_K   (25)

by which we mean:

lim_{N→∞} Pr( (γ U_K^N ⊕ Z'^N_K) ≠ (γ U_K^N + Z'^N_K) ) = 0   (26)

As N → ∞, γ U_K^N + Z'^N_K can be approximated by γ O_K^N + Z'^N_K, such that

Pr( γ U_K^N + Z'^N_K ∉ V(Λ_c) ) ≤ e^{N ε(Λ_{c,K})} Pr( γ O_K^N + Z'^N_K ∉ V(Λ_c) )   (27)

Let μ = (γ² + 1/P)^{-1} = P/(P_K + 1). Because the shaping lattice is Poltyrev-good [12], if μ > 1, we have

Pr( γ O_K^N + Z'^N_K ∉ V(Λ_c) ) ≤ e^{−N(E_P(μ) − o_N(1))}   (28)

where E_P(μ) is the Poltyrev exponent defined in [12, (56)]. Since E_P(μ) is positive for μ > 1, which is guaranteed by (24), we have the approximation given in (25). Node D_K then tries to decode t_K from (25). Based on [12, Theorem 5], the probability of decoding error goes to zero as N → ∞ if

R_K < C(P_K)   (29)

In summary, there are three types of error events at the destination:
1) E_1: D_K incorrectly decodes the modulus sum of the interference.
2) E_2: E_1 does not occur, and (23) does not equal (25).
3) E_3: E_1, E_2 do not occur, and D_K incorrectly decodes the lattice point t_K^N after subtracting the interference.

If (22), (24), (29) hold, then

lim_{N→∞} Pr( ∪_{i=1}^3 E_i ) ≤ lim_{N→∞} Σ_{i=1}^3 Pr(E_i) = 0   (30)

Also, (9) must be met in order for t_i to be correctly decoded at D_i, i = 1, ..., K − 1.

We next bound the mutual information leaked to the eavesdropper as follows:

H(t^N_{1,...,K−1} | Y_K^N, d_i^N, i = 1, ..., K)   (31)
≥ H(t^N_{1,...,K−1} | Y_K^N, X_K^N, Z_K^N, d_i^N, i = 1, ..., K)   (32)
= H(t^N_{1,...,K−1} | Σ_{i=1}^{K−1} U_i^N, d_i^N, i = 1, ..., K)   (33)

Let T be the integer in Theorem 1, which is used to recover Σ_{i=1}^{K−1} U_i^N from ⊕ Σ_{i=1}^{K−1} U_i^N, with 0 ≤ T ≤ (K−1)^N.
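The choice of α above is the standard MMSE scaling. A quick numerical sketch (illustrative values, assuming unit-power U_i as in (5)) confirms that α = P_N/(P_N + P_X) minimizes the per-dimension variance of the effective noise (14) and attains P_X P_N/(P_X + P_N):

```python
import numpy as np

K, P, P_K = 10, 100.0, 50.0  # example parameters (assumptions)
P_N = K - 1.0                # power of the sum of K-1 unit-power interference terms
P_X = (P_K + 1.0) / P        # gamma^2 + 1/P, power of the remaining noise terms

def eff_var(alpha):
    # variance per dimension of (alpha-1)*sum U_i + alpha*(gamma U_K + Z')
    return (alpha - 1.0) ** 2 * P_N + alpha ** 2 * P_X

alpha_star = P_N / (P_N + P_X)  # MMSE choice of alpha
grid = np.linspace(0.0, 1.5, 100001)
# the closed-form minimizer beats every point on a fine grid ...
assert eff_var(alpha_star) <= eff_var(grid).min() + 1e-9
# ... and attains exactly P_X P_N / (P_X + P_N)
assert np.isclose(eff_var(alpha_star), P_X * P_N / (P_X + P_N))
print(round(eff_var(alpha_star), 4))
```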
Then (33) becomes:

H(t^N_{1,...,K−1} | ⊕ Σ_{i=1}^{K−1} U_i^N, T, d_i^N, i = 1, ..., K)   (34)
= H(t^N_{1,...,K−1} | ⊕ Σ_{i=1}^{K−1} t_i^N, T)   (35)
≥ H(t^N_{1,...,K−1} | ⊕ Σ_{i=1}^{K−1} t_i^N) − H(T)   (36)

The first term in (36) can be bounded as follows:

H(t^N_{1,...,K−1} | ⊕ Σ_{i=1}^{K−1} t_i^N) = Σ_{j=1}^{K−1} H(t_j^N | t^N_{1,...,j−1}, ⊕ Σ_{i=1}^{K−1} t_i^N)   (37)
= Σ_{j=1}^{K−1} H(t_j^N | ⊕ Σ_{i=j}^{K−1} t_i^N) = Σ_{j=1}^{K−2} H(t_j^N) = (K−2) N R   (38)

Hence the mutual information leaked to the eavesdropper is bounded as:

I(t^N_{1,...,K−1}; Y_K^N, d_i^N, i = 1, ..., K) ≤ N(R + log(K−1))

With this preparation, we can now derive the secrecy rate. We notice that when (9), (22), (24), (29) hold, node D_K can decode the modulus sum of the interference, and then decode t_K. Hence the channel can be viewed as composed of two parts: one part is a direct link from S_K to D_K. The other part is the orthogonal MAC wiretap channel considered in [4], where the main channel is composed of K − 1 orthogonal components, and the eavesdropper observes a MAC channel. The signal received by the eavesdropper is the interference received by D_K. The difference is that this MAC wiretap channel has discrete inputs t_1^N, ..., t_{K−1}^N. Each channel use of this new channel corresponds to N channel uses of the original channel. Following an argument similar to [13], for this equivalent channel, the following secrecy rates (R_{1,e}, ..., R_{K−1,e}) are achievable:

0 ≤ R_{i,e} ≤ H(t_i^N) − R_{i,x},  R_{i,x} ≥ 0,  i = 1, ..., K − 1   (39)
Σ_{i=1}^{K−1} R_{i,x} = I(t^N_{1,...,K−1}; Y_K^N, d_i^N, i = 1, ..., K)   (40)

Finally, it can also be verified that when (3) holds, (24) is fulfilled and (22) is looser than (9), and hence (22) becomes redundant. Under (3), R = C(P_min). The result in the theorem follows by choosing R_{i,x} as

R_{i,x} = (N/(K−1)) (C(P_min) + log(K−1))   (41)

Remark 2:
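The key step in (38) is that a modulus sum with an independent uniform term reveals nothing about any individual codeword, the lattice analogue of a one-time pad. A toy sketch over the cyclic group Z_q (an illustrative stand-in for the mod-Λ codebook, not the paper's construction) verifies this by computing the exact joint distribution:

```python
import numpy as np
from itertools import product

q = 5  # toy cyclic group standing in for the nested-lattice codebook
# exact joint distribution of (t1, (t1+t2) mod q) with t1, t2 i.i.d. uniform
joint = np.zeros((q, q))
for t1, t2 in product(range(q), range(q)):
    joint[t1, (t1 + t2) % q] += 1.0 / q**2

# All entries equal 1/q^2: the modulus sum is independent of t1,
# i.e. H(t1 | (t1 + t2) mod q) = H(t1), mirroring the step in (38).
assert np.allclose(joint, 1.0 / q**2)
print("modulus sum is independent of t1")
```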
In applying nested lattice codes to this interference channel, we had to overcome two difficulties. (1) The error probability analysis in [12] requires the noise to be Gaussian, while in an interference channel the interference plus noise is in general non-Gaussian. We get around this via the property that a good lattice code, after dithering, "looks like" Gaussian noise [12]. (2) In the decoder of a nested lattice code, a nonlinear modulus operation [12] must be applied to the received signal. This operation distorts the signal even after the decoded part of the signal is subtracted out, and renders the use of the layered encoding and decoding of [10] not straightforward. This is resolved by proving that the probability of having distortion in fact goes to 0 as N → ∞.

V. UPPER BOUND ON THE SECRECY SUM RATE
Assume a_i ≥ 1, i = 1, ..., K − 1. Let n be the total number of channel uses. Define V^n as V^n = Σ_{i=1}^{K−1} √a_i X_i^n + Z_K^n. Then we have the following lemma:

Lemma 1:

n R_sum ≤ I(W_{1,...,K−1}; Y^n_{1,...,K−1}) − I(W_{1,...,K−1}; V^n) + I(X_K^n; Y_K^n | X^n_{1,...,K−1}) + n ε   (42)

where lim_{n→∞} ε = 0.

Proof Outline:
The two-user case (K = 2) has been proved in [3, Appendix]. The same technique is used here to prove Lemma 1. The derivation starts from [3, (41)], with W_1 replaced by W_{1,...,K−1}, Y_1 replaced by Y_{1,...,K−1}, X_1 replaced by X_{1,...,K−1}, and Y_2 replaced by Y_K. The V^n therein is replaced by the V^n defined above. Then, we can prove

n R_sum − n ε ≤ I(W_{1,...,K−1}; Y^n_{1,...,K−1}) − I(W_{1,...,K−1}; V^n) + I(W_{1,...,K−1}; V^n | Y_K^n) + I(X_K^n; Y_K^n)   (43)

It can then be shown, following a derivation similar to [3, Appendix (46)-(57)], that

I(W_{1,...,K−1}; V^n | Y_K^n) + I(X_K^n; Y_K^n) ≤ I(X_K^n; Y_K^n | X^n_{1,...,K−1})   (44)

Hence we have (42).

Let Ṽ^n = Σ_{i=1}^{K−1} √(a_i/c) X_i^n + √(1/c) Z_K^n + √(1 − 1/c) Z̃_K^n, where c = max{a_i, i = 1, ..., K − 1}. Z̃_K^n is a length-n vector that has the same distribution as Z_K^n but is independent of Z_K^n. Then we have the following lemma:

Lemma 2:

R_sum ≤ lim_{n→∞} (1/n) (Σ_{i=1}^{K−1} I(X_i^n; Y_i^n) − I(X^n_{1,...,K−1}; Ṽ^n)) + lim_{n→∞} (1/n) I(X_K^n; Y_K^n | X^n_{1,...,K−1})   (45)

Proof Outline:
Because Ṽ^n is a degraded version of V^n, from Lemma 1 and the data processing inequality, we have

n R_sum ≤ I(W_{1,...,K−1}; Y^n_{1,...,K−1}) − I(W_{1,...,K−1}; Ṽ^n) + I(X_K^n; Y_K^n | X^n_{1,...,K−1}) + n ε   (46)

where lim_{n→∞} ε = 0. Next, we extend the derivation in [4, (58),(65)-(68)] to the first two terms, by replacing Y^n with Y^n_{1,...,K−1}. The derivation in [4, (58),(65)-(68)] corresponds to the case of K − 1 users here. It is important to note that Ṽ^n is not the signal received by the eavesdropper. Hence the channel is not equivalent to the channel considered in [4], which has different secrecy constraints. However, as we have shown above, the derivation in [4, (58),(65)-(68)] does not invoke any secrecy constraint. Hence these steps can still be applied here, and we have the lemma.

Theorem 3:
When a_i ≥ 1, i = 1, ..., K − 1, the sum secrecy rate is upper bounded by

R_sum ≤ Σ_{i=1}^K C(P_i) − C( Σ_{i=1}^{K−1} a_i P_i / ((K−1) c) )   (47)

where c = max{a_i, i = 1, ..., K − 1}.

Proof Outline:
The theorem follows by evaluating the bound in Lemma 2. This is done by extending [4, Theorem 4], which corresponds to the case with K − 1 users here. Let h_i = a_i/c, i = 1, ..., K − 1. Then it can be shown that the first limit in (45) is upper bounded by

Σ_{i=1}^{K−1} C(P_i) − C( Σ_{i=1}^{K−1} h_i P_i / (K−1) )   (48)

The main technique is the generalized entropy power inequality [14]. Since no secrecy constraint is invoked in its derivation, its result is still applicable here. This, along with the fact that I(X_K^n; Y_K^n | X^n_{1,...,K−1}) ≤ n C(P_K), gives us the result in the theorem.

VI. COMPARISON OF THE ACHIEVABLE RATE AND THE UPPER BOUND
When a_i = a and P_i = P_min, i = 1, ..., K − 1, and the condition on a given by (3) is fulfilled, the achievable secrecy sum rate, given by Theorem 2, becomes

R^a_sum = [(K−2) C(P_min) − log(K−1)]^+ + C(P_K)   (49)

The upper bound on the secrecy sum rate, given by Theorem 3, becomes

R^ub_sum = (K−2) C(P_min) + C(P_K)   (50)

It is easy to see that the gap between the upper bound and the lower bound is at most log(K−1) bits per channel use.

The cost in rate paid by each of the first K − 1 users, following from (41), is (1/(K−1)) (C(P_min) + log(K−1)). We see that, for fixed P_min, this rate loss goes to 0 as K → ∞. This observation is demonstrated in Figure 2.

VII. CONCLUSION
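The comparison above can be reproduced numerically (an illustrative sketch; the parameter values are assumptions, and the rate expressions follow the symmetric-case formulas of Theorems 2 and 3 as stated above):

```python
import numpy as np

def C(x):
    """C(x) = 0.5 log2(1 + x), in bits per channel use."""
    return 0.5 * np.log2(1.0 + x)

P_min, P_K = 10.0, 10.0  # example powers, matching Figure 2's P_min = 10
for K in (3, 10, 100):
    R_ach = max((K - 2) * C(P_min) - np.log2(K - 1), 0.0) + C(P_K)
    R_ub = (K - 2) * C(P_min) + C(P_K)
    gap = R_ub - R_ach
    cost_per_user = (C(P_min) + np.log2(K - 1)) / (K - 1)
    print(K, round(gap, 3), round(cost_per_user, 3))

# when the [.]^+ is inactive, the gap equals log2(K-1) exactly,
# and the per-user secrecy cost vanishes as K grows
assert np.isclose(gap, np.log2(99))
assert cost_per_user < 0.1
```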
In this work, we have derived achievable secrecy rates for the K-user (K ≥ 3) Gaussian many-to-one interference channel, and an upper bound on its secrecy sum rate. The achievability technique is general and applies to the fully connected K-user interference channel as well [15]. The converse utilizes a combination of the techniques in [3] and [4]. Although both techniques were designed for weak interference, we show that their combination provides a good sum rate upper bound for the strong interference case.
[Figure 2 plot: x-axis: number of users K (10 to 100); y-axis: bits per channel use; curves: C(P_min) and the secrecy rate of each of the first K−1 users.]
Fig. 2. Rate penalty paid for secrecy per user reduces as the number of users K increases. P_min = 10.

REFERENCES

[1] R. Liu, I. Maric, P. Spasojevic, and R. D. Yates. Discrete Memoryless Interference and Broadcast Channels with Confidential Messages: Secrecy Rate Regions. IEEE Transactions on Information Theory, 54(6):2493–2507, June 2008.
[2] X. Tang, R. Liu, P. Spasojevic, and H. V. Poor. Interference-Assisted Secret Communication. IEEE Information Theory Workshop, May 2008.
[3] Z. Li, R. D. Yates, and W. Trappe. Secrecy Capacity Region of a Class of One-Sided Interference Channel. IEEE International Symposium on Information Theory, July 2008.
[4] E. Ekrem and S. Ulukus. On the Secrecy of Multiple Access Wiretap Channel. Allerton Conf. on Communication, Control, and Computing, September 2008.
[5] R. D. Yates, D. Tse, and Z. Li. Secure Communication on Interference Channels. IEEE International Symposium on Information Theory, July 2008.
[6] Y. Liang, A. Somekh-Baruch, H. V. Poor, S. Shamai, and S. Verdu. Cognitive Interference Channels with Confidential Messages. Submitted to IEEE Transactions on Information Theory, December 2007.
[7] O. Koyluoglu, H. El-Gamal, L. Lai, and H. V. Poor. Interference Alignment for Secrecy. Submitted to IEEE Transactions on Information Theory, October 2008.
[8] G. Bresler, A. Parekh, and D. Tse. The Approximate Capacity of the Many-to-One and One-to-Many Gaussian Interference Channels. Allerton Conf. on Communication, Control, and Computing, September 2007.
[9] S. Sridharan, A. Jafarian, S. Vishwanath, and S. A. Jafar. Capacity of Symmetric K-User Gaussian Very Strong Interference Channels. IEEE Global Telecommunication Conf., November 2008.
[10] S. Sridharan, A. Jafarian, S. Vishwanath, S. A. Jafar, and S. Shamai. A Layered Lattice Coding Scheme for a Class of Three User Gaussian Interference Channels. Allerton Conf. on Communication, Control, and Computing, September 2008.
[11] X. He and A. Yener. Providing Secrecy with Lattice Codes. Allerton Conf. on Communication, Control, and Computing, September 2008.
[12] U. Erez and R. Zamir. Achieving 1/2 log(1+SNR) on the AWGN Channel with Lattice Encoding and Decoding. IEEE Transactions on Information Theory, 50(10):2293–2314, October 2004.
[13] E. Tekin and A. Yener. The General Gaussian Multiple Access and Two-Way Wire-Tap Channels: Achievable Rates and Cooperative Jamming. IEEE Transactions on Information Theory, 54(6):2735–2751, June 2008.
[14] M. Madiman and A. Barron. Generalized Entropy Power Inequalities and Monotonicity Properties of Information. IEEE Transactions on Information Theory, 53(7), July 2007.