An Upper Bound on the Capacity of Vector Dirty Paper with Unknown Spin and Stretch
David T.H. Kao
Cornell University – Ithaca, NY [email protected]
Ashutosh Sabharwal
Rice University – Houston, TX [email protected]
Abstract—Dirty paper codes are a powerful tool for combating known interference. However, there is a significant difference between knowing the transmitted interference sequence and knowing the received interference sequence, especially when the channel modifying the interference is uncertain. We present an upper bound on the capacity of a compound vector dirty paper channel where, although an additive Gaussian sequence is known to the transmitter, the channel matrix between the interferer and receiver is uncertain but known to lie within a bounded set. Our bound is tighter than previous bounds in the low-SIR regime for the scalar version of the compound dirty paper channel, and employs a construction that focuses on the relationship between the dimension of the message-bearing signal and the dimension of the additive state sequence. Additionally, a bound on the high-SNR behavior of the system is established.
I. INTRODUCTION
The benefit of transmitter side information has been studied in many forms. The case of causal state information for the discrete memoryless channel was first studied in [1], and noncausal state information was subsequently considered in [2], [3]. The noncausal case was later specialized to the point-to-point AWGN channel with additive Gaussian state in [4], the so-called dirty paper channel, wherein it was shown that, surprisingly, interference perfectly known at the transmitter may be completely mitigated through a clever binning scheme.

This dirty paper coding (DPC) approach is especially applicable to certain scenarios in multiuser wireless communications: in the vector Gaussian broadcast channel, DPC enables a capacity-achieving encoding scheme [5], [6] which bins each successive receiver's message against interference from messages intended for preceding receivers. In the cognitive interference channel, DPC is a useful tool for exploiting cognitive knowledge to mitigate interference [7], [8].

A known limitation of DPC is its reliance on exact knowledge of channel state. Unfortunately, in large and distributed wireless networks, providing transmitters with full channel state knowledge incurs high overhead. Hence, in practice, channel state is often known with some uncertainty. In this paper we study how such channel state uncertainty inhibits the usefulness of transmitter side information. Specifically, we study a compound vector dirty paper channel, where side information about an additive vector Gaussian interference sequence is provided to the transmitter. Channel uncertainty is modeled by a set of possible channel matrices that transform the interference sequence prior to reception. The set studied contains all matrices with singular values less than a parameter a_max, representing a known maximum amplification. The transmitter, unaware of the exact channel state, must reliably convey a message to the receiver.

Our compound formulation captures the subtle but important distinction between noncausal knowledge of transmitted interference and noncausal knowledge of received interference. With our model, we may better understand, for example, cognitive interference channels where the interference channel gains are unmeasured [9], as well as MIMO broadcast where elements of the channel matrix are unknown [10].

Compound versions of transmitter side information channels have been examined previously, with the most general formulation being [11]. In [12], a more precise model with a finite number of compound states was studied and an approach termed "carbon copying" was defined. Further extensions of [12] to specific Gaussian channels may be found in [9], where phase was uncertain, and [13], where the message-bearing and interfering signals scaled proportionally.

The main result of this paper is an upper bound on the capacity of dirty paper channels when the interference signal may have both dimension greater than one and the potential to undergo a wide range of amplification. For scalar dirty paper, our bound is tighter than known bounds when the maximum potential amplification of interference is high, i.e., the low signal-to-interference ratio (SIR) regime. For vector dirty paper in the low-SIR regime, we find that uncertainty incurs an approximate prelog loss in capacity. When the unknown amplification is unbounded, the loss is exact and signifies a prelog capacity loss at all finite signal-to-noise ratios (SNR). A degrees of freedom (DOF) upper bound also results, thus providing insight into the high-SNR behavior of the system. Additionally, to our knowledge, this work represents the first treatment of compound channels for vector dirty paper, and the focus on vector channels reveals a relationship between the dimension of the interference, the dimension of the message-bearing signal, and the resulting upper bound on DOF.

The paper is structured as follows. After defining our model in Section II, we state and prove our upper bound in its most general form in Section III. Section IV presents the bound applied to the more concrete cases of MISO and SIMO channels, and includes a comparison to bounds previously applied to the scalar compound dirty paper channel. We comment on high-SNR behavior of the system in Section V, and summarize in Section VI.

This work has been supported in part by NSF CNS award 1012921, and was completed while David T.H. Kao was at Rice University.

Fig. 1. Channel Model. An interference sequence is known to the transmitter; however, the linear transformation of the interference prior to reception is known only to lie within some set.

II. PROBLEM STATEMENT
We consider the channel depicted by Figure 1, with input-output relationship characterized by

y[t] = H x[t] + A s[t] + z[t],   (1)

where x[t] and y[t] represent the channel input and output, respectively, of a vector channel at time index t, and s[t] and z[t] are zero-mean Gaussian random vectors i.i.d. across time with covariance matrices Q_s, assumed to be full rank, and Q_z = I, respectively. The dimensions of x, s, z, and y are M_t, M_s, M_r, and M_r, respectively. An M_r × M_t matrix H and an M_r × M_s matrix A are both assumed to be quasistatic in the sense that they are constant over any length-n codeword. We use the superscript n, e.g., x^n ≜ (x[1], x[2], …, x[n]), to denote n uses of the channel. On the input covariance matrix Q_x ≜ E[x x†], we impose the average power constraint tr(Q_x) ≤ P.

The transmitter is given the additive vector state (interference) sequence s^n noncausally, but knows only that A lies within an uncertainty set, A ∈ 𝒜 ⊆ R^{M_r × M_s} (C^{M_r × M_s} for the complex channel). The uncertainty set 𝒜 is defined as the set of all matrices with largest singular value bounded above by a known maximum amplification parameter a_max ∈ [0, ∞]. Notice that a_max = ∞ implies 𝒜 = R^{M_r × M_s}. Furthermore, the set 𝒜 as defined is symmetric (A ∈ 𝒜 implies −A ∈ 𝒜), convex, and compact. When |𝒜| = 1, we have exactly the canonical vector dirty paper channel.

Our analyses apply to both real- and complex-valued channels, and we note differences in assuming one or the other only as needed. For notation, we use boldfaced lowercase to represent vectors, boldfaced uppercase for matrices, and † to denote Hermitian transpose; all logarithms are base-2.

III. UPPER BOUND ON CAPACITY
Theorem 1 (Upper Bound). Define M ≜ rank(H Q_x H†) and a matrix U that projects the received signal onto the subspace spanned by H Q_x H†. The capacity, C, is bounded above by C̄ given in (2), where κ = 1/2 for real channels and κ = 1 for complex channels, and the supremum and infimum are subject to the constraints

Q_x ⪰ 0,  tr(Q_x) ≤ P,
A_i ∈ 𝒜  ∀ i ∈ {1, …, ⌈M_s/M⌉},
A_i Q_s A_j† = 0  ∀ i ≠ j.

Before presenting the proof, we point out that the received signal dimension M is a chosen integer, which affects the denominator of (2). Consequently, the objective function of (2) may be discontinuous with respect to Q_x, implying that convex methods may not suffice to solve (2) and that the maximin formulation is not necessarily interchangeable with a minimax formulation [14]. Discontinuities also necessitate the use of supremum and infimum operations.

Proof:
The proof begins in a manner similar to [12] and [9], but differs in its emphasis on potential state gain values, and is extended to vector state through an inductive argument. We assume real-valued channels for the presentation below.

We first fix the choice of Q_x and notice that at most ⌊M_s/M⌋ rank-M matrices, {A_i}_{i=1,…,⌊M_s/M⌋}, may be chosen such that A_i Q_s A_j† = 0. If M_s is not evenly divisible by M, to the collection {A_i}_{i=1,…,⌊M_s/M⌋} we add a final matrix A_{⌈M_s/M⌉}, which incorporates the remaining independent dimensions of s. In the following, we denote the message as W, the channel output given state transformation matrix A as y_A, and the projected channel output as v_A ≜ U y_A. Noting that A_i ∈ 𝒜 implies −A_i ∈ 𝒜, we begin from Fano's inequality:

nR ≤ min_{A ∈ 𝒜} I(W; y_A^n)
 (a)≤ min_{A ∈ 𝒜′} I(W; v_A^n)
 (b)≤ [ 2 I(W; v_0^n) + Σ_{i=1}^{⌈M_s/M⌉} ( I(W; v_{−A_i}^n) + I(W; v_{A_i}^n) ) ] / (2⌈M_s/M⌉ + 2)
 = [ 2 h(v_0^n) + Σ_{i=1}^{⌈M_s/M⌉} ( h(v_{−A_i}^n) + h(v_{A_i}^n) ) ] / (2⌈M_s/M⌉ + 2)
  − [ 2 h(v_0^n | W) + Σ_{i=1}^{⌈M_s/M⌉} ( h(v_{−A_i}^n | W) + h(v_{A_i}^n | W) ) ] / (2⌈M_s/M⌉ + 2),   (4)

where in step (a) we chose a reduced uncertainty set 𝒜′ = { −A_{⌈M_s/M⌉}, …, −A_1, 0, A_1, …, A_{⌈M_s/M⌉} } and project onto the subspace containing the message, and in (b) we note that the arithmetic mean is greater than the minimum of a set.
C̄ ≜ sup_{Q_x} inf_{{A_i}} κ [ Σ_{i=1}^{⌈M_s/M⌉−1} log( det(U H Q_x H† U† + I + U A_i Q_s A_i† U†) / det(U A_i Q_s A_i† U†) ) + log det( I + U H Q_x H† U† ) + g(Q_x, A_{⌈M_s/M⌉}) ] / (⌈M_s/M⌉ + 1)   (2)

g(Q_x, A) = log( det(U H Q_x H† U† + I + U A Q_s A† U†) / det(U A Q_s A† U†) )  if ⌈M_s/M⌉ = M_s/M,
g(Q_x, A) = log( det(U H Q_x H† U† + I + U A Q_s A† U†) / det(½ U A Q_s A† U† + I) ) + M/2  otherwise.   (3)

First, we bound the unconditioned entropy terms of antipodal state channel matrices:

h(v_{−A_i}^n) + h(v_{A_i}^n)
 (a)≤ Σ_{t=1}^{n} [ ½ log det( I + U H Q_x[t] H† U† + Ψ_t + U A_i Q_s A_i† U† ) + ½ log det( I + U H Q_x[t] H† U† − Ψ_t + U A_i Q_s A_i† U† ) ] + nM log(2πe)
 (b)≤ n log det( I + U H Q_x H† U† + U A_i Q_s A_i† U† ) + nM log(2πe),   (5)

where (a) uses maximum entropy principles and an expansion of covariance terms, (b) uses the concavity of the log-determinant, and Q_x[t] and Ψ_t ≜ E[ U H x[t] s† A_i† U† + U A_i s x[t]† H† U† ] denote the input covariance and cross-correlation of the t-th channel use, respectively. Additionally, we note

h(v_0^n) ≤ (n/2) log det( I + U H Q_x H† U† ) + (nM/2) log(2πe).   (6)

We lower bound the terms conditioned on the message W:

h(v_0^n | W) + Σ_{i=1}^{⌈M_s/M⌉} h(v_{A_i}^n | W)
 ≥ h( v_0^n, v_{A_1}^n, …, v_{A_{⌈M_s/M⌉}}^n | W )
 (a)= h( (v_{A_1}^n + v_0^n)/√2, (v_{A_1}^n − v_0^n)/√2, v_{A_2}^n, …, v_{A_{⌈M_s/M⌉}}^n | W )
 = h( (v_{A_1}^n + v_0^n)/√2, v_{A_2}^n, …, v_{A_{⌈M_s/M⌉}}^n | (v_{A_1}^n − v_0^n)/√2, W ) + h( (v_{A_1}^n − v_0^n)/√2 | W )
 (b)= h( (v_{A_1}^n + v_0^n)/√2, v_{A_2}^n, …, v_{A_{⌈M_s/M⌉}}^n | U A_1 s^n/√2, W ) + h( U A_1 s^n/√2 )
 (c)≥ h( v_0^n, v_{A_2}^n, …, v_{A_{⌈M_s/M⌉}}^n | W ) + (n/2) log det( U A_1 Q_s A_1† U† ) + (nM/2) log(πe),   (7)

where (a) results from a basis transformation, (b) results from perfectly correlated noise terms between the two channel outputs and independence between message and state, and (c) results from factorization of matched scaling constants. The analysis for (7) is repeated inductively to arrive at

h(v_0^n | W) + Σ_{i=1}^{⌈M_s/M⌉} h(v_{A_i}^n | W) ≥ h( v_0^n, v_{A_{⌈M_s/M⌉}}^n | W ) + (nM/2)(⌈M_s/M⌉ − 1) log(πe) + Σ_{i=1}^{⌈M_s/M⌉−1} (n/2) log det( U A_i Q_s A_i† U† ).
If M_s is evenly divisible by M, the same induction may be applied to decouple the final two potential channel outputs:

h( v_0^n, v_{A_{⌈M_s/M⌉}}^n | W ) ≥ (nM/2) log(πe) + (n/2) log det( U A_{⌈M_s/M⌉} Q_s A_{⌈M_s/M⌉}† U† ).   (8)

If M_s is not evenly divisible by M, the matrix U A_{⌈M_s/M⌉} Q_s A_{⌈M_s/M⌉}† U† is rank deficient, and thus we only partially correlate the two noise terms z_{⌈M_s/M⌉}^n and z_0^n:

h( v_0^n, v_{A_{⌈M_s/M⌉}}^n | W )
 = h( (v_{A_{⌈M_s/M⌉}}^n + v_0^n)/√2 | (U/√2)( A_{⌈M_s/M⌉} s^n + z_{⌈M_s/M⌉}^n − z_0^n ), W ) + h( (U/√2)( A_{⌈M_s/M⌉} s^n + z_{⌈M_s/M⌉}^n − z_0^n ) | W )
 ≥ h( (U/√2)( z_{⌈M_s/M⌉}^n + z_0^n ) ) + h( (U/√2)( A_{⌈M_s/M⌉} s^n + z_{⌈M_s/M⌉}^n − z_0^n ) | W )
 ≥ (nM/2) log(2πe) + (n/2) log det( ½ U A_{⌈M_s/M⌉} Q_s A_{⌈M_s/M⌉}† U† + I ).   (9)

An identical analysis may be performed for h(v_0^n | W) + Σ_{i=1}^{⌈M_s/M⌉} h(v_{−A_i}^n | W), and by substituting (5), (6), (8), and (9) for both positive and negative A_i into (4) and allowing minimization over collections {A_i}, we arrive at (2). ∎

Remark 1:
Although the optimization in (2) is potentially difficult to compute, a simpler bound can be obtained by choosing each A_i to be a matrix that aligns eigenvectors of Q_s with eigenvectors of H Q_x H†.

Remark 2:
For the case where both the channel input and interference are scalars, the bound of [9] provided evidence that, with unknown phase, correlation between the input and state provides no benefit; the comparison between a zero additive state and a high-variance additive state is a special case studied in [12]. Incorporating both of these emphases into a single analysis provides new insight into how unknown phase and unknown amplitude jointly reduce capacity.
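The basis-change steps in the proof, e.g., in (7) and (9), hinge on a simple Gaussian fact: if z₁ and z₂ are i.i.d. with identity covariance, then the rotated pair (z₁ + z₂)/√2 and (z₁ − z₂)/√2 is again a pair of independent identity-covariance Gaussians. A small numerical sketch of this fact (illustration only; the variable names are ours, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
n, dim = 200_000, 2

# Two i.i.d. Gaussian noise sequences with identity covariance.
z1 = rng.standard_normal((n, dim))
z2 = rng.standard_normal((n, dim))

# Rotated combinations used in the proof's change of basis.
u = (z1 + z2) / np.sqrt(2)
v = (z1 - z2) / np.sqrt(2)

cov_u = u.T @ u / n   # stays approximately the identity
cov_v = v.T @ v / n   # stays approximately the identity
cross = u.T @ v / n   # approximately zero: u and v are uncorrelated,
                      # hence independent (they are jointly Gaussian)
```

Because the rotation is orthogonal, entropy is preserved while the difference term (v_{A₁}ⁿ − v₀ⁿ)/√2 isolates the state contribution, which is what step (b) of (7) exploits.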
Remark 3:
The primary innovation in the construction of (2) is its emphasis on the effect that the dimension of the interference may have on the prelog factor of capacity. In particular, if large but finite interference power is considered, i.e., P ≪ a_max < ∞, the terms in the sum of (2) tend toward zero, signifying an approximate prelog capacity loss. The error in this approximation depends primarily on the SIR of the system. If a_max = ∞, i.e., the set 𝒜 = R^{M_r × M_s}, then the prelog loss becomes exact and the system exhibits what resembles a degrees of freedom loss at all SNR.

Remark 4:
The choice of correlation of the noise terms in (9) is not optimized, and thus the bound is potentially loose. This optimization, however, depends on Q_s and the choice of {A_i}, which in turn depends on the choice of Q_x. As discussed prior to the presentation of the proof, it is not immediately apparent how these choices interact to tighten or loosen the bound.

IV. MISO & SIMO CHANNELS WITH VECTOR STATE
The question of the dimension of the received message-bearing signal Hx, and the subsequent optimization of the input covariance Q_x, prevents a more explicit characterization of (2) in general. On the other hand, if the signal is necessarily one-dimensional (e.g., when either the transmitter or receiver in a wireless communication link has one antenna), the upper bound on capacity may be simplified considerably:

Corollary 2.
Let v_{s_i} denote the eigenvalues of Q_s, and h the channel vector modifying x. The capacity, C, is bounded above by C̄ given by

C̄ = κ [ Σ_{i=1}^{M_s} log( (‖h‖²P + 1 + a_max² v_{s_i}) / (a_max² v_{s_i}) ) + log( 1 + ‖h‖²P ) ] / (M_s + 1),   (10)

where κ = 1/2 for real channels and κ = 1 for complex channels.

Sketch of Proof: The full proof is omitted due to limited space; it relies only on beamforming (transmit or receive) for the message-bearing signal and a sequence of A_i matrices that project individual eigenvectors of Q_s onto the subspace containing the message-bearing signal.
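The closed-form bound (10) is straightforward to evaluate numerically. The sketch below is our reading of (10) (the squared quantities ‖h‖² and a_max² are assumptions recovered from the derivation, and the function name is ours); it also checks the gap claim of Remark 5: once a_max² v_{s_i} exceeds ‖h‖²P for every i, the bound sits within κM_s/(M_s + 1) bits of the prelog approximation:

```python
import numpy as np

def vector_state_bound(h, P, a_max, v_s, kappa=0.5):
    """Evaluate our reading of the upper bound (10) of Corollary 2
    (real channel by default, kappa = 1/2). v_s holds the eigenvalues
    of Q_s; a_max is the maximum singular value of A."""
    g = np.linalg.norm(h) ** 2 * P            # received signal power ||h||^2 P
    v_s = np.asarray(v_s, dtype=float)
    worst = a_max ** 2 * v_s                  # worst-case interference powers
    terms = np.log2((g + 1 + worst) / worst).sum()
    return kappa * (terms + np.log2(1 + g)) / (len(v_s) + 1)

# MISO example: two transmit antennas, SNR = 15 dB, two state dimensions.
h = np.array([1.0, 1.0])
P, a_max, v_s = 10 ** 1.5, 20.0, [1.0, 0.5]
c_bar = vector_state_bound(h, P, a_max, v_s)

# Prelog approximation from Remark 5 (kappa/(M_s+1)) log(1 + ||h||^2 P).
g = np.linalg.norm(h) ** 2 * P
approx = 0.5 / (len(v_s) + 1) * np.log2(1 + g)
```

With these parameters, a_max² v_{s_i} ≥ ‖h‖²P holds for both eigenvalues, so the gap c_bar − approx stays below κM_s/(M_s + 1) = 1/3 bit; letting a_max grow drives the bound down to the approximation, the behavior described in Remark 3.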
Fig. 2. Comparison of upper and lower bounds for the compound dirty paper channel with real scalar input and interference (SNR = 15 dB), as a function of INR_max: our upper bound, the upper bound from [9], the interference-free upper bound, and interference treated as noise. The dotted trace represents half the interference-free rate and is included as a reference; it does not represent an achievable scheme.
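The qualitative shape of our bound's trace in Fig. 2 can be reproduced from (10) alone: in the real scalar case (M_s = 1, κ = 1/2) the bound decreases monotonically in a_max and approaches half of the interference-free rate, the dotted reference trace. A sketch under our notation, assuming unit channel gain and unit state variance:

```python
import numpy as np

# Real scalar compound dirty paper channel: SNR = 15 dB as in Fig. 2.
snr = 10 ** 1.5
kappa, M_s, v_s = 0.5, 1, 1.0

def bound(a_max):
    # Scalar specialization of the upper bound (10).
    interference = a_max ** 2 * v_s
    term = np.log2((snr + 1 + interference) / interference)
    return kappa * (term + np.log2(1 + snr)) / (M_s + 1)

a_grid = np.logspace(-1, 6, 50)            # sweep of a_max (i.e., INR_max)
vals = np.array([bound(a) for a in a_grid])

if_rate = kappa * np.log2(1 + snr)         # interference-free rate
half_if = if_rate / 2                      # dotted reference trace in Fig. 2
```

The sweep falls from above the interference-free rate at low a_max (where the bound is loose and the bound of [9] is tighter) toward the half-rate reference at high a_max, matching the crossover visible in the figure.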
Remark 5:
The point made in Remark 3 regarding approximation of the bound with a prelog factor is more clearly illustrated in rank-1 channels. If, for example,

min_i v_{s_i} ≥ ‖h‖²P / a_max²,   (11)

then each log term in the sum of (10) is bounded above by 1, and the gap between the approximation C̃ ≜ (κ/(M_s+1)) log(1 + ‖h‖²P) and the actual bound is less than κM_s/(M_s+1) bits.

Remark 6:
One special case is the scalar channel with scalar additive state, whose bound is shown in Figure 2 relative to prior work. Notice our bound is tighter at high INR (low SIR), and complements the result from [9]. The approximate prelog loss is illustrated as well: for the scalar case, M_s = 1, so the prelog loss is approximately 1/2.

V. HIGH-SNR BEHAVIOR
The behavior of wireless systems at high signal-to-noise ratios often provides insight into the spatial interaction of signals. One standard metric for high-SNR performance is the multiplexing gain, or degrees of freedom (DOF), defined as

DOF ≜ lim_{SNR→∞} C(SNR) / (κ log(1 + SNR)),   (12)

where SNR = P, and κ = 1/2 or κ = 1 for real or complex channels, respectively. In our system, an upper bound on DOF may be deduced directly from (2). The form of the bound is contingent on how the INR, or equivalently the spectral properties of the state sequence s^n, scales with SNR:

Corollary 3. The system has full (M⋆ ≜ min{M_t, M_r}) degrees of freedom if and only if both of the following conditions hold:
1) The parameter a_max is finite.
2) The interference power, or equivalently INR, grows sublinearly with respect to SNR.
If either condition does not hold, then the degrees of freedom of the system is bounded above by

DOF ≤ ( M⋆(⌈M_s/M⋆⌉ + 1) − M_s ) / ( ⌈M_s/M⋆⌉ + 1 ).   (13)

Proof:
If both conditions hold, then the interference may be treated as noise, and at high SNR the gap between the rate achieved and the interference-free rate approaches a constant. On the other hand, if the first condition is false, then, as noted in Remark 3, the terms in the summation of (2) evaluate to 0. Therefore this proof focuses on the case where only the second condition is false.

If INR grows linearly with SNR, then there must exist some finite scalar β such that

H Q_x H† ⪯ β A Q_s A†,   (14)

for all Q_x where tr(Q_x) ≤ P. Consequently, the terms in the summation of (2) may be bounded by a constant, which has vanishing contribution when normalized by log(1 + P) as P → ∞. If INR grows superlinearly with respect to SNR, then a function β(P) → 0 as P → ∞ suffices to satisfy (14), and the terms in the summation of (2) vanish as P → ∞.

By counting the number of dimensions of interference relative to the message-bearing signal, the asymptotic behavior of the remaining two terms in the numerator of (2) results in the DOF upper bound for fixed M:

DOF ≤ ( M(⌈M_s/M⌉ + 1) − M_s ) / ( ⌈M_s/M⌉ + 1 ),

which is maximized when M = M⋆. ∎

Remark 7:
It is important to note cases where the two conditions posed in Corollary 3 hold in the context of common wireless network applications of DPC. With respect to the first condition, the known sequence often represents encoded messages intended for other receivers that interfere with the DPC transmission. In these cases, when the INR is high enough, i.e., the singular values of A are above a_max, the interference may be decoded and the nature of the system changes. Alternatively, the vector s might represent multiple known sequences whose exact linear transformation at the receiver is unknown, but whose magnitude may be bounded based on a measurement of aggregate INR.

For the second condition to hold, we must assume that increased SNR results from increased transmission power at the transmitter rather than a decrease in thermal noise at the receiver.
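When either condition of Corollary 3 fails, the bound (13) caps the achievable multiplexing gain. A minimal sketch evaluating (13) (function name ours):

```python
import math

def dof_upper_bound(M_star: int, M_s: int) -> float:
    """DOF upper bound (13): M_star = min(M_t, M_r) message-bearing
    dimensions against M_s interference dimensions."""
    c = math.ceil(M_s / M_star)                 # ceil(M_s / M*)
    return (M_star * (c + 1) - M_s) / (c + 1)

# Scalar dirty paper (M_star = M_s = 1): at most half a degree of
# freedom survives, the prelog loss noted in Remark 6.
print(dof_upper_bound(1, 1))   # -> 0.5
```

The bound never exceeds M⋆ and shrinks as more interference dimensions are added, reflecting the dimension-counting argument in the proof of Corollary 3.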
Remark 8:
Unlike the finite-SNR behavior, the DOF loss exhibited when Condition 2 is false is not confined to any specific SIR regime. Even if a_max is small, if the INR scales linearly with SNR then the statement holds. Moreover, the exact covariance structure is less relevant at high SNR than the dimension or rank of Q_s.

VI. SUMMARY
In this paper, we studied a compound channel model for vector dirty paper where the linear transformation spinning and stretching the dirty paper is unknown. We presented an upper bound on the capacity of this compound vector dirty paper channel. Our bound is tighter than previous bounds in the low-SIR regime for the case of scalar input and interference, and extends intuitions regarding prelog loss in capacity to the vector dirty paper channel. The bound offers insight into the high-SNR behavior of systems modeled by vector dirty paper, and reveals a relationship between the dimension of the message-bearing signal, the dimension of the interference, and the degrees of freedom of the system.

REFERENCES

[1] C. E. Shannon, "Channels with side information at the transmitter," IBM Journal of Research and Development, vol. 2, no. 4, pp. 289–293, Oct. 1958.
[2] S. I. Gel'fand and M. S. Pinsker, "Coding for channels with random parameters," Probl. Contr. and Inf. Theory, vol. 9, no. 1, pp. 19–31, 1980.
[3] C. Heegard and A. El Gamal, "On the capacity of computer memory with defects," IEEE Transactions on Information Theory, vol. 29, no. 5, pp. 731–739, Sep. 1983.
[4] M. Costa, "Writing on dirty paper (Corresp.)," IEEE Transactions on Information Theory, vol. 29, no. 3, pp. 439–441, May 1983.
[5] G. Caire and S. Shamai, "On the achievable throughput of a multiantenna Gaussian broadcast channel," IEEE Transactions on Information Theory, vol. 49, no. 7, pp. 1691–1706, Jul. 2003.
[6] H. Weingarten, Y. Steinberg, and S. Shamai, "The capacity region of the Gaussian multiple-input multiple-output broadcast channel," IEEE Transactions on Information Theory, vol. 52, no. 9, pp. 3936–3964, Sep. 2006.
[7] N. Devroye, P. Mitran, and V. Tarokh, "Achievable rates in cognitive radio channels," IEEE Transactions on Information Theory, vol. 52, no. 5, pp. 1813–1827, May 2006.
[8] S. Rini, D. Tuninetti, and N. Devroye, "Inner and outer bounds for the Gaussian cognitive interference channel and new capacity results," IEEE Transactions on Information Theory, vol. 58, no. 2, pp. 820–848, Feb. 2012.
[9] P. Grover and A. Sahai, "On the need for knowledge of the phase in exploiting known primary transmissions," in Proc. 2nd IEEE International Symposium on New Frontiers in Dynamic Spectrum Access Networks (DySPAN), Apr. 2007, pp. 462–471.
[10] D. T. Kao and A. Sabharwal, "Node cooperation with local views in the two-user interference channel," in Proc. 46th Asilomar Conference on Signals, Systems and Computers, 2012, pp. 1748–1752.
[11] P. Mitran, N. Devroye, and V. Tarokh, "On compound channels with side information at the transmitter," IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1745–1755, Apr. 2006.
[12] A. Khisti, U. Erez, A. Lapidoth, and G. Wornell, "Carbon copying onto dirty paper," IEEE Transactions on Information Theory, vol. 53, no. 5, pp. 1814–1827, May 2007.
[13] W. Zhang, S. Kotagiri, and J. N. Laneman, "Writing on dirty paper with resizing and its application to quasi-static fading broadcast channels," in Proc. IEEE International Symposium on Information Theory (ISIT), Jun. 2007, pp. 381–385.
[14] K. Fan, "On a theorem of Weyl concerning the eigenvalues of linear transformations II,"