Dual Link Algorithm for the Weighted Sum Rate Maximization in MIMO Interference Channels
Xing Li, Seungil You, Lijun Chen, An Liu, Youjian (Eugene) Liu
Department of Electrical, Computer, and Energy Engineering, University of Colorado at Boulder
Department of Computing and Mathematical Sciences, California Institute of Technology
Department of Computer Science, University of Colorado at Boulder
Department of Electronic and Computer Engineering, Hong Kong University of Science and Technology
Abstract
MIMO interference network optimization is important for increasingly crowded wireless communication networks. We provide a new algorithm, named the Dual Link algorithm, for the classic problem of weighted sum-rate maximization for MIMO multiaccess channels (MAC), broadcast channels (BC), and general MIMO interference channels with Gaussian input and a total power constraint. For MIMO MAC/BC, the algorithm finds optimal signals to achieve the capacity region boundary. For interference channels with the Gaussian input assumption, two of the previous state-of-the-art algorithms are the WMMSE algorithm and the polite water-filling (PWF) algorithm. The WMMSE algorithm is provably convergent, while the PWF algorithm takes advantage of the optimal transmit signal structure and converges the fastest in most situations but is not guaranteed to converge in all situations. It is highly desirable to design an algorithm that has the advantages of both algorithms. The dual link algorithm is such an algorithm. Its fast and guaranteed convergence is important to distributed implementation and time-varying channels. In addition, the technique and a scaling invariance property used in the convergence proof may find applications in other non-convex problems in communication networks.
Index Terms
MIMO, Interference Network, Weighted Sum-rate Maximization, Duality, Scaling Invariance, Optimization
This work was supported in part by NSF grants ECCS-1408604, IIP-1414250.
I. INTRODUCTION
One of the main approaches to accommodating the explosive growth in mobile data is to reduce the cell size and increase the base station or access point density, while all cells reuse the same frequency spectrum. However, the inter-cell interference becomes severe because the probability of line of sight increases as cell size shrinks. On the other hand, the situation is not hopeless. As promised by interference alignment through joint transmit signal design, every user can have half of the bandwidth at infinite SNR [1]. Consequently, joint transmit signal design algorithms are expected to be employed to manage interference, or equivalently, maximize data rate at practical SNR, and asymptotically achieve interference alignment. The main hurdle to joint transmit signal design is the collection of global channel state information (CSI) and coordination/feedback overhead.

In this paper, we design a new algorithm, named the Dual Link algorithm, that jointly optimizes the covariance matrices of the transmit signals of multiple transmitters in order to maximize the weighted sum-rate of the data links. The algorithm is ideally suited for distributed implementation where only local channel state information is needed. The algorithm works for MIMO B-MAC networks and assumes Gaussian transmit signals. The MIMO B-MAC network model [10] includes the broadcast channel (BC), multiaccess channel (MAC), interference channels, X networks, and many practical wireless networks as special cases. The weighted sum-rate maximization can be used for other utility optimization by finding appropriate weights and is thus a classic problem to solve. The problem is non-convex, and various algorithms have been proposed for various cases, e.g., [3]–[6], [10]–[12], [16]–[20]. Among the previous state-of-the-art algorithms, we have proposed the polite water-filling (PWF) algorithm [10].
Because it takes advantage of the optimal transmit signal structure for an achievable rate region, the polite water-filling structure, the PWF algorithm has the lowest complexity and the fastest convergence when it converges. However, in some strong interference cases, it exhibits small oscillations. Another excellent algorithm is the WMMSE algorithm in [12]. It was proposed for beamforming matrix design for MIMO interfering broadcast channels but can be readily applied to the more general B-MAC networks and to input covariance matrix design. It transforms the weighted sum-rate maximization into an equivalent weighted sum mean square error minimization problem, which has three sets of variables and is convex when any two variable sets are fixed. With the block coordinate optimization technique, the WMMSE algorithm is guaranteed to converge to a stationary point, though the convergence is observed in simulations to be slower than that of the PWF algorithm.
It is thus highly desirable to have an algorithm with the advantages of both the PWF and WMMSE algorithms, i.e., fast convergence by taking advantage of the optimal transmit signal structure and provable convergence for general interference networks. The main contribution of this paper is such an algorithm, the dual link algorithm. It exploits the forward-reverse link rate duality in a new way. Numerical experiments demonstrate that the dual link algorithm is almost as fast as the PWF algorithm and can be a few iterations or more than ten iterations faster than the WMMSE algorithm, depending on the desired accuracy with respect to the local optimum. Note that being faster even by a couple of iterations is critical in distributed implementation in time division duplex (TDD) networks with time-varying channels, where the overhead of each iteration costs significant signaling resources between the transmitters and the receivers. The faster the convergence is, the faster the channel variations that can be accommodated by the algorithm. Indeed, the dual link algorithm is highly scalable and suitable for distributed implementation because, for each data link, only its own channel state and the aggregated interference-plus-noise covariance need to be estimated, no matter how many interferers there are. We also show that the dual link algorithm can be easily modified to deal with systems with colored noise.

Another contribution of this paper is the proof of the monotonic convergence of the algorithm. The proof uses only very general convex analysis, as well as a particular scaling invariance property that we identify for the weighted sum-rate maximization problem.
We expect that the scaling invariance holds for, and our proof technique applies to, many non-convex problems in communication networks.

The centralized version of the dual link algorithm for a total power constraint has been generalized to multiple linear constraints using a minimax approach [2], and has stimulated the design of another monotonically convergent algorithm based on the convex-concave procedure [14], which has slower convergence but can handle nonlinear convex constraints. Nevertheless, the dual link algorithm uses a different derivation approach, which is based on the optimal transmit signal structure and easily leads to a low complexity distributed algorithm. Thus, the special case of the total power constraint provides a different view and insight than the general multiple linear constraint case.

The rest of this paper is organized as follows. Section II presents the system model, formulates the problem, and briefly reviews the related results on the rate duality and the polite water-filling structure. Section III proposes the new algorithm and establishes its monotonic convergence. Section IV-A shows how to modify the dual link algorithm for environments with colored noise, and Section IV-B discusses distributed implementation. Numerical examples are presented in Section V. Complexity analysis is provided in
Figure 1. An example of a B-MAC network. The solid lines represent data links and the dashed lines represent interference.
Section VI. Section VII concludes.

II. PRELIMINARIES
In this section, we describe the system model and formulate the optimization problem, then briefly review some related results on the polite water-filling, which lead to the design of the dual link algorithm.
A. B-MAC Interference Networks
We consider a general interference network named the MIMO B-MAC network with multiple transmitters and receivers [8], [10]. A transmitter in the MIMO B-MAC network may send independent data to different receivers, as in a BC, and a receiver may receive independent data from different transmitters, as in a MAC. Assume there are in total $L$ mutually interfering data links in a B-MAC network. Link $l$'s physical transmitter is $T_l$, which has $L_{T_l}$ antennas. Its physical receiver is $R_l$, which has $L_{R_l}$ antennas. Figure 1 shows an example of a B-MAC network with five data links. Links 2 and 3 have the same physical receiver. Links 3 and 4 have the same physical transmitter. When multiple data links have the same receiver or the same transmitter, interference cancellation techniques such as successive decoding and cancellation or dirty paper coding can be applied [10]. The received signal at $R_l$ is

$$\mathbf{y}_l = \sum_{k=1}^{L} \mathbf{H}_{l,k}\mathbf{x}_k + \mathbf{n}_l, \qquad (1)$$

where $\mathbf{x}_k \in \mathbb{C}^{L_{T_k} \times 1}$ is the transmit signal of link $k$ and is modeled as a circularly symmetric complex Gaussian vector; $\mathbf{H}_{l,k} \in \mathbb{C}^{L_{R_l} \times L_{T_k}}$ is the channel state information (CSI) matrix between $T_k$ and $R_l$; and $\mathbf{n}_l \in \mathbb{C}^{L_{R_l} \times 1}$ is a circularly symmetric complex Gaussian noise vector with identity covariance matrix. The circularly symmetric assumption on the transmit signal can be dropped easily by applying the proposed algorithm to real Gaussian signals with twice the dimension. Multiple channel uses can be combined into a larger B-MAC network with parallel channels, as in interference alignment [1].

B. Problem Formulation
Assuming the channels are known at both the transmitters and the receivers (CSITR), an achievable rate of link $l$ is

$$I_l(\Sigma_{1:L}) = \log\left| \mathbf{I} + \mathbf{H}_{l,l}\Sigma_l \mathbf{H}_{l,l}^\dagger \Omega_l^{-1} \right|, \qquad (2)$$

where $\Sigma_l$ is the covariance matrix of $\mathbf{x}_l$, and $\Omega_l$ is the interference-plus-noise covariance matrix of the $l$-th link,

$$\Omega_l = \mathbf{I} + \sum_{k=1, k\neq l}^{L} \mathbf{H}_{l,k}\Sigma_k \mathbf{H}_{l,k}^\dagger. \qquad (3)$$

If the interference from link $k$ to link $l$ is completely canceled using successive decoding and cancellation or dirty paper coding, we simply set $\mathbf{H}_{l,k} = \mathbf{0}$ in (3). Otherwise, the interference is treated as noise. This allows the paper to cover a wide range of communication techniques.

The optimization problem that we want to solve is the weighted sum-rate maximization under a total power constraint:

$$\text{WSRM\_TP}: \quad \max_{\Sigma_{1:L}} \ \sum_{l=1}^{L} w_l I_l(\Sigma_{1:L}) \qquad (4)$$
$$\text{s.t.} \quad \Sigma_l \succeq \mathbf{0}, \ \forall l, \qquad \sum_{l=1}^{L} \mathrm{Tr}(\Sigma_l) \le P_T,$$

where $w_l > 0$ is the weight for link $l$. The generalization to multiple linear constraints as in [9] is given in [2], which covers the individual power constraints as a special case.

C. Rate Duality and Polite Water-filling
We review the relevant results on the non-convex optimization (4) given in [10], where the dual network, reverse links, and rate duality were introduced. The optimal structure of the transmit signal covariance matrices is the polite water-filling structure, whose definition involves the reverse link interference-plus-noise covariance matrices. It suggests an iterative polite water-filling algorithm, which is compared with the new algorithm in this paper. The polite water-filling structure was used to derive a dual transformation, based on which the new algorithm in this paper has been designed.
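As a concrete aside before reviewing the duality results, the forward-link quantities (2) and (3) can be computed directly. The sketch below is illustrative (not code from the paper); the network dimensions and function names are hypothetical, and rates are in nats:

```python
import numpy as np

def interference_plus_noise(H, Sigma, l):
    """Omega_l = I + sum_{k != l} H_{l,k} Sigma_k H_{l,k}^dagger, as in eq. (3)."""
    Omega = np.eye(H[l][l].shape[0], dtype=complex)
    for k in range(len(Sigma)):
        if k != l:
            Omega += H[l][k] @ Sigma[k] @ H[l][k].conj().T
    return Omega

def link_rate(H, Sigma, l):
    """I_l = log| I + H_{l,l} Sigma_l H_{l,l}^dagger Omega_l^{-1} |, as in eq. (2)."""
    Omega = interference_plus_noise(H, Sigma, l)
    A = H[l][l] @ Sigma[l] @ H[l][l].conj().T
    n = Omega.shape[0]
    return np.log(np.linalg.det(np.eye(n) + A @ np.linalg.inv(Omega))).real
```

Since interference is treated as noise here, boosting an interferer's covariance can only decrease $I_l$, and the rate is always non-negative because the determinant argument is $\mathbf{I}$ plus a matrix similar to a positive semidefinite one.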
A Dual Network and the Reverse Links:
A virtual dual network can be created from the original B-MAC network by reversing the roles of all transmitters and receivers and replacing the channel matrices with their conjugate transposes. The data links in the original network are referred to as forward links, while those in the dual network are referred to as reverse links. We use $\hat{\cdot}$ to denote the corresponding terms in the reverse links. The interference-plus-noise covariance matrix of reverse link $l$ is

$$\hat{\Omega}_l = \mathbf{I} + \sum_{k=1, k\neq l}^{L} \mathbf{H}_{k,l}^\dagger \hat{\Sigma}_k \mathbf{H}_{k,l}, \qquad (5)$$

where $\hat{\Sigma}_k$ is the transmit signal covariance matrix of reverse link $k$. The achievable rate of reverse link $l$ is

$$\hat{I}_l(\hat{\Sigma}_{1:L}) = \log\left| \mathbf{I} + \mathbf{H}_{l,l}^\dagger \hat{\Sigma}_l \mathbf{H}_{l,l} \hat{\Omega}_l^{-1} \right|. \qquad (6)$$

A dual optimization problem corresponding to (4) can be formulated as

$$\text{WSRM\_TP\_D}: \quad \max_{\hat{\Sigma}_{1:L}} \ \sum_{l=1}^{L} w_l \hat{I}_l(\hat{\Sigma}_{1:L}) \qquad (7)$$
$$\text{s.t.} \quad \hat{\Sigma}_l \succeq \mathbf{0}, \ \forall l, \qquad \sum_{l=1}^{L} \mathrm{Tr}(\hat{\Sigma}_l) \le P_T.$$

Rate Duality:
The rate duality states that the achievable rate regions of the forward link channels $\left([\mathbf{H}_{l,k}],\ \sum_{l=1}^{L}\mathrm{Tr}(\Sigma_l) \le P_T\right)$ and the reverse link channels $\left([\mathbf{H}_{k,l}^\dagger],\ \sum_{l=1}^{L}\mathrm{Tr}(\hat{\Sigma}_l) \le P_T\right)$ are the same [10]. The achievable rate regions are defined using the rates in (2) and (6). A covariance transformation in [10] calculates the reverse link input covariance matrices $\hat{\Sigma}_l$ from the forward ones $\Sigma_l$. The rate duality is proved by showing that these calculated $\hat{\Sigma}_l$ achieve rates equal to or higher than the forward link rates employing $\Sigma_l$ under the same value of the power constraint $P_T$ [10].

Polite Water-filling Structure:
We review the polite water-filling results from [10]. The Lagrange function of problem (4) is

$$\mathcal{L}(\mu, \Theta_{1:L}, \Sigma_{1:L}) = \sum_{l=1}^{L} w_l \log\left|\mathbf{I} + \mathbf{H}_{l,l}\Sigma_l\mathbf{H}_{l,l}^\dagger\Omega_l^{-1}\right| + \sum_{l=1}^{L}\mathrm{Tr}(\Sigma_l\Theta_l) + \mu\left(P_T - \sum_{l=1}^{L}\mathrm{Tr}(\Sigma_l)\right),$$

where $\Theta_{1:L}$ and $\mu$ are Lagrange multipliers. The KKT conditions are

$$\nabla_{\Sigma_l}\mathcal{L} = w_l \mathbf{H}_{l,l}^\dagger\left(\Omega_l + \mathbf{H}_{l,l}\Sigma_l\mathbf{H}_{l,l}^\dagger\right)^{-1}\mathbf{H}_{l,l} + \Theta_l - \mu\mathbf{I} - \sum_{k\neq l} w_k \mathbf{H}_{k,l}^\dagger\left(\Omega_k^{-1} - \left(\Omega_k + \mathbf{H}_{k,k}\Sigma_k\mathbf{H}_{k,k}^\dagger\right)^{-1}\right)\mathbf{H}_{k,l} = \mathbf{0}, \qquad (8)$$

$$\mu\left(P_T - \sum_{l=1}^{L}\mathrm{Tr}(\Sigma_l)\right) = 0, \qquad \mathrm{Tr}(\Sigma_l\Theta_l) = 0, \qquad \Sigma_l, \Theta_l \succeq \mathbf{0}, \qquad \mu \ge 0.$$

At a stationary point of problem (4), the transmit signal covariance matrices $\Sigma_{1:L}$ have the polite water-filling structure [10]. Recall that in a single-user MIMO channel, the optimal $\Sigma$ is a water-filling over the channel $\mathbf{H}$, i.e., the eigenvectors of $\Sigma$ are the right singular vectors of $\mathbf{H}$ and the eigenvalues are calculated by water-filling over parallel channels with the singular values of $\mathbf{H}$ as channel gains. The polite water-filling structure means that the equivalent transmit covariance matrix $\hat{\Omega}_l^{1/2}\Sigma_l\hat{\Omega}_l^{1/2}$ is a water-filling over the equivalent post- and pre-whitened channel $\bar{\mathbf{H}}_l = \Omega_l^{-1/2}\mathbf{H}_{l,l}\hat{\Omega}_l^{-1/2}$, where the reverse link interference-plus-noise covariance $\hat{\Omega}_l$ is calculated from $\hat{\Sigma}_{1:L}$, and $\hat{\Sigma}_{1:L}$ are calculated from $\Sigma_{1:L}$ using the above-mentioned covariance transformation. The $\hat{\Sigma}_{1:L}$ also have the polite water-filling structure and are a stationary point of the reverse link optimization problem (7). In the case of parallel channels, the polite water-filling reduces to the traditional water-filling. In a MAC/BC, the polite water-filling structure is the optimal transmit signal structure for the capacity region boundary points.

Polite Water-filling Algorithm:
The polite water-filling structure naturally suggests the iterative polite water-filling algorithm, Algorithm PP, in [10]. It works as follows. After initializing the reverse link interference-plus-noise covariance matrices $\hat{\Omega}_{1:L}$, we perform a forward link polite water-filling to obtain $\Sigma_{1:L}$. Then a reverse link polite water-filling is performed to obtain $\hat{\Sigma}_{1:L}$. This finishes one iteration. The iterations stop when the change of the objective function is less than a threshold or when a predetermined number of iterations is reached. Because the algorithm enforces the optimal signal structure at each iteration, it converges very fast when it converges. In particular, for parallel channels, it gives the optimal solution in half an iteration with initial values $\hat{\Omega}_l = \mathbf{I}, \forall l$. Unfortunately, this algorithm is not guaranteed to converge, especially in very strong interference cases.

Dual Transformation:
The following relations between $\Sigma_{1:L}$ and $\hat{\Sigma}_{1:L}$ at stationary points are proved using the polite water-filling structure in [10]. We name them the dual transformation in this paper:

$$\hat{\Sigma}_l = \frac{w_l}{\mu}\left(\Omega_l^{-1} - \left(\Omega_l + \mathbf{H}_{l,l}\Sigma_l\mathbf{H}_{l,l}^\dagger\right)^{-1}\right), \quad l = 1,\ldots,L; \qquad (9)$$

$$\Sigma_l = \frac{w_l}{\hat{\mu}}\left(\hat{\Omega}_l^{-1} - \left(\hat{\Omega}_l + \mathbf{H}_{l,l}^\dagger\hat{\Sigma}_l\mathbf{H}_{l,l}\right)^{-1}\right), \quad l = 1,\ldots,L, \qquad (10)$$

where $\mu$ and $\hat{\mu}$ are the Lagrange multipliers of the forward and reverse link power constraints, respectively. Equation (9) can be substituted into the KKT condition (8) to recover the polite water-filling solution to the KKT conditions. In past works, the term $\frac{w_l}{\mu}\left(\Omega_l^{-1} - \left(\Omega_l + \mathbf{H}_{l,l}\Sigma_l\mathbf{H}_{l,l}^\dagger\right)^{-1}\right)$ in the KKT condition has always been the obstacle to an elegant solution. Now we know it equals $\hat{\Sigma}_l$ at a stationary point. The dual transformation is used in the next section to design a new convergent algorithm.

III. THE DUAL LINK ALGORITHM
A. The Algorithm
We propose a new algorithm, named the Dual Link Algorithm, for the weighted sum-rate maximization problem (4). It has fast and monotonic convergence. The main idea is that, since we already know the optimal input covariance matrices $\Sigma_{1:L}$ and $\hat{\Sigma}_{1:L}$ must satisfy the dual transformation (9) and (10), we can directly use the dual transformation to update $\hat{\Sigma}_{1:L}$ and $\Sigma_{1:L}$, instead of solving the KKT conditions and enforcing the polite water-filling structure of $\hat{\Sigma}_{1:L}$ and $\Sigma_{1:L}$ as in the polite water-filling algorithms [10]. It is well known that the equality $\sum_{l=1}^{L}\mathrm{Tr}(\Sigma_l) = P_T$ holds when $\Sigma_{1:L}$ is a stationary point of problem (4), e.g., [8, Theorem 8 (item 3)]. This is because of the nonzero noise variance; it indicates that the full power should always be used. Since the covariance transformation [10, Lemma 8] preserves total power, we also have $\sum_{l=1}^{L}\mathrm{Tr}(\hat{\Sigma}_l) = P_T$. The Lagrange multipliers $\mu$ and $\hat{\mu}$ should be chosen to satisfy the power constraint as

$$\mu = \frac{1}{P_T}\sum_{l=1}^{L} w_l\,\mathrm{Tr}\left(\Omega_l^{-1} - \left(\Omega_l + \mathbf{H}_{l,l}\Sigma_l\mathbf{H}_{l,l}^\dagger\right)^{-1}\right), \qquad (11)$$

$$\hat{\mu} = \frac{1}{P_T}\sum_{l=1}^{L} w_l\,\mathrm{Tr}\left(\hat{\Omega}_l^{-1} - \left(\hat{\Omega}_l + \mathbf{H}_{l,l}^\dagger\hat{\Sigma}_l\mathbf{H}_{l,l}\right)^{-1}\right). \qquad (12)$$

The above suggests the Dual Link Algorithm in Algorithm 1, which takes advantage of the structure of the weighted sum-rate maximization problem. A node that knows global channel state information runs the algorithm. The algorithm starts by initializing the $\Sigma_l$ as random matrices or scaled identity matrices, which are used to calculate the forward link interference-plus-noise covariances $\Omega_l$. Then, the $\hat{\Sigma}_l$ of the virtual reverse links are calculated by the dual transformation (9) with $\mu$ given in (11). These $\hat{\Sigma}_l$ are used to calculate the virtual reverse link interference-plus-noise covariance matrices $\hat{\Omega}_l$. Then, the $\Sigma_l$ of the forward links are calculated by the dual transformation (10) with $\hat{\mu}$ given in (12).
The above is repeated until the weighted sum rate converges or a fixed number of iterations is reached.

The most important property of the dual link algorithm is that, unlike other algorithms for this problem, it is ideally suited for distributed implementation and is scalable with the network size. This will be discussed briefly in Section IV-B. As confirmed by the proof and numerical experiments, the Dual Link Algorithm has monotonic convergence and is almost as fast as the polite water-filling (PWF) algorithm. It converges to a stationary point of both problem (4) and its dual (7) simultaneously, and both (9) and (10) achieve the same sum-rate at the stationary point.

B. Preliminaries of the Convergence Proof
In the following sections, we prove the monotonic convergence of Algorithm 1. As will be seen later, the proof uses only very general convex analysis, as well as a particular scaling invariance property that we identify for the weighted sum-rate maximization problem. We expect that the scaling invariance holds for, and our proof technique applies to, many non-convex problems in communication networks that involve rate or throughput maximization.

Algorithm 1 Dual Link Algorithm
1. Initialize $\Sigma_l$'s such that $\sum_{l=1}^{L}\mathrm{Tr}(\Sigma_l) = P_T$
2. $R \Leftarrow \sum_{l=1}^{L} w_l I_l(\Sigma_{1:L})$
3. Repeat
4. $R' \Leftarrow R$
5. $\Omega_l \Leftarrow \mathbf{I} + \sum_{k\neq l}\mathbf{H}_{l,k}\Sigma_k\mathbf{H}_{l,k}^\dagger, \ \forall l$
6. $\hat{\Sigma}_l \Leftarrow \dfrac{P_T\, w_l\left(\Omega_l^{-1} - (\Omega_l + \mathbf{H}_{l,l}\Sigma_l\mathbf{H}_{l,l}^\dagger)^{-1}\right)}{\sum_{j=1}^{L} w_j\,\mathrm{Tr}\left(\Omega_j^{-1} - (\Omega_j + \mathbf{H}_{j,j}\Sigma_j\mathbf{H}_{j,j}^\dagger)^{-1}\right)}, \ \forall l$
7. $\hat{\Omega}_l \Leftarrow \mathbf{I} + \sum_{k\neq l}\mathbf{H}_{k,l}^\dagger\hat{\Sigma}_k\mathbf{H}_{k,l}, \ \forall l$
8. $\Sigma_l \Leftarrow \dfrac{P_T\, w_l\left(\hat{\Omega}_l^{-1} - (\hat{\Omega}_l + \mathbf{H}_{l,l}^\dagger\hat{\Sigma}_l\mathbf{H}_{l,l})^{-1}\right)}{\sum_{j=1}^{L} w_j\,\mathrm{Tr}\left(\hat{\Omega}_j^{-1} - (\hat{\Omega}_j + \mathbf{H}_{j,j}^\dagger\hat{\Sigma}_j\mathbf{H}_{j,j})^{-1}\right)}, \ \forall l$
9. $R \Leftarrow \sum_{l=1}^{L} w_l I_l(\Sigma_{1:L})$
10. until $|R - R'| \le \epsilon$ or a fixed number of iterations is reached.
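The loop above can be sketched numerically as follows. This is an illustrative implementation under assumed square channel dimensions (names such as `dual_link` and `_half_iteration` are ours, not the paper's); note that both half-iterations are the same map applied to the forward channels $\mathbf{H}_{l,k}$ and the reverse channels $\mathbf{H}_{k,l}^\dagger$, respectively:

```python
import numpy as np

def wsr(H, Sigma, w):
    """Weighted sum-rate, eqs. (2)-(3), in nats."""
    L, total = len(Sigma), 0.0
    for l in range(L):
        Omega = np.eye(H[l][l].shape[0], dtype=complex)
        for k in range(L):
            if k != l:
                Omega += H[l][k] @ Sigma[k] @ H[l][k].conj().T
        A = Omega + H[l][l] @ Sigma[l] @ H[l][l].conj().T
        total += w[l] * (np.log(np.linalg.det(A)) - np.log(np.linalg.det(Omega))).real
    return total

def _half_iteration(Hm, S, w, P_T):
    """Dual transformation (9)/(10) with the power normalization (11)/(12)."""
    L, D = len(S), []
    for l in range(L):
        Omega = np.eye(Hm[l][l].shape[0], dtype=complex)
        for k in range(L):
            if k != l:
                Omega += Hm[l][k] @ S[k] @ Hm[l][k].conj().T
        A = Omega + Hm[l][l] @ S[l] @ Hm[l][l].conj().T
        D.append(w[l] * (np.linalg.inv(Omega) - np.linalg.inv(A)))
    mu = sum(np.trace(Dl).real for Dl in D) / P_T   # eq. (11)/(12)
    return [Dl / mu for Dl in D]

def dual_link(H, w, P_T, n_iter=30):
    """Algorithm 1: alternate (9) and (10); returns covariances and the rate trace."""
    L = len(H)
    # channel from reverse transmitter k (at R_k) to reverse receiver l (at T_l)
    G = [[H[k][l].conj().T for k in range(L)] for l in range(L)]
    Sigma = [P_T / (L * H[l][l].shape[1]) * np.eye(H[l][l].shape[1], dtype=complex)
             for l in range(L)]
    rates = [wsr(H, Sigma, w)]
    for _ in range(n_iter):
        Sigma_hat = _half_iteration(H, Sigma, w, P_T)   # forward -> reverse, eq. (9)
        Sigma = _half_iteration(G, Sigma_hat, w, P_T)   # reverse -> forward, eq. (10)
        rates.append(wsr(H, Sigma, w))
    return Sigma, rates
```

On random channels, the returned rate sequence is non-decreasing, mirroring Theorem 2, and the total transmit power stays at $P_T$ by construction of the normalization.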
1) Equivalent Problem and the Lagrange Function:
The weighted sum-rate maximization problem (4) is equivalent to the following problem, obtained by treating the interference-plus-noise covariance matrices as additional variables with additional equality constraints:

$$\max_{\Sigma_{1:L},\, \Omega_{1:L}} \ \sum_{l=1}^{L} w_l\left(\log\left|\Omega_l + \mathbf{H}_{l,l}\Sigma_l\mathbf{H}_{l,l}^\dagger\right| - \log|\Omega_l|\right)$$
$$\text{s.t.} \quad \Sigma_l \succeq \mathbf{0}, \ \forall l, \qquad \sum_{l=1}^{L}\mathrm{Tr}(\Sigma_l) \le P_T, \qquad \Omega_l = \mathbf{I} + \sum_{k\neq l}\mathbf{H}_{l,k}\Sigma_k\mathbf{H}_{l,k}^\dagger, \ \forall l, \qquad (13)$$

which is still non-convex. Consider the Lagrangian of the above problem,

$$F(\Sigma, \Omega, \Lambda, \mu) = \sum_{l=1}^{L} w_l\left(\log\left|\Omega_l + \mathbf{H}_{l,l}\Sigma_l\mathbf{H}_{l,l}^\dagger\right| - \log|\Omega_l|\right) + \mu\left(P_T - \sum_{l=1}^{L}\mathrm{Tr}(\Sigma_l)\right) + \sum_{l=1}^{L}\mathrm{Tr}\left(\Lambda_l\left(\Omega_l - \mathbf{I} - \sum_{k\neq l}\mathbf{H}_{l,k}\Sigma_k\mathbf{H}_{l,k}^\dagger\right)\right),$$

where $\Sigma$ represents $\Sigma_{1:L}$; $\Omega$ represents $\Omega_{1:L}$; $\Lambda$ represents $\Lambda_{1:L}$; and the domain of $F$ is $\{\Sigma, \Omega, \Lambda, \mu \mid \Sigma_l \in \mathbb{H}_+^{L_{T_l}\times L_{T_l}}, \Omega_l \in \mathbb{H}_{++}^{L_{R_l}\times L_{R_l}}, \Lambda_l \in \mathbb{H}^{L_{R_l}\times L_{R_l}}, \mu \in \mathbb{R}_+, \forall l\}$. Here $\mathbb{H}^{n\times n}$, $\mathbb{H}_+^{n\times n}$, and $\mathbb{H}_{++}^{n\times n}$ are the sets of $n\times n$ Hermitian matrices, positive semidefinite matrices, and positive definite matrices, respectively. One can easily verify that the function $F$ is concave in $\Sigma$ and convex in $\Omega$. Furthermore, the gradients are given by

$$\nabla_{\Sigma_l} F = w_l\mathbf{H}_{l,l}^\dagger\left(\Omega_l + \mathbf{H}_{l,l}\Sigma_l\mathbf{H}_{l,l}^\dagger\right)^{-1}\mathbf{H}_{l,l} - \mu\mathbf{I} - \sum_{k\neq l}\mathbf{H}_{k,l}^\dagger\Lambda_k\mathbf{H}_{k,l},$$

$$\nabla_{\Omega_l} F = w_l\left(\left(\Omega_l + \mathbf{H}_{l,l}\Sigma_l\mathbf{H}_{l,l}^\dagger\right)^{-1} - \Omega_l^{-1}\right) + \Lambda_l.$$

Now suppose that we have a pair $(\Sigma, \Omega)$ such that

$$\sum_{l=1}^{L}\mathrm{Tr}(\Sigma_l) = P_T, \qquad \Omega_l = \mathbf{I} + \sum_{k\neq l}\mathbf{H}_{l,k}\Sigma_k\mathbf{H}_{l,k}^\dagger;$$

then

$$F(\Sigma, \Omega, \Lambda, \mu) = \sum_{l=1}^{L} w_l\left(\log\left|\Omega_l + \mathbf{H}_{l,l}\Sigma_l\mathbf{H}_{l,l}^\dagger\right| - \log|\Omega_l|\right),$$

which is the original weighted sum-rate function.
For notational simplicity, denote the weighted sum-rate function by $V(\Sigma)$, i.e.,

$$V(\Sigma) = \sum_{l=1}^{L} w_l\left(\log\left|\mathbf{I} + \sum_{k\neq l}\mathbf{H}_{l,k}\Sigma_k\mathbf{H}_{l,k}^\dagger + \mathbf{H}_{l,l}\Sigma_l\mathbf{H}_{l,l}^\dagger\right| - \log\left|\mathbf{I} + \sum_{k\neq l}\mathbf{H}_{l,k}\Sigma_k\mathbf{H}_{l,k}^\dagger\right|\right).$$
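The step from the rate form (2) to the two-log form used in $V(\Sigma)$ rests on the determinant identity $\log\left|\mathbf{I} + \mathbf{A}\Omega^{-1}\right| = \log|\Omega + \mathbf{A}| - \log|\Omega|$ for positive definite $\Omega$. A quick numerical check of this identity (illustrative only, with hypothetical dimensions):

```python
import numpy as np

def rate_form_a(Hll, Sigma_l, Omega):
    """log| I + H Sigma H^dag Omega^{-1} |, the form used in eq. (2)."""
    n = Omega.shape[0]
    A = Hll @ Sigma_l @ Hll.conj().T
    return np.log(np.linalg.det(np.eye(n) + A @ np.linalg.inv(Omega))).real

def rate_form_b(Hll, Sigma_l, Omega):
    """log| Omega + H Sigma H^dag | - log| Omega |, the form used in V(Sigma)."""
    A = Hll @ Sigma_l @ Hll.conj().T
    return (np.log(np.linalg.det(Omega + A)) - np.log(np.linalg.det(Omega))).real
```

The two forms agree to numerical precision for any positive definite $\Omega$ and positive semidefinite $\Sigma_l$, which is why the equivalent problem (13) can carry $\Omega_l$ as an explicit variable.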
2) Solution of the first-order condition:
Suppose that we want to solve the following system of equations in terms of $(\Sigma, \Omega)$ for given $(\Lambda, \mu)$:

$$\nabla_{\Sigma_l} F = \mathbf{0}, \qquad \nabla_{\Omega_l} F = \mathbf{0}.$$

Define

$$\hat{\Sigma}_l = \frac{1}{\mu}\Lambda_l, \qquad \hat{\Omega}_l = \mathbf{I} + \sum_{k\neq l}\mathbf{H}_{k,l}^\dagger\hat{\Sigma}_k\mathbf{H}_{k,l};$$

then the above system of equations becomes

$$\hat{\Sigma}_l = \frac{w_l}{\mu}\left(\Omega_l^{-1} - \left(\Omega_l + \mathbf{H}_{l,l}\Sigma_l\mathbf{H}_{l,l}^\dagger\right)^{-1}\right), \qquad (14)$$

$$\hat{\Omega}_l = \frac{w_l}{\mu}\mathbf{H}_{l,l}^\dagger\left(\Omega_l + \mathbf{H}_{l,l}\Sigma_l\mathbf{H}_{l,l}^\dagger\right)^{-1}\mathbf{H}_{l,l}. \qquad (15)$$

An explicit solution to this system of equations is given by

$$\Sigma_l = \frac{w_l}{\mu}\left(\hat{\Omega}_l^{-1} - \left(\hat{\Omega}_l + \mathbf{H}_{l,l}^\dagger\hat{\Sigma}_l\mathbf{H}_{l,l}\right)^{-1}\right), \qquad (16)$$

$$\Omega_l = \frac{w_l}{\mu}\mathbf{H}_{l,l}\left(\mathbf{H}_{l,l}^\dagger\hat{\Sigma}_l\mathbf{H}_{l,l} + \hat{\Omega}_l\right)^{-1}\mathbf{H}_{l,l}^\dagger. \qquad (17)$$

The detailed proof of this solution can be found in [2], [15].

Remark. Equations (16) and (17) are actually the first-order optimality conditions of (13)'s dual problem, which is equivalent to (7). Algorithm 1 uses (14) and (16) to update $\hat{\Sigma}_{1:L}$ and $\Sigma_{1:L}$. When it converges, equations (14)–(17) all hold, and the KKT conditions of problem (13) and its dual are all satisfied.

C. Convergence Results
We are ready to present the two main convergence results regarding Algorithm 1. Denote by $\Sigma^{(n)}$ the value of $\Sigma$ at the $n$-th iteration of Algorithm 1.

Theorem 2. The objective value, i.e., the weighted sum-rate, is monotonically increasing in Algorithm 1, i.e., $V(\Sigma^{(n)}) \le V(\Sigma^{(n+1)})$.

From the above theorem, the following conclusion is immediate.
Corollary 3. The sequence $V_n = V(\Sigma^{(n)})$ converges to some limit point $V_\infty$.

Proof: Since $V(\Sigma)$ is a continuous function and its domain $\{\Sigma \mid \Sigma_l \succeq \mathbf{0}, \forall l,\ \sum_{l=1}^{L}\mathrm{Tr}(\Sigma_l) \le P_T\}$ is a compact set, $V_n$ is bounded above. From Theorem 2, $\{V_n\}$ is a monotonically increasing sequence; therefore there exists a limit point $V_\infty$ such that $\lim_{n\to\infty} V_n = V_\infty$.

If we define a stationary point $\Sigma^\star$ of Algorithm 1 as one for which $\Sigma^{(n)} = \Sigma^\star$ implies $\Sigma^{(n+k)} = \Sigma^\star$ for all $k = 0, 1, \cdots$, then we have the following result.

Theorem 4. Algorithm 1 converges to a stationary point $\Sigma^\star$.

The above implies that both the weighted sum rate and the transmit signal covariance matrices converge. The proofs of Theorems 2 and 4 will be presented later in this section. Before that, we first establish a few inequalities and identify a particular scaling property of the Lagrangian $F$.
1) The first inequality:
Suppose that we have a feasible point $\Sigma^{(n)} \succeq \mathbf{0}$ with

$$\sum_{l=1}^{L}\mathrm{Tr}\left(\Sigma_l^{(n)}\right) = P_T. \qquad (18)$$

In Algorithm 1, we generate $\Omega_l^{(n)}$ such that

$$\Omega_l^{(n)} = \mathbf{I} + \sum_{k\neq l}\mathbf{H}_{l,k}\Sigma_k^{(n)}\mathbf{H}_{l,k}^\dagger. \qquad (19)$$

Now we have a pair $(\Sigma^{(n)}, \Omega^{(n)})$. Using this pair, we can compute $(\Lambda_{1:L}^{(n)}, \mu^{(n)})$ as

$$\Lambda_l^{(n)} = w_l\left(\left(\Omega_l^{(n)}\right)^{-1} - \left(\Omega_l^{(n)} + \mathbf{H}_{l,l}\Sigma_l^{(n)}\mathbf{H}_{l,l}^\dagger\right)^{-1}\right), \qquad \mu^{(n)} = \frac{1}{P_T}\sum_{l=1}^{L}\mathrm{Tr}\left(\Lambda_l^{(n)}\right).$$

Note that $\hat{\Sigma}_l^{(n)}$ in Algorithm 1 is equal to $\hat{\Sigma}_l^{(n)} = \Lambda_l^{(n)}/\mu^{(n)}$. From this and (14), the gradient of $F$ with respect to $\Omega$ at the point $(\Sigma^{(n)}, \Omega^{(n)})$ vanishes, i.e.,

$$\nabla_\Omega F\left(\Sigma^{(n)}, \Omega, \Lambda^{(n)}, \mu^{(n)}\right)\Big|_{\Omega^{(n)}} = \mathbf{0}.$$

Since $F$ is convex in $\Omega$, if we fix $\Sigma = \Sigma^{(n)}$, then $\Omega^{(n)}$ is a global minimizer of $F$. In other words,

$$F\left(\Sigma^{(n)}, \Omega^{(n)}, \Lambda^{(n)}, \mu^{(n)}\right) \le F\left(\Sigma^{(n)}, \Omega, \Lambda^{(n)}, \mu^{(n)}\right) \qquad (20)$$

for all $\Omega \succ \mathbf{0}$.
2) Scaling invariance of $F$: We now identify a remarkable scaling invariance property of $F$, which plays a key role in the convergence proof of Algorithm 1. For a given $(\Sigma^{(n)}, \Omega^{(n)}, \Lambda^{(n)}, \mu^{(n)})$, we have

$$F\left(\tfrac{1}{\alpha}\Sigma^{(n)}, \tfrac{1}{\alpha}\Omega^{(n)}, \alpha\Lambda^{(n)}, \alpha\mu^{(n)}\right) = F\left(\Sigma^{(n)}, \Omega^{(n)}, \Lambda^{(n)}, \mu^{(n)}\right) \qquad (21)$$

for all $\alpha > 0$. To show this scaling invariance property, note that

$$\Omega_l^{(n)} - \sum_{k\neq l}\mathbf{H}_{l,k}\Sigma_k^{(n)}\mathbf{H}_{l,k}^\dagger = \mathbf{I}, \qquad \sum_{l=1}^{L}\mathrm{Tr}\left(\Sigma_l^{(n)}\right) = P_T, \qquad P_T\,\mu^{(n)} = \sum_{l=1}^{L}\mathrm{Tr}\left(\Lambda_l^{(n)}\right).$$

Applying the above equalities and some algebraic manipulation, we have

$$F\left(\tfrac{1}{\alpha}\Sigma^{(n)}, \tfrac{1}{\alpha}\Omega^{(n)}, \alpha\Lambda^{(n)}, \alpha\mu^{(n)}\right) = \sum_{l=1}^{L} w_l\left(\log\left|\Omega_l^{(n)} + \mathbf{H}_{l,l}\Sigma_l^{(n)}\mathbf{H}_{l,l}^\dagger\right| - \log\left|\Omega_l^{(n)}\right|\right) + \alpha\mu^{(n)}\left(P_T - \tfrac{1}{\alpha}P_T\right) + \sum_{l=1}^{L}\mathrm{Tr}\left(\alpha\Lambda_l^{(n)}\left(\tfrac{1}{\alpha}\mathbf{I} - \mathbf{I}\right)\right)$$
$$= \sum_{l=1}^{L} w_l\left(\log\left|\Omega_l^{(n)} + \mathbf{H}_{l,l}\Sigma_l^{(n)}\mathbf{H}_{l,l}^\dagger\right| - \log\left|\Omega_l^{(n)}\right|\right) + (\alpha - 1)\mu^{(n)}P_T + (1 - \alpha)\sum_{l=1}^{L}\mathrm{Tr}\left(\Lambda_l^{(n)}\right)$$
$$= \sum_{l=1}^{L} w_l\left(\log\left|\Omega_l^{(n)} + \mathbf{H}_{l,l}\Sigma_l^{(n)}\mathbf{H}_{l,l}^\dagger\right| - \log\left|\Omega_l^{(n)}\right|\right) = F\left(\Sigma^{(n)}, \Omega^{(n)}, \Lambda^{(n)}, \mu^{(n)}\right),$$

where the first equality uses the fact that

$$\log\left|\tfrac{1}{\alpha}\left(\Omega_l^{(n)} + \mathbf{H}_{l,l}\Sigma_l^{(n)}\mathbf{H}_{l,l}^\dagger\right)\right| - \log\left|\tfrac{1}{\alpha}\Omega_l^{(n)}\right| = \log\left|\Omega_l^{(n)} + \mathbf{H}_{l,l}\Sigma_l^{(n)}\mathbf{H}_{l,l}^\dagger\right| - \log\left|\Omega_l^{(n)}\right|.$$

Furthermore,

$$\nabla_{\Omega_l} F\left(\tfrac{1}{\alpha}\Sigma^{(n)}, \Omega, \alpha\Lambda^{(n)}, \alpha\mu^{(n)}\right)\Big|_{\frac{1}{\alpha}\Omega^{(n)}} = w_l\left(\left(\tfrac{1}{\alpha}\Omega_l^{(n)} + \mathbf{H}_{l,l}\tfrac{1}{\alpha}\Sigma_l^{(n)}\mathbf{H}_{l,l}^\dagger\right)^{-1} - \left(\tfrac{1}{\alpha}\Omega_l^{(n)}\right)^{-1}\right) + \alpha\Lambda_l^{(n)} = \alpha\,\nabla_{\Omega_l} F\left(\Sigma^{(n)}, \Omega, \Lambda^{(n)}, \mu^{(n)}\right)\Big|_{\Omega^{(n)}} = \mathbf{0}, \quad \forall l. \qquad (22)$$

Therefore, $\tfrac{1}{\alpha}\Omega^{(n)}$ is a global minimizer of $F\left(\tfrac{1}{\alpha}\Sigma^{(n)}, \Omega, \alpha\Lambda^{(n)}, \alpha\mu^{(n)}\right)$, as $F$ is convex in $\Omega$.
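The scaling invariance (21) is easy to verify numerically. The sketch below is illustrative (our own helper names, hypothetical dimensions): it builds a random feasible $(\Sigma, \Omega)$ pair, forms $\Lambda$ and $\mu$ as in the proof, and checks that $F$ is unchanged under the scaling:

```python
import numpy as np

def F(H, Sigma, Omega, Lam, mu, w, P_T):
    """The Lagrangian F of the equivalent problem (13)."""
    L = len(Sigma)
    val = mu * (P_T - sum(np.trace(S).real for S in Sigma))
    for l in range(L):
        A = Omega[l] + H[l][l] @ Sigma[l] @ H[l][l].conj().T
        val += w[l] * (np.log(np.linalg.det(A)) - np.log(np.linalg.det(Omega[l]))).real
        C = Omega[l] - np.eye(Omega[l].shape[0])          # constraint residual
        for k in range(L):
            if k != l:
                C -= H[l][k] @ Sigma[k] @ H[l][k].conj().T
        val += np.trace(Lam[l] @ C).real
    return val

def multipliers(H, Sigma, Omega, w, P_T):
    """Lambda_l and mu constructed from a feasible (Sigma, Omega) pair, as in the proof."""
    Lam = [w[l] * (np.linalg.inv(Omega[l])
                   - np.linalg.inv(Omega[l] + H[l][l] @ Sigma[l] @ H[l][l].conj().T))
           for l in range(len(Sigma))]
    mu = sum(np.trace(Ll).real for Ll in Lam) / P_T
    return Lam, mu
```

The check relies only on the three equalities noted above (the $\Omega$ constraint, the full-power condition, and $P_T\mu = \sum_l \mathrm{Tr}(\Lambda_l)$); any $\alpha > 0$ leaves $F$ unchanged.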
3) The second and third inequalities:
Given $(\alpha\Lambda^{(n)}, \alpha\mu^{(n)})$, we generate $\tilde{\Sigma}, \tilde{\Omega}$ using equations (16) and (17). If we choose $\alpha$ so that

$$\sum_{l=1}^{L}\mathrm{Tr}(\tilde{\Sigma}_l) = P_T, \qquad (23)$$

then $\tilde{\Sigma} = \Sigma^{(n+1)}$ in Algorithm 1. Since $(\Sigma^{(n+1)}, \tilde{\Omega})$ is chosen to make the gradients zero,

$$\nabla_\Sigma F\left(\Sigma, \tilde{\Omega}, \alpha\Lambda^{(n)}, \alpha\mu^{(n)}\right)\Big|_{\Sigma^{(n+1)}} = \mathbf{0}, \qquad \nabla_\Omega F\left(\Sigma^{(n+1)}, \Omega, \alpha\Lambda^{(n)}, \alpha\mu^{(n)}\right)\Big|_{\tilde{\Omega}} = \mathbf{0},$$

we conclude that $\Sigma^{(n+1)}$ is a global maximizer, i.e.,

$$F\left(\Sigma, \tilde{\Omega}, \alpha\Lambda^{(n)}, \alpha\mu^{(n)}\right) \le F\left(\Sigma^{(n+1)}, \tilde{\Omega}, \alpha\Lambda^{(n)}, \alpha\mu^{(n)}\right) \qquad (24)$$

for all $\Sigma \succeq \mathbf{0}$; and $\tilde{\Omega}$ is a global minimizer, i.e.,

$$F\left(\Sigma^{(n+1)}, \tilde{\Omega}, \alpha\Lambda^{(n)}, \alpha\mu^{(n)}\right) \le F\left(\Sigma^{(n+1)}, \Omega, \alpha\Lambda^{(n)}, \alpha\mu^{(n)}\right) \qquad (25)$$

for all $\Omega \succ \mathbf{0}$.
4) Proof of Theorem 2:
With the three inequalities (20), (24), and (25) obtained above, we are ready to prove Theorem 2. Since in Algorithm 1

$$\Omega_l^{(n+1)} = \mathbf{I} + \sum_{k\neq l}\mathbf{H}_{l,k}\Sigma_k^{(n+1)}\mathbf{H}_{l,k}^\dagger, \qquad (26)$$

we have

$$V(\Sigma^{(n)}) = F\left(\Sigma^{(n)}, \Omega^{(n)}, \Lambda^{(n)}, \mu^{(n)}\right) \qquad (27)$$
$$= F\left(\tfrac{1}{\alpha}\Sigma^{(n)}, \tfrac{1}{\alpha}\Omega^{(n)}, \alpha\Lambda^{(n)}, \alpha\mu^{(n)}\right) \qquad (28)$$
$$\le F\left(\tfrac{1}{\alpha}\Sigma^{(n)}, \tilde{\Omega}, \alpha\Lambda^{(n)}, \alpha\mu^{(n)}\right) \qquad (29)$$
$$\le F\left(\Sigma^{(n+1)}, \tilde{\Omega}, \alpha\Lambda^{(n)}, \alpha\mu^{(n)}\right) \qquad (30)$$
$$\le F\left(\Sigma^{(n+1)}, \Omega^{(n+1)}, \alpha\Lambda^{(n)}, \alpha\mu^{(n)}\right) \qquad (31)$$
$$= V(\Sigma^{(n+1)}), \qquad (32)$$

where (27) follows from the satisfied constraints (18) and (19); (28) follows from the scaling invariance (21); (29) follows from convexity and the scaling invariance (20), (22); (30) follows from the second inequality (24); (31) follows from the third inequality (25); and (32) follows from the satisfied constraints (23) and (26).
5) Proof of Theorem 4:
We have shown in Corollary 3 that $V_n$ converges to a limit point under Algorithm 1. To show the convergence of the algorithm, it is enough to show that if $V(\Sigma^{(n)}) = V(\Sigma^{(n+1)})$, then $\Sigma^{(n+1)} = \Sigma^{(n+k)}$ for all $k = 1, 2, \cdots$. Suppose $V(\Sigma^{(n)}) = V(\Sigma^{(n+1)})$; then from the proof above, we have

$$F\left(\Sigma^{(n+1)}, \Omega^{(n+1)}, \alpha\Lambda^{(n)}, \alpha\mu^{(n)}\right) = F\left(\Sigma^{(n+1)}, \tilde{\Omega}, \alpha\Lambda^{(n)}, \alpha\mu^{(n)}\right).$$

Since $\tilde{\Omega}$ is a global minimizer, the above equality implies that $\Omega^{(n+1)}$ is a global minimizer too. From the first-order condition for optimality, we have

$$\nabla_{\Omega_l} F\left(\Sigma^{(n+1)}, \Omega, \alpha\Lambda^{(n)}, \alpha\mu^{(n)}\right)\Big|_{\Omega^{(n+1)}} = w_l\left(\left(\Omega_l^{(n+1)} + \mathbf{H}_{l,l}\Sigma_l^{(n+1)}\mathbf{H}_{l,l}^\dagger\right)^{-1} - \left(\Omega_l^{(n+1)}\right)^{-1}\right) + \alpha\Lambda_l^{(n)} = \mathbf{0}.$$

On the other hand, we generate $\Lambda^{(n+1)}$ such that

$$\Lambda_l^{(n+1)} = w_l\left(\left(\Omega_l^{(n+1)}\right)^{-1} - \left(\Omega_l^{(n+1)} + \mathbf{H}_{l,l}\Sigma_l^{(n+1)}\mathbf{H}_{l,l}^\dagger\right)^{-1}\right) = \alpha\Lambda_l^{(n)}.$$

This shows $\hat{\Sigma}^{(n+1)} \propto \hat{\Sigma}^{(n)}$. However, since the total trace of each equals $P_T$, we conclude that $\hat{\Sigma}^{(n+1)} = \hat{\Sigma}^{(n)}$. From this it is obvious that $\hat{\Sigma}^{(n)} = \hat{\Sigma}^{(n+1)} = \cdots$ and $\Sigma^{(n+1)} = \Sigma^{(n+2)} = \cdots$.

Remark. When the algorithm converges, the pair $(\Sigma, \Omega)$ satisfies the first-order optimality condition for $F$. Moreover, since $\sum_{l=1}^{L}\mathrm{Tr}(\Sigma_l) = P_T$ and $\Omega_l = \mathbf{I} + \sum_{k\neq l}\mathbf{H}_{l,k}\Sigma_k\mathbf{H}_{l,k}^\dagger, \forall l$, $(\alpha\Lambda, \alpha\mu)$ also satisfies the first-order optimality condition for $F$. This implies that $(\Sigma, \Omega, \alpha\Lambda, \alpha\mu)$ is a saddle point of $F$, which means it is indeed a primal-dual point that solves the KKT system of the weighted sum-rate maximization.

IV. EXTENSIONS
In this section, we extend the dual link algorithm to the cases of colored noise and a weighted sum power constraint. We also discuss the dual link algorithm's advantages in distributed implementation.
A. Colored Noise and Weighted Power Constraint
In practice, the noise, including thermal noise and interference from other non-cooperating networks, may not be white as assumed in (3). Due to nonuniform devices, we may use a weighted sum power constraint. The dual link algorithm can be easily adjusted to solve the weighted sum-rate maximization problem with colored noise and a weighted power constraint. The key idea is to use a proper pre- and post-whitening method to convert the problem back to the white noise and sum power constraint form, while keeping the reciprocity property of the forward and reverse link channels so that the original dual link algorithm can be readily applied.

We use the following equivalence to find the proper whitening [10]. Assume the noise covariance matrix of forward link $l$ is a positive definite matrix $\mathbf{W}_l \in \mathbb{C}^{L_{R_l}\times L_{R_l}}$ instead of $\mathbf{I}$. The weighted sum power constraint is $\sum_{l=1}^{L}\mathrm{Tr}(\Sigma_l\hat{\mathbf{W}}_l) \le P_T$, where $\hat{\mathbf{W}}_l \in \mathbb{C}^{L_{T_l}\times L_{T_l}}$ is a positive definite weight matrix, $\forall l$. The whitened received signal at receiver $R_l$ is a sufficient statistic,

$$\mathbf{y}'_l = \mathbf{W}_l^{-1/2}\mathbf{y}_l = \sum_{k=1}^{L}\left(\mathbf{W}_l^{-1/2}\mathbf{H}_{l,k}\hat{\mathbf{W}}_k^{-1/2}\right)\left(\hat{\mathbf{W}}_k^{1/2}\mathbf{x}_k\right) + \mathbf{W}_l^{-1/2}\mathbf{n}_l = \sum_{k=1}^{L}\mathbf{H}'_{l,k}\mathbf{x}'_k + \mathbf{n}'_l,$$

which is the same as the received signal of a network with equivalent channel state

$$\mathbf{H}'_{l,k} = \mathbf{W}_l^{-1/2}\mathbf{H}_{l,k}\hat{\mathbf{W}}_k^{-1/2} \qquad (33)$$

and equivalent transmit signal $\mathbf{x}'_k = \hat{\mathbf{W}}_k^{1/2}\mathbf{x}_k$. We refer to the left multiplication of $\mathbf{H}_{l,k}$ by $\mathbf{W}_l^{-1/2}$ as post-whitening and the right multiplication of $\mathbf{H}_{l,k}$ by $\hat{\mathbf{W}}_k^{-1/2}$ as pre-whitening. Note that $\Sigma_l = \mathrm{E}[\mathbf{x}_l\mathbf{x}_l^\dagger]$, $\Sigma'_l = \mathrm{E}[\mathbf{x}'_l(\mathbf{x}'_l)^\dagger] = \hat{\mathbf{W}}_l^{1/2}\Sigma_l\hat{\mathbf{W}}_l^{1/2}$, and $\mathrm{Tr}(\Sigma_l\hat{\mathbf{W}}_l) = \mathrm{Tr}(\hat{\mathbf{W}}_l^{1/2}\Sigma_l\hat{\mathbf{W}}_l^{1/2}) = \mathrm{Tr}(\Sigma'_l)$. Thus, the original network specified by

$$\left([\mathbf{H}_{l,k}],\ \sum_{l=1}^{L}\mathrm{Tr}(\Sigma_l\hat{\mathbf{W}}_l) \le P_T,\ [\mathbf{W}_l]\right)$$

is equivalent to the network specified by

$$\left(\left[\mathbf{W}_l^{-1/2}\mathbf{H}_{l,k}\hat{\mathbf{W}}_k^{-1/2}\right],\ \sum_{l=1}^{L}\mathrm{Tr}(\Sigma'_l) \le P_T,\ [\mathbf{I}]\right), \qquad (34)$$

which has white noise and a sum power constraint. Therefore, we can apply the dual link algorithm to the equivalent network (34) to find the optimal $\Sigma'_l$. The solution to the original network is then recovered as $\Sigma_l = \hat{\mathbf{W}}_l^{-1/2}\Sigma'_l\hat{\mathbf{W}}_l^{-1/2}$, because $\hat{\mathbf{W}}_l$ is non-singular.

B. Distributed Implementation
In practical applications, a distributed algorithm with low coordination overhead is highly desirable. It turns out that the dual link algorithm is ideally suited for distributed implementation.

A centralized implementation of the dual link algorithm incurs overhead. The algorithm needs global channel state information $H_{k,l}$, $\forall k,l$, as do other algorithms. The collection of global channel state information wastes bandwidth and incurs large delays for large networks. If the delay is too long, there will not be enough time left for actual data transmission before the channels change.

Fortunately, a distributed implementation of the dual link algorithm needs minimal local channel state information compared to other algorithms, especially in TDD networks. A node only needs to estimate the total received signal covariance and the desired link's channel state information. No channel state information of interfering links is needed, and the nodes do not need to exchange channel state information because non-local CSI is not needed. In a TDD network, assuming channel reciprocity, the virtual reverse link iteration, Step 8 of Algorithm 1, can be carried out in physical reverse links. Assume reverse link $l$ transmits pilot signals using a beamforming matrix $\hat{V}_l$, where $\hat{V}_l \hat{V}_l^\dagger = \hat{\Sigma}_l$. To calculate $\Sigma_l$, we need $\hat{\Omega}_l$ and the reverse link total received signal covariance $\hat{\Omega}_l + H_{l,l}^\dagger \hat{\Sigma}_l H_{l,l}$, which can be estimated at node $T_l$ because the channel has done the summation of signal, interference, and noise for us for free, no matter how large the network is. Using the pilot signal, $H_{l,l}^\dagger \hat{V}_l$ can be estimated, and $H_{l,l}^\dagger \hat{\Sigma}_l H_{l,l} = H_{l,l}^\dagger \hat{V}_l \hat{V}_l^\dagger H_{l,l}$ can be calculated and subtracted from $\hat{\Omega}_l + H_{l,l}^\dagger \hat{\Sigma}_l H_{l,l}$ to obtain $\hat{\Omega}_l$. All reverse links only need to share the scalar $\hat{\mu}$ in (10) to adjust the total power. The forward link calculation can be done similarly.
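The local estimation step described above can be sketched numerically. This is a minimal sketch, assuming numpy; the variable names mirror the notation in the text, and the random stand-in for $\hat{\Omega}_l$ is illustrative only — in a real network it would be the covariance the channel "sums" for free.

```python
import numpy as np

rng = np.random.default_rng(0)
N_t, N_r = 3, 4  # antennas at T_l and R_l (illustrative sizes)

def crandn(m, n):
    """Zero-mean circularly symmetric complex Gaussian entries, unit variance."""
    return (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2)

# Desired-link forward channel H_ll (N_r x N_t), known locally at T_l.
H_ll = crandn(N_r, N_t)

# Reverse-link pilot beamforming V_hat with Sigma_hat = V_hat V_hat^dagger.
V_hat = crandn(N_r, N_r)
Sigma_hat = V_hat @ V_hat.conj().T

# Stand-in for the true reverse-link interference-plus-noise covariance
# Omega_hat at node T_l (a random positive definite matrix for illustration).
A = crandn(N_t, N_t)
Omega_hat_true = A @ A.conj().T + np.eye(N_t)

# What T_l can measure over the air: the total received signal covariance
# Omega_hat + H_ll^dagger Sigma_hat H_ll.
R_total = Omega_hat_true + H_ll.conj().T @ Sigma_hat @ H_ll

# From the pilots, T_l estimates G = H_ll^dagger V_hat, then subtracts
# G G^dagger = H_ll^dagger Sigma_hat H_ll to recover Omega_hat locally.
G = H_ll.conj().T @ V_hat
Omega_hat_est = R_total - G @ G.conj().T

print(np.allclose(Omega_hat_est, Omega_hat_true))  # True
```

The point of the sketch is that no cross-channel $H_{l,k}$, $k \neq l$, ever appears: the node only combines its measured total covariance with the pilot-estimated desired-link quantity.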
By avoiding global CSI collection, the distributed dual link algorithm has significantly lower signaling overhead compared to other methods. The details of the distributed and scalable implementation and the accommodation of time varying channels will be presented in an upcoming paper [7].

For distributed implementation in the case of colored noise and weighted sum power constraint discussed in Section IV-A, to create the forward link $H'_{l,k}$ in (33), we can multiply the transmit signal by $\hat{W}_k^{-1/2}$ before transmission and multiply the received signal by $W_l^{-1/2}$, which pre- and post-whiten the channel, respectively. In reverse links, this can be achieved by multiplying the transmit signal by $W_l^{-1/2}$ before transmission and multiplying the received signal by $\hat{W}_k^{-1/2}$. The dual link algorithm assumes the reverse link channel noises are white. If the noise covariance matrix of reverse link $k$ of the TDD system is $\hat{N}_k$, we can estimate $\hat{W}_k^{-1/2} \hat{N}_k \hat{W}_k^{-1/2}$ and use it in place of $\mathbf{I}$ in the distributed algorithm.

V. SIMULATION RESULTS
In this section, we provide numerical examples to compare the proposed dual link algorithm with the PWF algorithm [10], the WMMSE algorithm [12], and the iterative water-filling algorithms for the sum capacity of MAC and BC channels [5], [16]. We consider a B-MAC network with data links among 10 transmitter-receiver pairs that fully interfere with each other. Each link has 3 transmit antennas and 4 receive antennas. For each simulation, the channel matrices are independently generated and fixed by one realization of $H_{l,k} = \sqrt{g_{l,k}}\, H^{(W)}_{l,k}$, $\forall k,l$, where $H^{(W)}_{l,k}$ has zero-mean i.i.d. circularly symmetric complex Gaussian entries with unit variance and $g_{l,k}$ is the average channel gain. The rate weights $w_l$ are uniformly chosen between 0.5 and 1. The total transmit power is $P_T = 100$.

Figure 2. The monotonic convergence of the forward and reverse link weighted sum-rates of the Dual Link algorithm with $P_T = 100$ and $g_{l,k} = 0$ dB, $\forall l,k$.

Figure 2 shows the convergence of the Dual Link algorithm for a network with $g_{l,k} = 0$ dB, $\forall l,k$. From the proof of Theorem 2, the weighted sum-rate of the forward links and that of the reverse links not only increase monotonically over iterations, but also increase over each other over half iterations. In Algorithm 1, the reverse link transmit signal covariance matrices $\hat{\Sigma}_l$ are updated in the first half of each iteration, and the forward link transmit signal covariance matrices $\Sigma_l$ are updated in the second half. From Figure 2, we see that the weighted sum-rates of the forward links and reverse links increase in turns until they converge to the same value, which also confirms that problem (4) and its dual problem (7) reach their stationary points at the same time.

Figure 3 shows a typical case of rate versus number of iterations under the weak interference setting ($g_{l,l} = 0$ dB and $g_{l,k} = -10$ dB for $l \neq k$).
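The simulation setup above can be sketched as follows. This is a minimal sketch, assuming numpy; the weighted sum-rate expression is the standard Gaussian-input rate with interference treated as noise, and `weighted_sum_rate` is an illustrative helper, not code from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
L, N_t, N_r = 10, 3, 4                   # links, transmit/receive antennas per link
g = np.full((L, L), 10 ** (-10 / 10))    # cross gains: -10 dB (weak interference)
np.fill_diagonal(g, 1.0)                 # direct gains: 0 dB
w = rng.uniform(0.5, 1.0, L)             # rate weights uniform in [0.5, 1]
P_T = 100.0                              # total transmit power

def crandn(*shape):
    """Zero-mean circularly symmetric complex Gaussian entries, unit variance."""
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

# One fixed channel realization: H[l, k] = sqrt(g[l, k]) * H_W with i.i.d. CN(0,1) entries.
H = np.array([[np.sqrt(g[l, k]) * crandn(N_r, N_t) for k in range(L)] for l in range(L)])

def weighted_sum_rate(Sigma):
    """Weighted sum-rate (bits/s/Hz), Gaussian signals, interference treated as noise."""
    total = 0.0
    for l in range(L):
        Omega = np.eye(N_r, dtype=complex)          # white noise covariance
        for k in range(L):
            if k != l:
                Omega += H[l, k] @ Sigma[k] @ H[l, k].conj().T
        S = H[l, l] @ Sigma[l] @ H[l, l].conj().T   # desired signal covariance
        total += w[l] * np.log2(np.linalg.det(np.eye(N_r) + np.linalg.solve(Omega, S)).real)
    return total

# A feasible starting point: equal power split, white transmit covariances.
Sigma0 = [P_T / (L * N_t) * np.eye(N_t, dtype=complex) for _ in range(L)]
print(weighted_sum_rate(Sigma0) > 0)  # True
```

Any of the compared algorithms can be viewed as iteratively updating the list `Sigma0` so that this objective increases; the plots report exactly this quantity per iteration.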
For a channel realization and a realization of the random initial point, the PWF algorithm converges slightly faster than the dual link algorithm. In contrast, the WMMSE algorithm's convergence is slower than the dual link algorithm, e.g., it takes eight iterations to achieve what the dual link algorithm achieves in four iterations, and more than ten additional iterations to reach some higher values of the weighted sum-rate. When the gain of the interfering channels is comparable to that of the desired channel, the difference in convergence speed between the PWF/dual link algorithms and the WMMSE algorithm is less than that of the weak interference case but can still be around five iterations for some high values of the weighted sum-rate. Under the strong interference setting, the PWF algorithm may oscillate and no longer converge, as shown in Figure 4. The dual link algorithm still converges slightly faster than the WMMSE algorithm.

Figure 3. PWF algorithm vs. WMMSE algorithm vs. dual link algorithm under weak interference with $P_T = 100$, $g_{l,l} = 0$ dB and $g_{l,k} = -10$ dB for $l \neq k$.

Figure 4. PWF algorithm vs. WMMSE algorithm vs. dual link algorithm under strong interference with $P_T = 100$, $g_{l,l} = 0$ dB and $g_{l,k} = 10$ dB for $l \neq k$.

Table I shows the average convergence speed of these three algorithms under different interference levels. The results are obtained by averaging over 1000 independent channel realizations under each interference setting. Under the strong interference setting ($g_{l,k} = 10$ dB), only the converged cases (837 out of 1000) are considered for the PWF algorithm. We can see that the dual link algorithm outperforms the WMMSE algorithm in all the settings. While the PWF algorithm is slightly ahead under the weak and moderate interference settings, it is slower than the dual link algorithm under the strong interference setting.

Table I. Average number of iterations needed to reach 90% and 95% of the local optimum for the PWF, WMMSE, and dual link algorithms. $P_T = 100$, $g_{l,l} = 0$ dB and $g_{l,k} = -10/0/10$ dB for $l \neq k$.

$g_{l,k}$ / Threshold | PWF    | Dual Link | WMMSE
-10 dB / 90%          | 1.199  | 1.653     | 2.521
0 dB / 90%            | 6.075  | 7.211     | 9.557
10 dB / 90%           | 5.217  | 4.745     | 5.847
-10 dB / 95%          | 2.140  | 2.781     | 5.067
0 dB / 95%            | 11.465 | 12.408    | 15.864
10 dB / 95%           | 9.358  | 6.837     | 10.549

Given the same initial point, these three algorithms may converge to different stationary points. Since the original weighted sum-rate maximization problem is non-convex, a stationary point is not necessarily a global maximum. In practical applications, one may run such algorithms multiple times starting from different initial points and choose the best.

Figure 5 compares the dual link algorithm with the iterative water-filling algorithm proposed in [5], [16] for the sum capacity of BC and MAC channels. The iterative water-filling algorithm of [5], [16] works because the sum rate can be written in a single-user rate form and because of the duality between MAC and BC channels. We consider a BC channel and its dual MAC channel with a sum power constraint. The BC channel contains 1 transmitter and 10 receivers, each equipped with 5 antennas. Entries of the channel matrices are generated from an i.i.d. zero-mean Gaussian distribution with unit variance. The noise covariance is an identity matrix. As shown in Figure 5, the dual link algorithm and the iterative water-filling algorithm all converge to the same sum capacity. The dual link algorithm converges significantly faster than the iterative water-filling algorithm.

Figure 5. Dual Link algorithm vs. Iterative water-filling on MAC and BC channels.

VI. COMPLEXITY ANALYSIS
We have numerically evaluated the convergence properties of the proposed new algorithm, the PWF algorithm, and the WMMSE algorithm in terms of the number of iterations. We now analyze the complexity per iteration of these algorithms.

Recall that $L$ is the number of users or links, and for simplicity, assume that each user has $N$ transmit and $N$ receive antennas, so the resulting $\Sigma_l$ (and $\hat{\Sigma}_l$) is an $N \times N$ matrix. Suppose that we use straightforward matrix multiplication and inversion. Then the complexity of these operations is $O(N^3)$. For the dual link algorithm, at each iteration, the calculation of $\Omega_l$ incurs a complexity of $O(LN^3)$ and the calculation of $\Omega_l + H_{l,l} \Sigma_l^{(n+1)} H_{l,l}^\dagger$ incurs a complexity of $O(LN^3)$. To obtain $\hat{\Sigma}_l$, we have to invert two $N \times N$ matrices, which incurs a complexity of $O(N^3)$. Therefore, the total complexity for calculating a $\hat{\Sigma}_l$ is $O(LN^3)$, and the complexity of generating all $\hat{\Sigma}$ is $O(L^2 N^3)$. As calculating $\Sigma$ incurs the same complexity as calculating $\hat{\Sigma}$, the complexity of the new algorithm is $O(L^2 N^3)$ per iteration.

The PWF algorithm uses the same calculation to generate $\Omega_l$ and incurs a complexity of $O(LN^3)$ for each $\Omega_l$. Then, it uses the singular value decomposition of $\Omega_l^{-1/2} H_{l,l} \hat{\Omega}_l^{-1/2}$, which incurs a complexity of $O(N^3)$. Since we need $L$ of these operations, the total complexity of the PWF algorithm is $O(L^2 N^3)$. For the WMMSE algorithm, it is shown in [12] that its complexity is $O(L^2 N^3)$.

So, all three algorithms have the same computational complexity per iteration if we use the $O(N^3)$ matrix multiplication. Recently, Williams [13] presented an $O(N^{2.373})$ matrix multiplication and inversion method. If we use this algorithm, then the new algorithm and the WMMSE algorithm have $O(L^2 N^{2.373})$ complexity, since the $N$ factor comes from the matrix multiplication and inversion. However, in addition to $L^2$ matrix multiplications and inversions, the PWF algorithm has $L$ singular value decompositions of $N \times N$ matrices. Therefore, the complexity of the PWF algorithm is $O(L^2 N^{2.373} + LN^3)$.

VII. CONCLUSION
We have proposed a new algorithm, the dual link algorithm, to solve the weighted sum-rate maximization problem in general interference channels. Based on the polite water-filling results and the rate duality [10], the proposed dual link algorithm updates the transmit signal covariance matrices in the forward and reverse links in a symmetric manner and has fast and guaranteed convergence. We have given a proof of its monotonic convergence, and the proof technique may be generalized to other problems. Numerical examples have demonstrated that the new algorithm has a convergence speed close to that of the fastest PWF algorithm, which, however, is not guaranteed to converge in all situations. The dual link algorithm is scalable and well suited for distributed implementation. It can also be easily modified to accommodate colored noise.

REFERENCES

[1] V. R. Cadambe and S. A. Jafar. Interference alignment and degrees of freedom of the K-user interference channel. IEEE Transactions on Information Theory, 54(8):3425–3441, August 2008.
[2] L. Chen and S. You. The weighted sum rate maximization in MIMO interference networks: The minimax Lagrangian duality and algorithm. IEEE Trans. Networking, submitted, September 2013.
[3] H. Huh, H. Papadopoulos, and G. Caire. MIMO broadcast channel optimization under general linear constraints. In Proc. IEEE Int. Symp. on Info. Theory (ISIT), 2009.
[4] H. Huh, H. C. Papadopoulos, and G. Caire. Multiuser MISO transmitter optimization for intercell interference mitigation. IEEE Transactions on Signal Processing, 58(8):4272–4285, August 2010.
[5] N. Jindal, W. Rhee, S. Vishwanath, S. A. Jafar, and A. Goldsmith. Sum power iterative water-filling for multi-antenna Gaussian broadcast channels. IEEE Trans. Info. Theory, 51(4):1570–1580, April 2005.
[6] Seung-Jun Kim and G. B. Giannakis. Optimal resource allocation for MIMO ad hoc cognitive radio networks. IEEE Trans. Info. Theory, 57(5):3117–3131, May 2011.
[7] Xing Li, Lijun Chen, Behrouz Touri, Xinming Huang, and Youjian Eugene Liu. Distributed algorithms for weighted sum-rate maximization in MIMO interference networks. To be submitted, 2016.
[8] An Liu, Youjian Liu, Haige Xiang, and Wu Luo. MIMO B-MAC interference network optimization under rate constraints by polite water-filling and duality. IEEE Trans. Signal Processing, 59(1):263–276, January 2011.
[9] An Liu, Youjian Liu, Haige Xiang, and Wu Luo. Polite water-filling for weighted sum-rate maximization in B-MAC networks under multiple linear constraints. IEEE Trans. Signal Processing, 60(2):834–847, February 2012.
[10] An Liu, Youjian Liu, Haige Xiang, and Wu Luo. Duality, polite water-filling, and optimization for MIMO B-MAC interference networks and iTree networks. IEEE Trans. Info. Theory, in revision, submitted April 2010.
[11] C. Shi, R. A. Berry, and M. L. Honig. Monotonic convergence of distributed interference pricing in wireless networks. In Proc. IEEE ISIT, Seoul, Korea, June 2009.
[12] Qingjiang Shi, M. Razaviyayn, Zhi-Quan Luo, and Chen He. An iteratively weighted MMSE approach to distributed sum-utility maximization for a MIMO interfering broadcast channel. IEEE Trans. Signal Processing, 59(9):4331–4340, September 2011.
[13] Virginia Vassilevska Williams. Multiplying matrices faster than Coppersmith-Winograd. In Proceedings of the 44th Symposium on Theory of Computing, pages 887–898. ACM, 2012.
[14] Seungil You, Lijun Chen, and Y. E. Liu. Convex-concave procedure for weighted sum-rate maximization in a MIMO interference network. Pages 4060–4065, December 2014.
[15] W. Yu. Uplink-downlink duality via minimax duality. IEEE Trans. Info. Theory, 52(2):361–374, 2006.
[16] W. Yu, W. Rhee, S. Boyd, and J. M. Cioffi. Iterative water-filling for Gaussian vector multiple-access channels. IEEE Trans. Info. Theory, 50(1):145–152, 2004.
[17] Wei Yu. Sum-capacity computation for the Gaussian vector broadcast channel via dual decomposition. IEEE Trans. Inform. Theory, 52(2):754–759, February 2006.
[18] Wei Yu. Multiuser water-filling in the presence of crosstalk. In Information Theory and Applications Workshop, San Diego, CA, USA, pages 414–420, January 29–February 2, 2007.
[19] Lan Zhang, Ying-Chang Liang, Yan Xin, Rui Zhang, and H. V. Poor. On Gaussian MIMO BC-MAC duality with multiple transmit covariance constraints. In Proc. IEEE Int. Symp. on Info. Theory (ISIT), pages 2502–2506, June 2009.
[20] Lan Zhang, Yan Xin, and Ying-Chang Liang. Weighted sum rate optimization for cognitive radio MIMO broadcast channels.