Message Passing in C-RAN: Joint User Activity and Signal Detection
Yuhao Chi, Lei Liu, Guanghui Song, Chau Yuen, Yong Liang Guan, Ying Li
Yuhao Chi∗, Lei Liu†‡, Guanghui Song§, Chau Yuen†, Yong Liang Guan‖, and Ying Li∗
∗State Key Lab of ISN, Xidian University, China; †Singapore University of Technology and Design, Singapore; ‡City University of Hong Kong, China; §Doshisha University, Kyoto, Japan; ‖Nanyang Technological University, Singapore
Abstract—In cloud radio access network (C-RAN), remote radio heads (RRHs) and users are uniformly distributed in a large area, such that the channel matrix can be considered sparse. Based on this phenomenon, RRHs only need to detect the relatively strong signals from nearby users and can ignore the weak signals from far users, which is helpful for developing low-complexity detection algorithms without causing much performance loss. However, before detection, RRHs are required to obtain real-time user activity information through the dynamic grant procedure, which causes enormous latency. To address this issue, in this paper we consider a grant-free C-RAN system and propose a low-complexity Bernoulli-Gaussian message passing (BGMP) algorithm based on the sparsified channel, which jointly detects the user activity and signals. Since active users are assumed to transmit Gaussian signals at any time, the user activity can be regarded as a Bernoulli variable and the signals from all users obey a Bernoulli-Gaussian distribution. In the BGMP, the detection functions for the signals are designed with respect to the Bernoulli-Gaussian variable. Numerical results demonstrate the robustness and effectiveness of the BGMP. That is, for different sparsified channels, the BGMP can approach the mean-square error (MSE) of the genie-aided sparse minimum mean-square error (GA-SMMSE) detector, which exactly knows the user activity information. Meanwhile, the fast convergence and strong recovery capability for user activity of the BGMP are also verified.
Index Terms—C-RAN, Bernoulli-Gaussian, message passing, user activity and signal detection.
I. INTRODUCTION
To support massive data demands in wireless communications, cloud radio access network (C-RAN) has emerged as a candidate for the next-generation network architecture, which can significantly improve spectral efficiency and energy efficiency [1]-[3]. Unlike traditional multiuser multiple-input multiple-output (MU-MIMO) systems, a C-RAN consists of hundreds of remote radio heads (RRHs) deployed over a large area and a pool of baseband units (BBUs) centralized in a data cloud center. All RRHs collect signals from users and forward them to the BBUs for signal recovery. In order to reliably recover signals with low complexity, a promising detection method is the message passing algorithm
This work was supported in part by the National Natural Science Foundation of China under Grant 61671345, in part by the Singapore A*STAR SERC Project under Grant 142 02 00043, in part by the Japan Society for the Promotion of Science through the Grant-in-Aid for Scientific Research (C) under Grant 16K06373, and in part by the Ministry of Education, Culture, Sports, Science and Technology through the Strategic Research Foundation at Private Universities (2014-2018) under Grant S1411030. The first author was also supported by the China Scholarship Council under Grant 201606960042.

based on a factor graph [4]-[7], which transforms the optimal cost function for signal recovery into iterative calculations among the nodes of the factor graph. For different networks, the message passing algorithm needs to be specially designed. In C-RAN, affected by path loss, the signals from far users are very weak when they arrive at the RRHs, which results in a nearly sparse channel. The authors in [8] proved that the C-RAN channel can be sparsified without causing much performance loss, where RRHs only need to detect the relatively strong signals from nearby users and ignore the weak signals from far users. The channel sparsification is helpful for developing low-complexity message passing algorithms. Nevertheless, due to the different statistical distributions of the channels, the Gaussian message passing (GMP) algorithms proposed for MU-MIMO [9]-[11] cannot be directly extended to the C-RAN. As a result, the authors in [12], [13] proposed a sparse message passing algorithm for the C-RAN with the sparsified channel [8]. However, in the above works [8]-[13], the receivers are assumed to exactly know the real-time user activity information and then detect the signals of the active users. In practice, the user activity information is obtained through a complicated grant procedure. When the number of users is large and the activity of each user changes at any time, the dynamic grant procedure causes enormous latency.
To address this issue, the authors in [14], [15] considered a grant-free C-RAN system and proposed a modified Bayesian compressive sensing algorithm and a hybrid generalized approximate message passing (GAMP) algorithm respectively, which estimate the channel state information and user activity. However, in [14], [15], the channel models do not take into account the geographical distributions of RRHs and users, and these algorithms do not consider signal recovery for active users.

In this paper, we consider joint user activity and signal detection over a grant-free C-RAN. Since the activity of each user changes at any time, the user activity can be regarded as a Bernoulli variable at the RRHs. Moreover, we assume that active users transmit Gaussian signals and that the transmissions of inactive users can be treated as zeros at the RRHs. Statistically, the signals from all users obey a Bernoulli-Gaussian distribution. Therefore, based on the sparsified channel and the corresponding factor graph, we propose a low-complexity Bernoulli-Gaussian message passing (BGMP) algorithm to jointly detect the user activity and signals. In the BGMP, the messages passing among nodes and the relevant update functions at the nodes are associated with the Bernoulli-Gaussian variable. Numerical results demonstrate the robustness and effectiveness of the BGMP. That is, for different sparsified channels, the BGMP can approach the mean-square error (MSE) of the genie-aided sparse minimum mean-square error (GA-SMMSE) detector, which exactly knows the user activity information. Moreover, the fast convergence and strong recovery capability for user activity of the BGMP are also verified.

II. SYSTEM MODEL

Fig. 1. Illustration of an uplink grant-free C-RAN system, where RRHs and users are uniformly located over a large coverage area.
Figure 1 shows an uplink grant-free C-RAN system with M RRHs and K users uniformly located over a large coverage area, where active users transmit signals at any time without the complicated grant procedure. Each RRH has N antennas and each user has one antenna. The signal y_m ∈ R^{N×1} received at the m-th RRH is

y_m = √P H_m x + z_m,  m = 1, ..., M,  (1)

where H_m ∈ R^{N×K} denotes the channel matrix from the K users to the m-th RRH, P is the transmit power allocated to each user, x ∈ R^{K×1} is the transmitted signal of the K users, and z_m ∈ R^{N×1} is a Gaussian noise vector obeying N(0, σ_n² I_N) with the N×N identity matrix I_N. The (n, k)-th entry h^m_{n,k} of H_m is modeled as γ^m_{n,k} d^{-α}_{m,k}, where γ^m_{n,k} is an independent and identically distributed (i.i.d.) fading coefficient obeying N(0, 1/K), d_{m,k} is the geographic distance between the k-th user and the m-th RRH, and α is the path loss exponent. Note that d^{-α}_{m,k} denotes the path loss from the k-th user to the m-th RRH. Here, we assume that the m-th RRH perfectly knows the channel state information H_m.

Due to the effect of path loss, the received signals from far users are drastically degraded, such that RRHs can ignore the detection of far users without causing much performance loss [8]. The channel sparsification can provide a sparse factor graph for developing low-complexity message passing algorithms [12], [13]. Therefore, as in [12], [13], we set a distance threshold d to sparsify the channel in Fig. 1. Specifically, the (n, k)-th entry ĥ^m_{n,k} of the sparsified channel matrix Ĥ_m is

ĥ^m_{n,k} = h^m_{n,k} if d_{m,k} < d, and 0 otherwise.

Then, Eq. (1) is rewritten as

y_m = √P Ĥ_m x + √P H̃_m x + z_m = √P Ĥ_m x + η_m,  (2)

where H̃_m = H_m − Ĥ_m and η_m = √P H̃_m x + z_m is an interference vector of length N.
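As an illustration of the sparsification step above, the following sketch builds a small random C-RAN channel and thresholds it by distance. All dimensions, the path loss exponent, and the threshold value here are illustrative assumptions, not the paper's simulation settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes and threshold (assumptions; the paper uses M=120, K=200, N=10).
M, N, K, alpha, d_thr = 4, 2, 6, 3.7, 0.5

# RRHs and users dropped uniformly on the unit square.
rrh_pos = rng.uniform(0.0, 1.0, (M, 2))
user_pos = rng.uniform(0.0, 1.0, (K, 2))
d = np.linalg.norm(rrh_pos[:, None, :] - user_pos[None, :, :], axis=2)  # (M, K)

# Full channel: i.i.d. N(0, 1/K) fading times path loss d_{m,k}^{-alpha}, per antenna.
fading = rng.normal(0.0, np.sqrt(1.0 / K), (M, N, K))
H = fading * d[:, None, :] ** (-alpha)

# Sparsified channel: keep only links whose distance is below the threshold.
H_hat = np.where(d[:, None, :] < d_thr, H, 0.0)

sparsity = np.count_nonzero(H_hat) / H_hat.size  # the paper's channel sparsity gamma
```

Every entry of `H_hat` is either the corresponding entry of `H` (nearby user) or exactly zero (far user), which is what makes the factor graph sparse.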
The variance of the n-th entry η^m_n of η_m is σ²_{mn} = P E[Σ_k |h̃^m_{n,k} x_k|²] + σ_n², where h̃^m_{n,k} is the (n, k)-th entry of H̃_m, x_k is the k-th entry of x, k = 1, ..., K, and n = 1, ..., N.

Note that in the grant-free C-RAN, the RRHs cannot obtain the user activity information in advance. Thus, the user activity can be regarded as a Bernoulli variable at the RRHs. Moreover, we assume that active users transmit Gaussian signals, and the transmissions of inactive users can be treated as zeros at the RRHs. Statistically, the entries of x obey a Bernoulli-Gaussian distribution. That is,

x_k = 0 with probability 1 − ρ, and x_k ∼ N(0, ρ^{-1}) with probability ρ,

where 0 < ρ < 1 is the probability of user activity and the power of x_k is normalized to 1. Our goal is to develop a low-complexity message passing algorithm based on the sparsified channel, which jointly detects the user activity and signals.

III. BERNOULLI-GAUSSIAN MESSAGE PASSING
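The Bernoulli-Gaussian model from Section II can be sanity-checked numerically by sampling; the sketch below (arbitrary seed and sample counts) confirms that the power of x_k is indeed normalized to 1.

```python
import numpy as np

rng = np.random.default_rng(1)
rho, K, trials = 0.1, 200, 20000  # illustrative activity probability and sizes

# lambda_k ~ Bernoulli(rho), g_k ~ N(0, 1/rho), x_k = lambda_k * g_k.
lam = (rng.random((trials, K)) < rho).astype(float)
g = rng.normal(0.0, np.sqrt(1.0 / rho), (trials, K))
x = lam * g

# E[x_k^2] = rho * (1/rho) = 1, i.e., unit power despite the sparse activity.
power = np.mean(x ** 2)
```

The scaling of g by 1/ρ is exactly what keeps the average transmit power independent of how many users happen to be active.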
Fig. 2. Factor graph of the grant-free C-RAN with the sparsified channel. There are M multiple-antenna RRHs, in which each RRH has N antennas denoted as sum nodes, and K single-antenna users denoted as variable nodes. The Bernoulli vector λ = [λ_1, ..., λ_K]^T and the Gaussian signal vector g = [g_1, ..., g_K]^T are denoted as Bernoulli and Gaussian nodes respectively.

To identify active users and recover their signals, we propose a Bernoulli-Gaussian message passing (BGMP) algorithm for the C-RAN with the sparsified channel. To simplify the analysis, the Bernoulli-Gaussian vector x is decomposed into the componentwise product of a Bernoulli vector λ obeying i.i.d. B(1, ρ) and a Gaussian vector g obeying i.i.d. N(0, ρ^{-1} I_K), i.e.,

x = λ ◦ g,

where λ and g are independent of each other and ◦ refers to element-wise multiplication. Thus, the recovery of x is transformed into the joint recovery of λ and g. Fig. 2 shows the factor graph of the C-RAN, where the antennas of all RRHs, the users, λ, and g are denoted as sum, variable, Bernoulli, and Gaussian nodes respectively.

Fig. 3. Bernoulli-Gaussian message updates between the n-th sum node of the m-th RRH and the k-th variable node, including Bernoulli node λ_k and Gaussian node g_k, m = 1, ..., M, n = 1, ..., N, and k = 1, ..., K. Messages along edges consist of the non-zero probability of λ_k and the mean and variance of g_k.

As with the signal detection in conventional message passing algorithms, such as the GMP algorithm [9] and belief propagation (BP) decoding of LDPC codes [16], in the proposed BGMP algorithm we decompose the global calculation based on the full channel matrix into many local calculations at the nodes of the factor graph. This is efficient in reducing the computational complexity.
Note that the messages in GMP or BP decoding relate to Gaussian or discrete signals. Different from GMP and BP decoding, the messages in the BGMP are associated with both Bernoulli and Gaussian signals. Fig. 3 illustrates the update processes of the messages between sum nodes and variable nodes in the BGMP algorithm. To be specific, we present the message updates at the sum and variable nodes as follows.

A. Bernoulli-Gaussian Message Update at Sum Node
For simplicity, we consider the detection of user k at the m-th RRH, where user k is near the m-th RRH, i.e., ĥ^m_{n,k} ≠ 0, n = 1, ..., N. Fig. 3(a) shows the update process of the messages passing from the n-th sum node of the m-th RRH to the k-th variable node. We rewrite Eq. (2) as

y_{mn} = ĥ^m_{n,k} λ_k g_k + Σ_{i∈K\k} ĥ^m_{n,i} λ_i g_i + η_{mn} = ĥ^m_{n,k} λ_k g_k + η̂_{mnk},

where η̂_{mnk} = Σ_{i∈K\k} ĥ^m_{n,i} λ_i g_i + η_{mn}, K = {1, ..., K}, and i ∈ K\k denotes that i ∈ K and i ≠ k. Due to the independent transmissions of all users, based on the central limit theorem, η̂_{mnk} can be regarded as a Gaussian variable with mean e_{mnk} and variance v_{mnk}. At the t-th iteration,

e_{mnk}(t) = E[η̂_{mnk}(t)] = Σ_{i∈K\k} ĥ^m_{n,i} p^m_{i→n}(t) e^m_{i→n}(t),  (3)

v_{mnk}(t) = Var[η̂_{mnk}(t)] = Σ_{i∈K\k} (ĥ^m_{n,i})² p^m_{i→n}(t) [v^m_{i→n}(t) + (1 − p^m_{i→n}(t)) e^m_{i→n}(t)²] + σ²_{mn},  (4)

where E[a] and Var[a] denote the expectation and variance of a variable a, e^m_{i→n}(t) and v^m_{i→n}(t) are the mean and variance of g_i, and p^m_{i→n}(t) is the non-zero probability of λ_i. These input messages associated with g_i and λ_i come from the i-th variable node. Based on these a priori inputs, the n-th sum node of the m-th RRH outputs the mean e^m_{n→k}(t) and variance v^m_{n→k}(t) for g_k, and the non-zero probability p^m_{n→k}(t) for λ_k, which are sent to the k-th variable node.
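The interference statistics of Eqs. (3)-(4) amount to a masked moment computation over the incoming messages; a minimal sketch with hypothetical message values (the function name and all numbers are illustrative):

```python
import numpy as np

def interference_stats(h_row, p, e, v, k, sigma2_mn):
    """Eqs. (3)-(4): mean/variance of the interference seen when detecting user k.

    h_row: sparsified channel coefficients (K,); p, e, v: incoming non-zero
    probabilities, means, and variances from the K variable nodes.
    """
    mask = np.ones_like(h_row, dtype=bool)
    mask[k] = False  # sum over i in K \ {k}
    e_mnk = np.sum(h_row[mask] * p[mask] * e[mask])
    # Second moment of a Bernoulli-Gaussian term is p*(v + e^2); subtracting the
    # squared mean p^2*e^2 leaves p*(v + (1 - p)*e^2) per user.
    v_mnk = np.sum(h_row[mask] ** 2 * p[mask]
                   * (v[mask] + (1.0 - p[mask]) * e[mask] ** 2)) + sigma2_mn
    return e_mnk, v_mnk

# Hypothetical values for three users; detect user k = 0.
e_mnk, v_mnk = interference_stats(np.array([1.0, 2.0, 1.0]),
                                  np.array([0.5, 1.0, 0.5]),
                                  np.array([1.0, 1.0, 2.0]),
                                  np.array([1.0, 1.0, 1.0]),
                                  0, 0.1)
```

Zero channel entries contribute nothing to either sum, which is why the per-node cost scales with the number of graph edges rather than with K.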
1) Gaussian message update for g_k ∼ N(e^m_{n→k}(t), v^m_{n→k}(t)):

e^m_{n→k}(t) = E[g_k | y_{mn}, η̂_{mnk}, λ_k = 1] = (ĥ^m_{n,k})^{-1} (y_{mn} − e_{mnk}(t)),  (5)

v^m_{n→k}(t) = Var[g_k | y_{mn}, η̂_{mnk}, λ_k = 1] = (ĥ^m_{n,k})^{-2} v_{mnk}(t),  (6)

where E[a|b] and Var[a|b] denote the conditional expectation and variance of variable a given variable b, and Eq. (5) and Eq. (6) are derived from the fact that λ_k and g_k are independent of each other. Let the initial mean vector e^m_n(0) = [e^m_{1→n}(0), ..., e^m_{K→n}(0)]^T and variance vector v^m_n(0) = [v^m_{1→n}(0), ..., v^m_{K→n}(0)]^T be 0 and +∞ respectively, where 0 and +∞ denote the vector forms of 0 and +∞.
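Combining the Gaussian outputs of Eqs. (5)-(6) above with the Bernoulli likelihood ratio of Eq. (7) in the next subsection, the whole sum-node output can be sketched as follows (function names and input values are illustrative assumptions):

```python
import numpy as np

def gauss_pdf(y, mean, var):
    # f(y, a, b): Gaussian PDF of y with mean a and variance b.
    return np.exp(-(y - mean) ** 2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

def sum_node_output(y_mn, h, e_mnk, v_mnk, e_in, v_in):
    """Outputs of sum node (m, n) toward user k.

    Eqs. (5)-(6) invert the scalar observation given lambda_k = 1; Eq. (7)
    compares the likelihoods of lambda_k = 0 and lambda_k = 1.
    """
    e_out = (y_mn - e_mnk) / h          # Eq. (5)
    v_out = v_mnk / h ** 2              # Eq. (6)
    f0 = gauss_pdf(y_mn, e_mnk, v_mnk)                              # lambda_k = 0
    f1 = gauss_pdf(y_mn, h * e_in + e_mnk, h ** 2 * v_in + v_mnk)   # lambda_k = 1
    p_out = 1.0 / (1.0 + f0 / f1)       # Eq. (7)
    return e_out, v_out, p_out

# Hypothetical scalar inputs: observation 2.0, channel gain 2.0, no interference
# mean, unit interference variance, incoming message (e_in, v_in) = (1, 0).
e_out, v_out, p_out = sum_node_output(2.0, 2.0, 0.0, 1.0, 1.0, 0.0)
```

With these inputs the observation is far more likely under the active hypothesis, so `p_out` comes out well above one half.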
2) Bernoulli message update for λ_k:

p^m_{n→k}(t) = [1 + P(y_{mn} | λ_k = 0, η̂_{mnk}) / P(y_{mn} | λ_k = 1, η̂_{mnk})]^{-1}
= 1 / (1 + f(y_{mn}, e_{mnk}(t), v_{mnk}(t)) / f(y_{mn}, ĥ^m_{n,k} e^m_{k→n}(t) + e_{mnk}(t), (ĥ^m_{n,k})² v^m_{k→n}(t) + v_{mnk}(t))),  (7)

where f(y, a, b) is the Gaussian probability density function (PDF) of variable y with mean a and variance b. Let the initial non-zero probability vector p^m_n(0) = [p^m_{1→n}(0), ..., p^m_{K→n}(0)]^T be 0.5 × 1, where 1 denotes the all-ones vector.

B. Bernoulli-Gaussian Message Update at Variable Node
As shown in Fig. 3(b), we present the update process of the messages passing from the k-th variable node to the n-th sum node of the m-th RRH. First, let ē = [ē_1, ..., ē_K]^T and v̄ = [v̄_1, ..., v̄_K]^T be the a priori mean and variance of g, and p̄ = [p̄_1, ..., p̄_K]^T be the a priori non-zero probability of λ. We assume that ē_k = 0, v̄_k = ρ^{-1}, and p̄_k = ρ, k ∈ K. Then, based on the estimated messages from all sum nodes, at the (t+1)-th iteration, the k-th variable node outputs the mean e^m_{k→n}(t+1) and variance v^m_{k→n}(t+1) for g_k, and the non-zero probability p^m_{k→n}(t+1) for λ_k, which are sent to the n-th sum node of the m-th RRH.
1) Gaussian message update for g_k ∼ N(e^m_{k→n}(t+1), v^m_{k→n}(t+1)): According to the update rules of Gaussian messages [9]-[11], the PDF of the output Gaussian message of a variable node is the normalized product of the PDFs of the input Gaussian messages. Therefore, we obtain

v^m_{k→n}(t+1) = Var[g_k | V_k\{m,n}(t), v̄_k] = [v̄_k^{-1} + Σ_{i∈M\m} Σ_{j∈D} v^i_{j→k}(t)^{-1} + Σ_{d∈D\n} v^m_{d→k}(t)^{-1}]^{-1},  (8)

e^m_{k→n}(t+1) = E[g_k | V_k\{m,n}(t), E_k\{m,n}(t), v̄_k, ē_k] = v^m_{k→n}(t+1) [ē_k/v̄_k + Σ_{i∈M\m} Σ_{j∈D} e^i_{j→k}(t)/v^i_{j→k}(t) + Σ_{d∈D\n} e^m_{d→k}(t)/v^m_{d→k}(t)],  (9)

where V_k(t) = [v_{ij→k}(t)]_{MN×1} and E_k(t) = [e_{ij→k}(t)]_{MN×1} denote the variance and mean vectors associated with g_k from all sum nodes, i ∈ M, j ∈ D, M = {1, ..., M}, D = {1, ..., N}, and \{m,n} denotes that j ≠ n if and only if i = m.
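Eqs. (8)-(9) are the standard product-of-Gaussians rule: precisions (inverse variances) add, and means combine precision-weighted. A sketch with hypothetical prior and message values:

```python
import numpy as np

def variable_node_gaussian(e_bar, v_bar, e_in, v_in, exclude):
    """Eqs. (8)-(9): combine the prior N(e_bar, v_bar) with all incoming Gaussian
    messages except the one on the target edge `exclude` (extrinsic update)."""
    keep = np.ones(len(e_in), dtype=bool)
    keep[exclude] = False
    v_out = 1.0 / (1.0 / v_bar + np.sum(1.0 / v_in[keep]))             # Eq. (8)
    e_out = v_out * (e_bar / v_bar + np.sum(e_in[keep] / v_in[keep]))  # Eq. (9)
    return e_out, v_out

# Prior N(0, 1/rho) with rho = 0.1 (assumed); two incoming messages; exclude edge 1.
e_out, v_out = variable_node_gaussian(0.0, 10.0,
                                      np.array([1.0, 3.0]),
                                      np.array([1.0, 1.0]), 1)
```

Excluding the target edge keeps the message extrinsic, which is what prevents a node's own output from being fed straight back to it.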
2) Bernoulli message update for λ_k: By combining the non-zero probabilities P_k(t) = [p_{ij→k}(t)]_{MN×1} associated with λ_k from all sum nodes, where i ∈ M and j ∈ D, we obtain

p^m_{k→n}(t+1) = [1 + P(λ_k = 0 | P_k\{m,n}(t), p̄_k) / P(λ_k = 1 | P_k\{m,n}(t), p̄_k)]^{-1}
= 1 / (1 + (1 − p̄_k) [Π_{i∈M\m} Π_{j∈D} (1 − p_{ij→k}(t))] Π_{d∈D\n} (1 − p^m_{d→k}(t)) / (p̄_k [Π_{i∈M\m} Π_{j∈D} p_{ij→k}(t)] Π_{d∈D\n} p^m_{d→k}(t))).  (10)

Since the large number of probability multiplications in Eq. (10) easily causes numerical underflow in the simulations, we transform the probability calculations into log-likelihood ratio (LLR) calculations using the function L(p) = log(p/(1−p)) = −log(p^{-1} − 1). Specifically, we denote the LLR forms of p̄_k and p^m_{k→n}(t) as ℓ̄_k = L(p̄_k) and ℓ^m_{k→n}(t) = L(p^m_{k→n}(t)) respectively. Then, Eq. (10) is transformed into

ℓ^m_{k→n}(t+1) = ℓ̄_k + Σ_{i∈M\m} Σ_{j∈D} ℓ^i_{j→k}(t) + Σ_{d∈D\n} ℓ^m_{d→k}(t),  (11)

where the initialization L^m_n(0) = L(p^m_n(0)) is 0. Correspondingly, the input message p^m_{k→n}(t) in Eq. (3) and Eq. (4) is equal to [tanh(ℓ^m_{k→n}(t)/2) + 1]/2.

C. Decision and Output of BGMP
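Working in the LLR domain turns the products of Eq. (10) into the sums of Eq. (11), and the map p = [tanh(ℓ/2) + 1]/2 recovers probabilities; the same quantities drive the decision stage. A sketch with hypothetical edge messages and an assumed prior ρ = 0.1:

```python
import numpy as np

def llr(p):
    # L(p) = log(p / (1 - p)) = -log(p^{-1} - 1)
    return np.log(p / (1.0 - p))

def llr_to_prob(l):
    # Inverse map: p = [tanh(l / 2) + 1] / 2
    return (np.tanh(l / 2.0) + 1.0) / 2.0

l_prior = llr(0.1)                       # prior activity rho = 0.1 (assumed)
l_in = llr(np.array([0.9, 0.8, 0.7]))    # hypothetical incoming edge LLRs
l_out = l_prior + np.sum(l_in[:-1])      # Eq. (11): all edges except the target
p_out = llr_to_prob(l_out)

active = l_out > 0                       # decision: lambda_k = 1 iff the LLR is positive
```

Since ln(1/9) + ln(9) + ln(4) = ln(4), the combined probability here is exactly 0.8, illustrating that summing LLRs reproduces the probability product without ever forming tiny intermediate values.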
The BGMP algorithm is performed iteratively between the sum nodes and the variable nodes, where Eq. (5)-Eq. (7) are the update functions for the messages at the sum nodes, and Eq. (8)-Eq. (11) are the update functions for the messages at the variable nodes. The iterative process stops when the preset maximum number of iterations is reached or the MSE requirement is satisfied. According to the message passing rules [9], [16], the decision depends on the full messages at the Gaussian and Bernoulli nodes, which combine the a priori messages and the input messages from the sum nodes. The full messages of the mean and variance of g are denoted as ẽ and ṽ respectively, and those of the non-zero probability and the corresponding LLR of λ are denoted as P̃ and ℓ̃ respectively. The k-th entries of ẽ, ṽ, P̃, and ℓ̃, k ∈ K, are

ṽ_k = [v̄_k^{-1} + Σ_{i∈M} Σ_{j∈D} v_{ij→k}(t)^{-1}]^{-1},  (12)

ẽ_k = ṽ_k [ē_k v̄_k^{-1} + Σ_{i∈M} Σ_{j∈D} e_{ij→k}(t) v_{ij→k}(t)^{-1}],  (13)

ℓ̃_k = ℓ̄_k + Σ_{i∈M} Σ_{j∈D} ℓ_{ij→k}(t),  (14)

p̃_k = [tanh(ℓ̃_k/2) + 1]/2.  (15)

Based on Eq. (12)-Eq. (15), the k-th entry of the final estimate λ̃ of λ is λ̃_k = 1 when ℓ̃_k > 0 and λ̃_k = 0 when ℓ̃_k ≤ 0, and the final estimate x̃ of x is x̃ = λ̃ ◦ P̃ ◦ ẽ.

D. Complete BGMP Algorithm
We now present the complete BGMP algorithm. Assume i ∈ M, j ∈ D, and k ∈ K. Let Ĥ = [Ĥ_1, ..., Ĥ_M]^T = [ĥ_{i,k}]_{MN×K} be the whole sparsified matrix. We define J(i) as the set of neighbors of the i-th node, i.e., there is an edge connecting the i-th node and the d-th node, d ∈ J(i), if and only if ĥ_{i,d} ≠ 0. Moreover, let y = [y_{ij}]_{MN×1}, E_η(t) = [e_{mnk}(t)]_{MN×K}, V_η(t) = [v_{mnk}(t)]_{MN×K}, σ²_η = [σ²_{mn}]_{MN×1}, E_s(t) = [e_{ij→k}(t)]_{MN×K}, V_s(t) = [v_{ij→k}(t)]_{MN×K}, P_s(t) = [p_{ij→k}(t)]_{MN×K}, L_s(t) = [ℓ_{ij→k}(t)]_{MN×K}, E_v(t) = [e_{ik→j}(t)]_{K×MN}, V_v(t) = [v_{ik→j}(t)]_{K×MN}, P_v(t) = [p_{ik→j}(t)]_{K×MN}, and L_v(t) = [ℓ_{ik→j}(t)]_{K×MN}. The output of the function sign(a) is equal to 1 when a > 0 and 0 when a ≤ 0. The complete BGMP algorithm is given in Algorithm 1.

IV. NUMERICAL RESULTS
In this section, we investigate the performance of the proposed BGMP algorithm over the grant-free C-RAN system. We assume that the RRHs and users are uniformly located over a square network whose side length is km, with path loss exponent α. The numbers of RRHs and users are M = 120 and K = 200, where each RRH has N = 10 antennas and the probability of user activity is ρ = 0. . The maximum number of iterations of the BGMP algorithm is τ_max = 50. The average receive signal-to-noise ratio (RSNR) is P E[Σ_{m∈M} ||H_m||²] / (MN σ_n²). We measure the performance in terms of the average MSE and the user state error (USE), i.e., MSE = (1/K) E[||x − x̃||²] and USE = (1/K) E[||λ − λ̃||²].

A. Benchmark Detections
To evaluate the recovery accuracy of the BGMP algorithm, we present three benchmark detectors: genie-aided minimum mean-square error (GA-MMSE), genie-aided sparse MMSE (GA-SMMSE), and general SMMSE, where "genie-aided" denotes that the detector knows the non-zero locations of x in advance, and "sparse" denotes that the detector exploits the sparsified matrix Ĥ instead of the original matrix H. The estimates of these detectors are

x_{GA-MMSE, \{0\}} = (H^T_{\{0\}} H_{\{0\}} + σ_n² ρ I)^{-1} H^T_{\{0\}} y,

x_{GA-SMMSE, \{0\}} = (Ĥ^T_{\{0\}} Ĥ_{\{0\}} + ρ σ²_{η,\{0\}} I)^{-1} Ĥ^T_{\{0\}} y,

x_{SMMSE} = (Ĥ^T Ĥ + ρ σ²_η I)^{-1} Ĥ^T y,

where \{0\} denotes that the entries corresponding to the zero locations of x are excluded. Since the GA-MMSE and GA-SMMSE exactly know the non-zero locations of x, they provide the ideal limit and the lower-bound performance respectively. In contrast, the SMMSE only makes use of Ĥ, so it provides the upper-bound performance.

Algorithm 1 Bernoulli-Gaussian Message Passing (BGMP)
Input: y, Ĥ, σ²_η, ē, v̄, p̄, and ρ ∈ (0, 1).
Initialization: t = 0, E_v(0) = 0, V_v(0) = +∞, and L_v(0) = 0.
Repeat: set t ⇐ t + 1,
  for i = 1, ..., MN, k ∈ J(i), do: P^v_{k,i}(t) = [tanh(L^v_{k,i}(t)/2) + 1]/2, end
  for i = 1, ..., MN, k ∈ J(i), do: compute the interference statistics E^η_{i,k}(t) and V^η_{i,k}(t) by Eqs. (3)-(4), and the sum-node outputs E^s_{i,k}(t), V^s_{i,k}(t), and L^s_{i,k}(t) by Eqs. (5)-(7), end
  for k = 1, ..., K, i ∈ J(k), do: compute the variable-node outputs V^v_{k,i}(t+1), E^v_{k,i}(t+1), and L^v_{k,i}(t+1) by Eqs. (8), (9), and (11), end
Until: the stopping criteria are satisfied.
for k = 1, ..., K, do: compute the full messages ṽ_k, ẽ_k, ℓ̃_k, and p̃_k by Eqs. (12)-(15), end
Output: λ̃ = sign(ℓ̃) and x̃ = λ̃ ◦ P̃ ◦ ẽ.

B. Complexity Comparison
Note that the realizable SMMSE detector has a computational complexity of O(K³ + K²MN + KMN). In contrast, we discuss the computational complexity of the BGMP. Defining the channel sparsity γ of Ĥ as the ratio of the number of non-zero entries to the number of all entries, the number of edges in the factor graph is γMNK. In each iteration, the message update along one edge requires only a few multiplication/division and exponent/logarithm operations. Therefore, the computational complexity of the proposed BGMP is O(γMNKτ_max). As γ decreases, the BGMP can achieve very low complexity.

C. MSE Performance Comparison
In Fig. 4, we give the MSEs of the proposed BGMP, GA-MMSE, GA-SMMSE, and SMMSE over the C-RAN system, where the distance threshold is d = 3. km and the channel sparsity is γ = 0. . Note that the MSE curve of the BGMP is very close to those of the GA-MMSE and GA-SMMSE over the entire range of RSNRs, and the gap between the MSE curves of the BGMP and GA-SMMSE is small at high RSNRs. Compared with the SMMSE, the BGMP has a clear performance gain. Moreover, we also present the MSEs of GAMP [6] and basis pursuit de-noising (BPDN) [17]. It is noticed that the GAMP still achieves a high MSE even when the RSNR is high, and the MSE of BPDN merely converges to that of the SMMSE, where τ_max for GAMP and BPDN is also 50.

D. BGMP Convergence and USE Performance
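The MSE and USE metrics used throughout this section can be evaluated as in the following sketch (the vectors are hypothetical toy values, not simulation outputs):

```python
import numpy as np

# MSE = (1/K) E[||x - x_tilde||^2]; USE = (1/K) E[||lambda - lambda_tilde||^2].
x = np.array([0.0, 1.2, 0.0, -0.7])
x_tilde = np.array([0.0, 1.0, 0.1, -0.7])
lam = np.array([0.0, 1.0, 0.0, 1.0])
lam_tilde = np.array([0.0, 1.0, 1.0, 1.0])

K = len(x)
mse = np.sum((x - x_tilde) ** 2) / K       # (0.2^2 + 0.1^2) / 4
use = np.sum((lam - lam_tilde) ** 2) / K   # one user-state error out of four
```

Since λ and λ̃ are 0/1 vectors, the USE is simply the fraction of users whose activity state is decided incorrectly.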
Fig. 5 shows the USEs of the BGMP for different RSNRs and iterations, where the simulation conditions are the same as in Section IV-C. Note that for each RSNR, the BGMP takes only a few iterations to converge, which illustrates the fast convergence of the BGMP. Additionally, as the RSNR increases, the USE of the BGMP becomes very low.

E. Effect of Channel Sparsity
To investigate the robustness of the BGMP, in Fig. 6 we provide the MSEs of the BGMP, GA-MMSE, and GA-SMMSE over the C-RAN with different channel sparsities and RSNRs. By directly changing the value of d, γ changes from a small value near 0 to 1 accordingly. Fig. 6 shows that for each RSNR, the MSE of the BGMP is close to that of the GA-SMMSE over the entire range of γ. In addition, as the RSNR increases, the gaps between the MSE curves of the GA-MMSE and GA-SMMSE become large. The reason is that the eligible d increases with the RSNR [8], which results in an increase of the eligible γ.

Moreover, we present the USEs of the BGMP with different γ and RSNRs. Fig. 7 illustrates that for a given RSNR, γ almost does not affect the USE performance of the BGMP. Furthermore, in the random network, the distance between each user and each RRH is different, so the variances of the entries of Ĥ are different. As a result, the entries of Ĥ are independent but differently distributed. Fig. 6 and Fig. 7 further verify that the BGMP is robust to the statistical distribution of the channel.

V. CONCLUSION
In this paper, we proposed a low-complexity Bernoulli-Gaussian message passing (BGMP) algorithm for the grant-free C-RAN system. Based on the sparsified channel, the BGMP can jointly detect the user activity and signals with low complexity. Numerical results showed that, for different sparsified channels, the BGMP takes only a few iterations to approach the MSE of the GA-SMMSE while achieving low USEs. In future work, we will provide a convergence analysis of the BGMP.

Fig. 4. MSE curves of the proposed BGMP, GA-MMSE, GA-SMMSE, SMMSE, GAMP [6], and BPDN [17]. The GA-MMSE, GA-SMMSE, and SMMSE provide the limit, lower-bound, and upper-bound performances respectively. The MSE of the proposed BGMP approaches that of the GA-SMMSE at the available RSNRs, with only a small loss at high RSNRs.

Fig. 5. USE curves of the proposed BGMP with different RSNRs and iterations. For each RSNR, the BGMP takes only a few iterations to converge.

Fig. 6. MSE curves of the proposed BGMP, GA-MMSE (limit), and GA-SMMSE (lower bound) over the sparsified C-RAN with different channel sparsities γ and RSNRs (gaps ≈ 1×10^-1, 3.1×10^-3, 2.4×10^-4, and 4.4×10^-6 at RSNR = 15, 35, 45, and 65 dB respectively). For each RSNR, the MSE curves of the proposed BGMP approach that of the GA-SMMSE over the entire range of γ.

Fig. 7. USE curves of the proposed BGMP with different channel sparsities γ and RSNRs. For each RSNR, the USE performance of the BGMP is robust to γ, which changes from a value near 0 to 1.

REFERENCES
[1] "C-RAN: The road towards green RAN," China Mobile Res. Inst., Beijing, China, White Paper, ver. 2.5, Oct. 2011.
[2] C. Fan, Y. Zhang, and X. Yuan, "Advances and challenges toward a scalable cloud radio access network," IEEE Commun. Mag., vol. 54, no. 6, pp. 29-35, Jun. 2016.
[3] J. Zuo, J. Zhang, C. Yuen, W. Jiang, and W. Luo, "Energy efficient user association for cloud radio access networks," IEEE Access, vol. 4, pp. 2429-2438, May 2016.
[4] H.-A. Loeliger, J. Dauwels, J. Hu, S. Korl, L. Ping, and F. R. Kschischang, "The factor graph approach to model-based signal processing," Proc. IEEE, vol. 95, no. 6, pp. 1295-1322, Jun. 2007.
[5] D. L. Donoho, A. Maleki, and A. Montanari, "Message passing algorithms for compressed sensing," Proc. Nat. Acad. Sci., 2009.
[6] S. Rangan, "Generalized approximate message passing for estimation with random linear mixing," in Proc. IEEE ISIT, Aug. 2011.
[7] C. Huang, L. Liu, C. Yuen, and S. Sun, "A LSE and sparse message passing-based channel estimation for mmWave MIMO systems," in Proc. IEEE GLOBECOM Workshops, Dec. 2016.
[8] C. Fan, Y. Zhang, and X. Yuan, "Dynamic nested clustering for parallel PHY-layer processing in cloud-RANs," IEEE Trans. Wireless Commun., vol. 15, no. 3, pp. 1881-1894, Mar. 2016.
[9] L. Liu, C. Yuen, Y. L. Guan, Y. Li, and Y. Su, "A low-complexity Gaussian message passing iterative detection for massive MU-MIMO systems," in Proc. IEEE ICICS, Dec. 2015.
[10] L. Liu, C. Yuen, Y. L. Guan, Y. Li, and Y. Su, "Convergence analysis and assurance for Gaussian message passing iterative detection for massive MU-MIMO systems," IEEE Trans. Wireless Commun., vol. 15, no. 9, pp. 6487-6501, Sept. 2016.
[11] L. Liu, C. Yuen, Y. L. Guan, Y. Li, and C. Huang, "Gaussian message passing iterative detection for MIMO-NOMA systems with massive users," in Proc. IEEE GLOBECOM, Dec. 2016.
[12] C. Fan, Y. Zhang, and X. Yuan, "Scalable uplink processing via sparse message passing in C-RAN," in Proc. IEEE GLOBECOM, Dec. 2015.
[13] C. Fan, X. Yuan, and Y. Zhang, "Randomized Gaussian message passing for scalable uplink signal processing in C-RANs," in Proc. IEEE ICC, May 2016.
[14] X. Xu, X. Rao, and V. K. N. Lau, "Active user detection and channel estimation in uplink CRAN systems," in Proc. IEEE ICC, Jun. 2015.
[15] Z. Utkovski, O. Simeone, T. Dimitrova, and P. Popovski, "Random access in C-RAN for user activity detection with limited-capacity fronthaul," IEEE Signal Process. Lett., vol. 24, no. 1, pp. 17-21, Jan. 2017.
[16] T. J. Richardson and R. L. Urbanke, "The capacity of low-density parity-check codes under message-passing decoding," IEEE Trans. Inf. Theory, vol. 47, no. 2, pp. 599-618, Feb. 2001.
[17] S. S. Chen, D. L. Donoho, and M. A. Saunders, "Atomic decomposition by basis pursuit," SIAM J. Sci. Comput., vol. 20, no. 1, pp. 33-61, 1998.
[18] D. Moltchanov, "Distance distributions in random networks,"