Least Mean Square/Fourth Algorithm with Application to Sparse Channel Estimation
Guan Gui, Abolfazl Mehbodniya and Fumiyuki Adachi, Department of Communication Engineering, Graduate School of Engineering, Tohoku University, Sendai, Japan. {gui, mehbod}@mobile.ecei.tohoku.ac.jp, [email protected]
Abstract—Broadband signal transmission over frequency-selective fading channels often requires accurate channel state information at the receiver. One of the most attractive adaptive channel estimation methods is the least mean square (LMS) algorithm. However, LMS-based methods are often degraded by random scaling of the input training signal. To improve the estimation performance, in this paper we apply the standard least mean square/fourth (LMS/F) algorithm to adaptive channel estimation (ACE). Since the broadband channel is often described by a sparse channel model, such sparsity can be exploited as prior information. First, we propose an adaptive sparse channel estimation (ASCE) method using the zero-attracting LMS/F (ZA-LMS/F) algorithm. To exploit the sparsity more effectively, an improved channel estimation method is also proposed, using the reweighted zero-attracting LMS/F (RZA-LMS/F) algorithm. We explain, by virtue of a geometrical interpretation, why sparse LMS/F algorithms using an ℓ1-norm sparse constraint function can improve the estimation performance. In addition, for different channel sparsity levels, we propose a Monte Carlo method to select the regularization parameters of ZA-LMS/F and RZA-LMS/F so as to achieve approximately optimal estimation performance. Finally, simulation results show that the proposed ASCE methods achieve better estimation performance than the conventional one.
Keywords—least mean square/fourth (LMS/F), adaptive sparse channel estimation (ASCE), zero-attracting LMS/F (ZA-LMS/F), reweighted zero-attracting LMS/F (RZA-LMS/F).

I. INTRODUCTION
Broadband signal transmission is becoming one of the mainstream techniques in next-generation communication systems. Since frequency-selective channel fading is unavoidable, accurate channel state information (CSI) is necessary at the receiver for coherent detection [1]. One effective approach is adaptive channel estimation (ACE). A typical framework of ACE is shown in Fig. 1. It is well known that ACE using the least mean fourth (LMF) algorithm outperforms the least mean square (LMS) algorithm in achieving a better balance between convergence and steady-state performance. Unfortunately, the standard LMF algorithm is unstable, since its stability depends on three factors: the input signal power, the noise power and the weight initialization [2]. To fully benefit from the merits of LMS and LMF, it is logical to combine the two algorithms for ACE purposes. The combined LMS/F algorithm was first proposed by Lim and Harris [8] as a method to improve the performance of the LMS adaptive filter without sacrificing the simplicity and stability of LMS. Recently, many channel measurement experiments have verified that broadband channels often exhibit a sparse structure. A typical example of a sparse system is shown in Fig. 2, where the channel length is N while the number of dominant coefficients is only K (K ≪ N). In other words, a sparse channel consists of a very few nonzero channel coefficients, while most of the coefficients are zeros. Unfortunately, ACE using the LMS/F algorithm neglects this inherent sparse structure, which may degrade the estimation performance. In this paper, we propose sparse LMS/F algorithms with application to ASCE. Inspired by the least absolute shrinkage and selection operator (LASSO) algorithm [3], an ℓ1-norm sparse constraint function is utilized in ASCE to exploit the channel sparsity.
Similar to the sparse LMS algorithms, the two sparse LMS/F algorithms are termed zero-attracting LMS/F (ZA-LMS/F) and reweighted zero-attracting LMS/F (RZA-LMS/F), respectively.
Fig. 1. ASCE for broadband communication systems.

The main contribution of this paper is to propose sparse LMS/F algorithms with application to ASCE. Sparsity-penalized cost functions are constructed for implementing the sparse LMS/F algorithms. Two experiments are presented to confirm the effectiveness of the proposed methods. In the first experiment, the average mean square error (MSE) performance of the sparse LMS/F algorithms is evaluated for different numbers of nonzero coefficients. In the second experiment, the MSE performance of the proposed algorithms is evaluated for different reweighted factors. The remainder of this paper is organized as follows. The system model is described and the standard LMS/F algorithm is introduced in Section II. In Section III, ASCE using the ZA-LMS/F algorithm is introduced and an improved ASCE method using the RZA-LMS/F algorithm is highlighted. Computer simulations are presented in Section IV in order to evaluate and compare the performance of the proposed ASCE methods. Finally, we conclude the paper in Section V.

II. STANDARD LMS/F ALGORITHM
Consider a baseband frequency-selective fading wireless communication system, where the FIR sparse channel vector h = [h₀, h₁, …, h_{N−1}]ᵀ is of length N and is supported by only K nonzero channel taps. Assume that an input training signal x(n) is used to probe the unknown sparse channel. At the receiver side, the observed signal is given by

y(n) = hᵀx(n) + z(n),    (1)

where x(n) = [x(n), x(n−1), …, x(n−N+1)]ᵀ denotes the vector of the training signal and z(n) is additive white Gaussian noise (AWGN), assumed to be independent of x(n). The objective of ASCE is to adaptively estimate the unknown sparse channel h̃(n) using the training signal x(n) and the observed signal y(n). According to [8], we can apply the standard LMS/F algorithm to adaptive channel estimation, with the cost function

G(n) = (1/2)e²(n) − (λ/2)ln(e²(n) + λ),    (2)

where e(n) = y(n) − h̃ᵀ(n)x(n) is the n-th estimation error and λ is a positive threshold parameter which controls the computational complexity and stability of the LMS/F algorithm. With respect to Eq. (2), the corresponding update equation of the LMS/F algorithm is given by

h̃(n+1) = h̃(n) + μ e³(n)x(n)/(e²(n) + λ),    (3)

where μ is the step-size. When e²(n) ≪ λ, the LMS/F algorithm in Eq. (3) behaves like the LMF algorithm with a step-size of μ/λ; and when e²(n) ≫ λ, it reduces to the standard LMS algorithm with a step-size of μ. According to this analysis, it is necessary to choose a proper parameter λ to balance the stability and the estimation performance of the LMS/F algorithm. For the n-th received error e(n), the threshold parameter λ controls the variable step-size μe²(n)/(e²(n) + λ), as shown in Fig. 3. If we fix the step-size μ, a larger λ yields a smaller effective step-size near convergence, which makes LMS/F more stable and improves the steady-state estimation, but at the cost of slower convergence, and vice versa.

III. SPARSE LMS/F ALGORITHMS

A. ASCE using ZA-LMS/F algorithm
Recall that the adaptive channel estimation method using the standard LMS/F algorithm in Eq. (2) does not take advantage of the channel sparsity, because the original cost function in (2) contains no sparse constraint or penalty function. To exploit the sparsity, we introduce an ℓ1-norm sparse constraint into the cost function in (2) and obtain a new cost function

G_ZA(n) = (1/2)e²(n) − (λ/2)ln(e²(n) + λ) + λ_ZA‖h̃(n)‖₁,    (4)
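Minimizing a penalized cost of this form by stochastic gradient descent yields the ZA-LMS/F recursion given next in the text. The sketch below is a minimal illustration, not the paper's implementation; the parameter values (μ = 0.05, λ = 0.8, ρ = μλ_ZA = 1e-5) are assumptions chosen for a toy setup with a unit-variance Gaussian training signal. With rho = 0 the recursion reduces to the standard LMS/F of Eq. (3).

```python
import numpy as np

def za_lmsf_step(h_est, x_vec, y, mu=0.05, lam=0.8, rho=1e-5):
    """One ZA-LMS/F iteration: an LMS/F gradient step plus the
    l1-induced zero attractor. With rho = 0 this is plain LMS/F."""
    e = y - h_est @ x_vec                  # instantaneous estimation error
    step = mu * e ** 3 / (e ** 2 + lam)    # LMS/F error nonlinearity
    return h_est + step * x_vec - rho * np.sign(h_est)
```

Note how the factor e³(n)/(e²(n) + λ) realizes the trade-off described above: for small errors it behaves like e³(n)/λ (LMF-like), while for large errors it approaches e(n) (LMS-like).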
Fig. 2. A typical example of sparse multipath channel.
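A sparse multipath channel like the one sketched in Fig. 2 can be generated for simulation as follows. This is a minimal sketch under assumed values (N = 16 taps, K = 2 dominant Gaussian taps, unit channel power); the defaults are illustrative, not taken from the paper.

```python
import numpy as np

def random_sparse_channel(N=16, K=2, rng=None):
    """Draw an N-tap FIR channel with K Gaussian dominant taps at
    random positions; all remaining taps are exactly zero.  The
    channel is normalized so that ||h||^2 = 1 (unit channel power)."""
    rng = rng or np.random.default_rng()
    h = np.zeros(N)
    support = rng.choice(N, size=K, replace=False)  # random tap positions
    h[support] = rng.standard_normal(K)             # Gaussian dominant taps
    return h / np.linalg.norm(h)
```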
Fig. 3. The threshold parameter λ controls the variable step-size of the LMS/F algorithm.

In (4), λ_ZA denotes a regularization parameter which balances the error term e²(n) and the sparsity of h̃(n). To better understand the difference between (2) and (4), a geometrical interpretation is shown in Fig. 4. The cost function in (2) cannot find the sparse solution point in the solution plane. Unlike (2), the cost function in (4) can find the sparse solution thanks to its sparse constraint. Hence, the update equation of the ZA-LMS/F algorithm is given by

h̃(n+1) = h̃(n) + μ e³(n)x(n)/(e²(n) + λ) − ρ_ZA sgn(h̃(n)),    (5)

where ρ_ZA = μλ_ZA and sgn(·) denotes the sign function, which operates component-wise as

sgn(h) = h/|h| if h ≠ 0, and sgn(h) = 0 if h = 0.    (6)

It is well known that ZA-LMS/F uses the ℓ1-norm constraint to approximate the optimal sparse channel estimation [9].

Fig. 4. Sparse channel estimation with an ℓ1-norm sparse constraint.

B. Improved ASCE method using RZA-LMS/F algorithm
The ZA-LMS/F algorithm cannot distinguish between zero taps and non-zero taps, since all taps are forced toward zero uniformly in (5); this uniform attraction degrades the estimation performance. Motivated by the reweighted ℓ1-minimization sparse recovery algorithm [10] in compressive sensing (CS) [11], [12], we propose an improved ASCE method using the RZA-LMS/F algorithm. The cost function of this method is constructed as

G_RZA(n) = (1/2)e²(n) − (λ/2)ln(e²(n) + λ) + λ_RZA Σᵢ ln(1 + ε|h̃ᵢ(n)|),    (7)

where λ_RZA is a regularization parameter which trades off the estimation error and the channel sparsity, and ε is a reweighted factor. The corresponding update equation is

h̃(n+1) = h̃(n) + μ e³(n)x(n)/(e²(n) + λ) − ρ_RZA sgn(h̃(n))/(1 + ε|h̃(n)|),    (8)

where ρ_RZA = μλ_RZA ε depends on the step-size μ, the regularization parameter λ_RZA and the reweighted factor ε. In the second term of (8), channel coefficients with magnitude smaller than 1/ε are replaced by zeros with high probability.

C. Regularization parameter selection for sparse LMS/F algorithms
It is well known that the regularization parameter is very important for LASSO-based sparse channel estimation [13]. In [14], a parameter selection method was proposed for LASSO-based partial sparse channel estimation. To the best of our knowledge, however, no regularization parameter selection method has been reported for ASCE. Here, we propose an approximately optimal selection method based on Monte Carlo simulation, which adopts 1000 independent runs to obtain the average performance. The parameters for the computer simulation are given in Tab. I. The estimation performance is evaluated by the average mean square error (MSE), which is defined as

MSE(n) = E[‖h − h̃(n)‖²],    (9)

where E[·] denotes the expectation operator, and h and h̃(n) are the actual channel vector and its n-th iterative adaptive channel estimator, respectively.

TAB. I. SIMULATION PARAMETERS.
  parameters                          values
  channel length                      N
  no. of nonzero coefficients         K = 2 and 4
  step-size                           μ = 0.04
  threshold                           λ = 0.8
  re-weighted factor for RZA-LMS/F    ε = 20

Utilizing different regularization parameters, the performance curves of ZA-LMS/F and RZA-LMS/F are depicted in Fig. 5 and Fig. 6, respectively. In Fig. 5, it is easy to find that the MSE performance of ZA-LMS/F is near optimal when the regularization parameter is selected suitably for K = 2 and K = 4, respectively. Likewise, in Fig. 6, choosing approximately optimal regularization parameters for RZA-LMS/F achieves near optimal estimation performance for K = 2 and K = 4, respectively. Hence, these parameters will be utilized for the performance comparison with the sparse LMS algorithms.
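The Monte Carlo selection procedure above can be sketched as follows: for each candidate regularization parameter, run the RZA-LMS/F recursion of Eq. (8) on independent noisy realizations, average the final MSE, and keep the candidate with the smallest average. All numerical values here (μ = 0.05, λ = 0.8, ε = 20, run counts, candidate grid) are illustrative assumptions for a toy setup, not the paper's settings.

```python
import numpy as np

def rza_lmsf_run(h, lam_rza, mu=0.05, lam=0.8, eps=20.0,
                 n_iter=1500, rng=None):
    """Run RZA-LMS/F on one noisy realization; return final MSE.
    The reweighted attractor sign(h)/(1 + eps*|h|) shrinks small
    taps much harder than large ones (cf. Eq. (8))."""
    rng = rng or np.random.default_rng()
    h_est = np.zeros(h.size)
    x_buf = np.zeros(h.size)
    rho = mu * lam_rza * eps               # rho_RZA = mu * lambda_RZA * eps
    for _ in range(n_iter):
        x_buf = np.roll(x_buf, 1); x_buf[0] = rng.standard_normal()
        y = h @ x_buf + 0.1 * rng.standard_normal()
        e = y - h_est @ x_buf
        h_est = (h_est + mu * e ** 3 / (e ** 2 + lam) * x_buf
                 - rho * np.sign(h_est) / (1.0 + eps * np.abs(h_est)))
    return float(np.sum((h - h_est) ** 2))

def select_lambda(h, candidates, n_runs=10, seed=1):
    """Monte Carlo selection: average the final MSE over independent
    runs and keep the regularization parameter with the smallest one."""
    rng = np.random.default_rng(seed)
    avg = {c: float(np.mean([rza_lmsf_run(h, c, rng=rng)
                             for _ in range(n_runs)]))
           for c in candidates}
    return min(avg, key=avg.get), avg
```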
Fig. 5. ZA-LMS/F based sparse channel estimation performance depends on the regularization parameter λ_ZA.

Fig. 6. RZA-LMS/F based sparse channel estimation performance depends on the regularization parameter λ_RZA.

IV. COMPUTER SIMULATIONS
In this section, the proposed ASCE methods using the (R)ZA-LMS/F algorithms are evaluated. To obtain the average performance, 1000 independent Monte Carlo runs are adopted. The length of the channel vector h is set to N and its number of dominant taps is set to K = 2 and K = 4, respectively. Each dominant channel tap follows a random Gaussian distribution, and the tap positions are randomly allocated within the length of h, which is subject to E‖h‖₂² = 1. The received signal-to-noise ratio (SNR) is defined as P₀/σₙ², where P₀ is the unit transmission power. Here, we set the SNR to 10 dB in the computer simulation. All of the step-sizes and regularization parameters are listed in Tab. II.

TAB. II. SIMULATION PARAMETERS.
  parameters                                values
  channel length                            N
  no. of nonzero coefficients               K = 2 and 4
  distribution of nonzero coefficients      random Gaussian
  threshold parameter for LMS/F-type        λ = 0.8
  SNR                                       10 dB
  step-size                                 μ = 0.04
  regularization parameter for ZA-LMS/F     λ_ZA (K = 2 and 4)
  regularization parameter for RZA-LMS/F    λ_RZA (K = 2 and 4)
  re-weighted factor for RZA-LMS(/F)        ε = 20
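The comparison carried out in this section can be reproduced in outline with a single routine that switches between the three attractor choices of Eqs. (3), (5) and (8). This is a minimal sketch on a toy channel; the step-size, threshold, attractor strengths and noise level are illustrative assumptions rather than the paper's exact settings, and no claim about the relative ordering of the curves is hard-coded.

```python
import numpy as np

def estimate_channel(h, algo="lmsf", mu=0.05, lam=0.8,
                     rho=1e-5, eps=20.0, n_iter=5000, seed=0):
    """Adaptively estimate the sparse channel h with the LMS/F family.
    algo: 'lmsf' (no attractor, Eq. (3)), 'za' (Eq. (5)),
          'rza' (Eq. (8)).  Returns the final squared deviation."""
    rng = np.random.default_rng(seed)
    h_est = np.zeros(h.size)
    x_buf = np.zeros(h.size)
    for _ in range(n_iter):
        x_buf = np.roll(x_buf, 1); x_buf[0] = rng.standard_normal()
        y = h @ x_buf + 0.1 * rng.standard_normal()   # ~10 dB SNR toy noise
        e = y - h_est @ x_buf
        h_est = h_est + mu * e ** 3 / (e ** 2 + lam) * x_buf
        if algo == "za":
            h_est -= rho * np.sign(h_est)             # uniform zero attractor
        elif algo == "rza":
            # reweighted attractor: weak on large taps, strong near zero
            h_est -= 2 * rho * np.sign(h_est) / (1.0 + eps * np.abs(h_est))
    return float(np.sum((h - h_est) ** 2))
```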
Fig. 7. Performance comparison at K = 2.

Fig. 8. Performance comparison at K = 4.

In the first experiment, the average MSE performance of the proposed methods is evaluated for K = 2 and K = 4. To confirm the effectiveness of the proposed methods, we compare them with the sparse LMS algorithms, i.e., ZA-LMS and RZA-LMS [15]. For a fair comparison of our proposed methods with the sparse LMS methods, we utilize the same step-size μ. In addition, to achieve approximately optimal sparse estimation performance, the regularization parameters for the two sparse LMS algorithms are adopted from [16] for K = 2 and K = 4. The average MSE performance comparison curves are depicted in Fig. 7 and Fig. 8, respectively. Evidently, the LMS/F-type methods achieve better estimation performance than the LMS-type ones in [15]. According to the two figures, the sparse LMS/F algorithms, i.e., ZA-LMS/F and RZA-LMS/F, achieve better estimation performance than LMS/F, since they utilize an ℓ1-norm sparse constraint function. In the second experiment, as shown in Fig. 9, the estimation performance curves of RZA-LMS/F utilizing different reweighted factors are depicted for K = 2 and K = 4. With the parameter values of Tab. II, RZA-LMS/F can achieve approximately optimal estimation performance over a range of reweighted factors. Fig. 9 shows that the performance of the RZA-LMS/F algorithm depends on the reweighted factor. Hence, the proper selection of the reweighted factor is also important when applying the RZA-LMS/F algorithm to adaptive sparse channel estimation.

Fig. 9. RZA-LMS/F based sparse channel estimation performance depends on the re-weighted factor ε.

V. CONCLUSION
In this paper, the LMS/F algorithm was applied to ASCE. Based on CS theory, we first proposed a novel ASCE method using the ZA-LMS/F algorithm. Inspired by the reweighted ℓ1-minimization algorithm in CS, an improved ASCE method using the RZA-LMS/F algorithm was then proposed. By Monte Carlo simulation, we proposed a simple method for choosing approximately optimal regularization parameters for the sparse LMS/F algorithms, i.e., ZA-LMS/F and RZA-LMS/F. Simulation results showed that the proposed ASCE methods using the ZA-LMS/F and RZA-LMS/F algorithms achieve better performance than the sparse LMS methods.

ACKNOWLEDGMENT
This work was supported in part by the Japan Society for the Promotion of Science (JSPS) postdoctoral fellowship and the National Natural Science Foundation of China under Grant 61261048.

REFERENCES

[1] F. Adachi, H. Tomeba, and K. Takeda, "Introduction of frequency-domain signal processing to broadband single-carrier transmissions in a wireless channel," IEICE Transactions on Communications, vol. E92-B, no. 9, pp. 2789–2808, 2009.
[2] E. Walach and B. Widrow, "The least mean fourth (LMF) adaptive algorithm and its family," IEEE Transactions on Information Theory, vol. 30, no. 2, pp. 275–283, 1984.
[3] R. Tibshirani, "Regression shrinkage and selection via the lasso," Journal of the Royal Statistical Society, Series B, vol. 58, no. 1, pp. 267–288, 1996.
[4] G. Gui and F. Adachi, "Sparse LMF algorithm for adaptive channel estimation in low SNR region," International Journal of Communication Systems, to appear, 2013.
[5] E. Eweda, "Global stabilization of the least mean fourth algorithm," IEEE Transactions on Signal Processing, vol. 60, no. 3, pp. 1473–1477, Mar. 2012.
[6] G. Gui and F. Adachi, "Sparse least mean fourth filter with zero-attracting," submitted to IEICE Electronics Express, pp. 1–6, 2013.
[7] E. J. Candès, M. B. Wakin, and S. P. Boyd, "Enhancing sparsity by reweighted l1 minimization," Journal of Fourier Analysis and Applications, vol. 14, no. 5–6, pp. 877–905, Oct. 2008.
[8] S. Lim and J. Harris, "Combined LMS/F algorithm," Electronics Letters, vol. 33, no. 6, pp. 467–468, 1997.
[9] D. L. Donoho and Y. Tsaig, "Fast solution of l1-norm minimization problems when the solution may be sparse," IEEE Transactions on Information Theory, vol. 54, no. 11, pp. 4789–4812, 2008.
[10] E. J. Candès, M. B. Wakin, and S. P. Boyd, "Enhancing sparsity by reweighted l1 minimization," Journal of Fourier Analysis and Applications, vol. 14, no. 5–6, pp. 877–905, 2008.
[11] D. L. Donoho, "Compressed sensing," IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289–1306, 2006.
[12] E. J. Candès, J. Romberg, and T. Tao, "Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information," IEEE Transactions on Information Theory, vol. 52, no. 2, pp. 489–509, 2006.
[13] R. Tibshirani, "Regression shrinkage and selection via the lasso," Journal of the Royal Statistical Society, Series B, vol. 58, no. 1, pp. 267–288, 1996.
[14] G. Gui, Q. Wan, A. M. Huang, and C. G. Jiang, "Partial sparse multipath channel estimation using l1-regularized LS algorithm," in IEEE TENCON 2008, 2008, pp. 2–5.
[15] Y. Chen, Y. Gu, and A. O. Hero III, "Sparse LMS for system identification," in IEEE International Conference on Acoustics, Speech and Signal Processing, 2009, pp. 3125–3128.
[16] G. Gui, A. Mehbodniya, and F. Adachi, "Regularization selection methods for LMS-type sparse multipath channel estimation," submitted to the 19th Asia-Pacific Conference on Communications (APCC 2013), 2013.