Comments on "Momentum fractional LMS for power signal parameter estimation"
Shujaat Khan a, Imran Naseem b,c,∗, Alishba Sadiq c, Jawwad Ahmad d, Muhammad Moinuddin e,f

a Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea.
b School of Electrical, Electronic and Computer Engineering, The University of Western Australia, 35 Stirling Highway, Crawley, Western Australia 6009, Australia.
c College of Engineering, Karachi Institute of Economics and Technology, Korangi Creek, Karachi 75190, Pakistan.
d Department of Electrical Engineering, Usman Institute of Technology (UIT), Karachi, Pakistan.
e Center of Excellence in Intelligent Engineering Systems (CEIES), King Abdulaziz University, Jeddah, Saudi Arabia.
f Electrical and Computer Engineering Department, King Abdulaziz University, Jeddah, Saudi Arabia.
Abstract
The purpose of this paper is to show that the recently proposed momentum fractional least mean squares (mFLMS) algorithm has serious flaws in its design and analysis. Our apprehensions are based on the evidence we found in the derivation and analysis of the paper titled: "Momentum fractional LMS for power signal parameter estimation". In addition to the theoretical basis, our claims are also verified through extensive simulation results. The experiments clearly show that the new method does not have any advantage over the classical least mean squares (LMS) method.
Keywords: Least mean squares algorithm, Fractional least mean squares algorithm, Momentum fractional least mean squares algorithm.
1. Introduction
The least mean squares (LMS) algorithm is one of the most widely used algorithms in adaptive signal processing [1]. It has a number of variants to deal with various signals and environmental conditions [2, 3, 4, 5, 6, 7, 8, 9, 10, 11]. To improve the convergence performance of the conventional LMS, different methods have been proposed based on the adaptive step-size notion [12, 13, 14, 15]. Amongst all the variants, an important modification is the one proposed by J. G. Proakis, called the momentum LMS (mLMS) [16]. Unlike the conventional LMS, where the instantaneous gradient is used for the weight update, in mLMS the momentum of the gradient change is used [17]. By incorporating the momentum of the gradient, the mLMS can achieve better convergence without compromising the steady-state error [17, 18]. Another method, proposed by Raja et al. [19], is the application of fractional calculus. Using the same approach, a series of papers have been published [20, 21, 22] claiming improved steady-state and convergence performance. All these variants [19, 22, 23], however, have been criticized and shown to offer no improvement over the conventional LMS [24, 25]. Recently, another method in the same direction has been proposed, named momentum FLMS (mFLMS) [26]. We argue that the method proposed in [26] does not improve the performance of the conventional LMS and has serious flaws in the analysis presented in the paper. Our argument is supported by analytical reasoning and extensive simulations.

The organization of this paper is as follows: flaws in the design, analysis and simulation setup of the mFLMS paper are discussed in Section 2. The estimation model, simulation setup and evaluation parameters are defined in Section 3, followed by results and discussion in Section 4. Finally, the paper is concluded in Section 5.

∗ Corresponding author
Email addresses: [email protected] (Shujaat Khan), [email protected] (Imran Naseem), [email protected] (Alishba Sadiq), [email protected] (Jawwad Ahmad), [email protected] (Muhammad Moinuddin)

Preprint submitted to Noname, September 6, 2018
2. Remarks on "Momentum fractional LMS for power signal parameter estimation"
This section focuses on the main flaws in the design of the mFLMS algorithm [26]. The structure of the mFLMS algorithm follows the architecture of the FLMS algorithm [19]; thus the problems of the FLMS mentioned in [24] are inherited by the mFLMS algorithm as well. Let us first rewrite the expressions presented in [26]; they will be referred to in the forthcoming mathematical analysis. The first equation is the weight update rule of the FLMS algorithm as presented in [26]:

$$\hat{\mathbf w}(n+1) = \hat{\mathbf w}(n) + \mu\,e(n)\,\mathbf u(n) + \frac{\mu_f}{\Gamma(2-f)}\,e(n)\,\mathbf u(n) \odot |\hat{\mathbf w}|^{1-f}(n), \qquad (12)$$

where $\mu$ and $\mu_f$ are real positive values defining the step-sizes, $f$ is the fractional power in the range $0 < f < 1$, and $\Gamma(\cdot)$ represents the Gamma function. Here, $\mathbf u$ and $\hat{\mathbf w}$ are the vectors defining the input and the estimated weights of the filter, respectively, and $e(n)$ is the error between the target and estimated outputs.

The key set of equations derived in [26] is:

$$\hat{\mathbf w}(n+1) = \hat{\mathbf w}(n) + \mathbf v(n+1), \qquad (13)$$
$$\mathbf v(n+1) = \alpha\,\mathbf v(n) + \mathbf g(n), \qquad (14)$$
$$\mathbf g(n) = \mu\,e(n)\,\mathbf u(n) + \frac{\mu_f}{\Gamma(2-f)}\,e(n)\,\mathbf u(n) \odot |\hat{\mathbf w}|^{1-f}(n), \qquad (15)$$

where $\mathbf v(n+1)$ is the velocity update term defined by the momentum, $\alpha \in (0,1)$ is the momentum term, and $\mathbf g(n)$ is the gradient term.

According to [26], Eq. (12) is the weight update equation of the fractional LMS algorithm [19], which uses the absolute value of the weight vector primarily to avoid complex values. With this change, however, the fractional term is no longer the actual gradient of the cost function. Indeed, to exploit the fractional gradient characteristics, the complex-domain information needs to be processed accordingly and the proper gradient information must be incorporated [5]. With this change to the absolute value of the weight vector, the FLMS algorithm has been shown to offer no improvement over the conventional method [24].
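For concreteness, the recursion (13)–(15) can be sketched in a few lines of Python. The step sizes, momentum, and fractional power below are illustrative assumptions for the sketch, not the values used in [26]:

```python
import numpy as np
from math import gamma

def mflms_update(w, v, u, d, mu=0.01, mu_f=0.01, f=0.5, alpha=0.5):
    """One iteration of the mFLMS recursion, Eqs. (13)-(15).

    w: current weight estimate, v: velocity (momentum) term,
    u: input vector, d: desired output.  The |w|^(1-f) factor follows
    [26]; as argued above, it is not a true fractional gradient.
    """
    e = d - u @ w                                    # estimation error e(n)
    g = (mu * e * u
         + (mu_f / gamma(2.0 - f)) * e * u * np.abs(w) ** (1.0 - f))  # Eq. (15)
    v = alpha * v + g                                # Eq. (14)
    w = w + v                                        # Eq. (13)
    return w, v
```

Setting `alpha = 0` and `mu_f = 0` recovers the conventional LMS update, which makes the sketch convenient for the side-by-side comparisons discussed later.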
Since the mFLMS approach [26] also utilizes the absolute value of $\hat{\mathbf w}$ in Eq. (15), avoiding the complex mathematics, the same argument is valid for this case as well. There are serious flaws in the analysis section of the mFLMS algorithm, which are listed below.

1. By employing the assumption $\mu_f = \mu\,\Gamma(2-f)$, the authors construct Eq. (16) from Eqs. (13)–(15) as:

$$\hat{\mathbf w}(n+1) = \hat{\mathbf w}(n) + \alpha\,[\hat{\mathbf w}(n) - \hat{\mathbf w}(n-1)] + \mu\,e(n)\,\mathbf u(n) \odot |\hat{\mathbf w}|^{1-f}(n). \qquad (16)$$

However, the correctly solved equation should be

$$\hat{\mathbf w}(n+1) = \hat{\mathbf w}(n) + \alpha\,[\hat{\mathbf w}(n) - \hat{\mathbf w}(n-1)] + \mu\,e(n)\,\big[\mathbf u(n) + \mathbf u(n) \odot |\hat{\mathbf w}(n)|^{1-f}\big].$$
2. Even if we consider the above-mentioned flaw a typographical error, the solutions from Eq. (16) to Eq. (33) are not consistent and cannot be proved under any condition.

3. The last term in Eq. (17) should have the $\odot$ operator (element-by-element operation), as used in Eqs. (15) and (16) of the paper.

4. The last term in Eq. (17) should not have one added inside the last bracket; rather, the equation should be

$$\Delta\hat{\mathbf w}(n+1) = \Delta\hat{\mathbf w}(n) + \alpha\,[\Delta\hat{\mathbf w}(n) - \Delta\hat{\mathbf w}(n-1)] + \mu\,e(n)\,\big[\mathbf u(n) + \mathbf u(n) \odot |\mathbf w_{\mathrm{opt}} + \Delta\hat{\mathbf w}(n)|^{1-f}\big].$$
5. Eq. (19) is incorrect because it applies the binomial formula to vectors. The right-hand side of the equation, however, results in a scalar term (note that all vectors are considered to be column vectors in this analysis).

6. The result in Eq. (23) is technically incorrect: in general, the expectation of a fractional power of a random variable cannot be replaced by the expectation of its unit power and a linear operation [27]. Thus, the expression in Eq. (23) is an approximation.

7. The convergence analysis of the mFLMS algorithm primarily depends on the evaluation of expectation terms of the form $E[\Delta w^{\gamma}(n)]$ (where $\gamma$ is a fractional number). The analysis provided in the paper, however, does not evaluate these terms accurately. Instead, the authors approximate this fractional moment using some function $G$ without providing its expression (see Eq. (23)). Thus, the whole convergence analysis is vague, and the stability bound derived in Eq. (33) is meaningless without knowledge of the function $G$.

In adaptive signal processing, performance comparison between algorithms can be made on the basis of different criteria. Three important measures of performance are: (1) convergence rate, (2) steady-state error and (3) computational complexity. In [26, Sec. 3.1] it is already mentioned that the mFLMS algorithm is computationally very expensive; we therefore focus only on convergence and steady-state measures in our experiments.

We argue that the simulation parameters used for the LMS algorithm in [26] are not appropriate. With the small learning rate adopted in [26], the LMS algorithm converges slowly; interestingly, the graphs are shown for just 4000 iterations (see [26, Fig. 2]). For a fair evaluation, both algorithms must be set up for either equal convergence (for steady-state performance comparison) or equal steady-state error (for convergence performance comparison).
Alternatively, if one algorithm can outperform the other in both aspects, this must be demonstrated explicitly, i.e., a higher convergence rate must be shown without any loss of steady-state performance.
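Point 6 above can be checked with a short Monte Carlo experiment: for a Gaussian variable, the fractional moment $E[|X|^{\gamma}]$ cannot be obtained from $E[X]$ by a linear operation. The distribution parameters below are illustrative:

```python
import numpy as np

# Monte Carlo check that E[|X|^g] differs from |E[X]|^g for a
# Gaussian X, i.e. a fractional moment cannot be replaced by the
# unit-power expectation plus a linear operation.
rng = np.random.default_rng(42)
g = 0.5                                    # fractional power, as in f = 0.5
x = rng.normal(2.0, 1.0, size=1_000_000)   # X ~ N(2, 1)

frac_moment = np.mean(np.abs(x) ** g)      # E[|X|^g], estimated
naive = np.abs(np.mean(x)) ** g            # |E[X]|^g -- the invalid substitution

print(f"E[|X|^g] = {frac_moment:.4f}, |E[X]|^g = {naive:.4f}")
```

By Jensen's inequality the true fractional moment falls strictly below the naive value here, so any analysis that silently swaps one for the other is only an approximation.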
3. Experimental Setup
To re-evaluate the performance of the mFLMS and the conventional LMS algorithms, we consider the same power signal parameter estimation problem as used in [26].
In this section an overview of the power signal estimation model is provided. To estimate the signal parameters, a sampled multi-harmonic sinusoidal signal $y$, with different amplitudes and phases, is considered, i.e.,

$$y(n) = \sum_{k=1}^{N} a_k \sin(n\omega_k + \phi_k) + \epsilon(n), \qquad (1)$$

where $\epsilon(n)$ is a Gaussian disturbance of zero mean and constant variance $\sigma^2$, and $a_k$, $\omega_k$ and $\phi_k$ are the amplitude, the angular frequency and the phase shift of the $k$-th sinusoid, respectively.

With the help of trigonometric identities, Eq. (1) can be transformed into:

$$y(n) = \sum_{k=1}^{N} a_k \big(\sin(n\omega_k)\cos\phi_k + \cos(n\omega_k)\sin\phi_k\big) + \epsilon(n). \qquad (2)$$

Following the assumption made in [26], the frequencies $\omega_k$ of the four sinusoids are taken to be known. Now, Eq. (2) can be written as:

$$y(n) = \sum_{k=1}^{N} \big(b_k \sin(n\omega_k) + c_k \cos(n\omega_k)\big) + \epsilon(n), \qquad (3)$$

where $b_k = a_k \cos\phi_k$ and $c_k = a_k \sin\phi_k$ are the unknown parameters; $a_k$ and $\phi_k$ can be recovered using the relations:

$$a_k = \sqrt{b_k^2 + c_k^2}, \qquad \phi_k = \tan^{-1}\frac{c_k}{b_k}. \qquad (4)$$

The desired vector of parameters $\boldsymbol\theta$ and the corresponding input vector are defined as:

$$\boldsymbol\theta = [b_1, c_1, b_2, c_2, \cdots, b_N, c_N]^T \in \mathbb{R}^{2N}, \qquad (5)$$
$$\boldsymbol\psi = [\sin(\omega_1 n), \cos(\omega_1 n), \sin(\omega_2 n), \cos(\omega_2 n), \cdots, \sin(\omega_N n), \cos(\omega_N n)]^T \in \mathbb{R}^{2N}. \qquad (6)$$

Finally, the power signal parameter estimation model is given as:

$$y(n) = \boldsymbol\psi^T(n)\,\boldsymbol\theta + \epsilon(n). \qquad (7)$$

While applying the LMS and mFLMS algorithms, the unknown parameter $\boldsymbol\theta$ is treated as the weight vector and updated using the respective weight update rules.

We consider a composite signal of sinusoids with four different frequencies [28], i.e.,

$$y(n) = 1.8\sin(\omega_1 n + 0.95) + 2.9\sin(\omega_2 n + 0.8) + 4\sin(\omega_3 n + 0.76) + 2.5\sin(\omega_4 n + 1.1) + \epsilon(n). \qquad (8)$$

We assume that all four frequencies used in Eq. (8) are known; this is the same assumption made in [26]. The desired parameter set is defined as:

$$\boldsymbol\theta = [a_1, a_2, a_3, a_4, \phi_1, \phi_2, \phi_3, \phi_4]^T = [1.8,\ 2.9,\ 4,\ 2.5,\ 0.95,\ 0.8,\ 0.76,\ 1.1]^T. \qquad (9)$$

The performance of both the LMS and the mFLMS algorithms is evaluated on two metrics: (1) the mean squared error (MSE) and (2) the normalized weight difference (NWD), defined as a fitness function $\delta$ in [26]. The NWD is given as:

$$\mathrm{NWD}(n) = \frac{\|\hat{\boldsymbol\theta}(n) - \boldsymbol\theta\|}{\|\boldsymbol\theta\|}, \qquad (10)$$

where $\boldsymbol\theta$ and $\hat{\boldsymbol\theta}(n)$ are the desired and estimated parameter vectors at the $n$-th iteration. Another performance measure, based on the MSE, is given as:

$$\mathrm{MSE} = \frac{1}{M}\sum_{i=1}^{M} (\theta_i - \hat\theta_i)^2, \qquad (11)$$

where $M$ is the length of the desired vector $\boldsymbol\theta$, and $\hat{\boldsymbol\theta}$ is the final estimated vector of parameters.
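The model of Eqs. (1)–(11) is straightforward to simulate. The sketch below generates a signal with the amplitudes and phases of Eq. (9) and estimates its parameters with plain LMS; the frequencies `wk`, the step size `eta`, and the noise level are illustrative assumptions, not the exact values of [26, 28]:

```python
import numpy as np

rng = np.random.default_rng(0)
a   = np.array([1.8, 2.9, 4.0, 2.5])     # amplitudes a_k of Eq. (9)
phi = np.array([0.95, 0.8, 0.76, 1.1])   # phases phi_k of Eq. (9)
wk  = np.array([0.1, 0.2, 0.3, 0.4])     # assumed known frequencies w_k
# Unknown parameters b_k = a_k cos(phi_k), c_k = a_k sin(phi_k), Eq. (3);
# grouped as [b; c] rather than interleaved -- the inner product is unchanged.
theta = np.concatenate([a * np.cos(phi), a * np.sin(phi)])

eta, sigma = 0.05, 0.3
theta_hat = np.zeros(8)
for n in range(5000):
    psi = np.concatenate([np.sin(wk * n), np.cos(wk * n)])  # input vector, Eq. (6)
    y = psi @ theta + rng.normal(0.0, sigma)                # observation, Eq. (7)
    e = y - psi @ theta_hat
    theta_hat += eta * e * psi                              # LMS weight update

nwd = np.linalg.norm(theta_hat - theta) / np.linalg.norm(theta)  # Eq. (10)
a_hat   = np.hypot(theta_hat[:4], theta_hat[4:])                 # Eq. (4)
phi_hat = np.arctan2(theta_hat[4:], theta_hat[:4])
print(f"NWD = {nwd:.4f}")
```

Replacing the LMS update line with a fractional or momentum variant reproduces the kind of head-to-head comparison reported in the next section.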
4. Results and Discussion
The LMS and mFLMS [26] algorithms are compared for three noise levels, i.e., σ = 0.3, 0.6 and 0.9. Figs. 1, 2 and 3 show the learning curves of mFLMS and LMS for the respective noise levels. We set up the LMS and the mFLMS algorithms for the same convergence performance and compared their steady-state performance. From Figs. 1, 2 and 3, it can be seen that under all conditions the LMS algorithm performed better than the mFLMS. The experiments were repeated for 1000 independent rounds, and mean results for every 100th iteration are reported. For each independent round, the weights were initialized with random values drawn from a Gaussian distribution of zero mean and unit variance. The performance of the mFLMS algorithm is evaluated for three different values of the momentum α, i.e., α = 0.2, 0.5 and 0.8, and for each value of the momentum the algorithm is compared for three different fractional powers, i.e., f = 0.25, 0.5 and 0.75. We observed that the effect of the momentum in the mFLMS algorithm is equivalent to that of the learning rate in the conventional LMS, i.e., an increase in convergence rate increases the steady-state error. The effect of the fractional power f is also arguable; it is shown to degrade the steady-state performance without even improving the convergence rate. Normalized weight difference results for every 100th iteration under all 27 scenarios (3 noise levels, 3 values of the momentum, and 3 values of the fractional power) are summarized in Tables 1, 2 and 3. The final estimated parameter values and the mean squared error (MSE) for the noise levels σ = 0.3, 0.6 and 0.9 are reported in Tables 4, 5 and 6, respectively.

Figure 1: Comparison of steady-state results for σ = 0.3 (panels (a)–(c): f = 0.25, 0.5, 0.75).
Figure 2: Comparison of steady-state results for σ = 0.6 (panels (a)–(c): f = 0.25, 0.5, 0.75).
Figure 3: Comparison of steady-state results for σ = 0.9 (panels (a)–(c): f = 0.25, 0.5, 0.75).

Table 1: Performance comparison based on fitness achieved at specific iterations for σ = 0.3

Method          α    100     200     300     400     500     600     700     800     900     1000
mFLMS(f=0.25)   0.2
mFLMS(f=0.50)   0.2
mFLMS(f=0.75)   0.2
LMS(η=0.027)         0.2397  0.0619  0.0260  0.0222  0.0218  0.0222  0.0224  0.0217  0.0225  0.0221
mFLMS(f=0.25)   0.5
mFLMS(f=0.50)   0.5
mFLMS(f=0.75)   0.5
LMS(η=0.042)         0.1041  0.0294  0.0281  0.0279  0.0281  0.0278  0.0281  0.0279  0.0281  0.0278
mFLMS(f=0.25)   0.8
mFLMS(f=0.50)   0.8
mFLMS(f=0.75)   0.8
LMS(η=0.1)           0.0462  0.0466  0.0462  0.0462  0.0461  0.0464  0.0459  0.0460  0.0466  0.0464

Table 2: Performance comparison based on fitness achieved at specific iterations for σ = 0.6

Method          α    100     200     300     400     500     600     700     800     900     1000
mFLMS(f=0.25)   0.2
mFLMS(f=0.50)   0.2
mFLMS(f=0.75)   0.2
LMS(η=0.027)         0.2428  0.0656  0.0338  0.0315  0.0320  0.0316  0.0313  0.0307  0.0309  0.0307
mFLMS(f=0.25)   0.5
mFLMS(f=0.50)   0.5
mFLMS(f=0.75)   0.5
LMS(η=0.042)         0.1077  0.0402  0.0394  0.0400  0.0392  0.0402  0.0395  0.0398  0.0392  0.0396
mFLMS(f=0.25)   0.8
mFLMS(f=0.50)   0.8
mFLMS(f=0.75)   0.8
LMS(η=0.1)           0.0646  0.0653  0.0647  0.0657  0.0664  0.0652  0.0649  0.0665  0.0656  0.0654

Table 3: Performance comparison based on fitness achieved at specific iterations for σ = 0.9

Method          α    100     200     300     400     500     600     700     800     900     1000
mFLMS(f=0.25)   0.2
mFLMS(f=0.50)   0.2
mFLMS(f=0.75)   0.2
LMS(η=0.027)         0.2423  0.0693  0.0393  0.0378  0.0378  0.0380  0.0380  0.0386  0.0382  0.0378
mFLMS(f=0.25)   0.5
mFLMS(f=0.50)   0.5
mFLMS(f=0.75)   0.5
LMS(η=0.042)         0.1110  0.0488  0.0483  0.0481  0.0483  0.0485  0.0480  0.0490  0.0489  0.0488
mFLMS(f=0.25)   0.8
mFLMS(f=0.50)   0.8
mFLMS(f=0.75)   0.8
LMS(η=0.1)           0.0800  0.0794  0.0786  0.0801  0.0798  0.0799  0.0806  0.0797  0.0809  0.0794

Table 4: Performance comparison based on estimation error for σ = 0.3

Method          α    θ1      θ2      θ3      θ4      θ5      θ6      θ7      θ8      MSE
mFLMS(f=0.25)   0.2
mFLMS(f=0.50)   0.2
mFLMS(f=0.75)   0.2
LMS(η=0.027)         1.8017  2.9048  3.9983  2.5009  0.9505  0.7992  0.7598  1.1008  6.41E-07
mFLMS(f=0.25)   0.5
mFLMS(f=0.50)   0.5
mFLMS(f=0.75)   0.5
LMS(η=0.042)         1.8022  2.9034  3.9975  2.5013  0.9515  0.8013  0.7600  1.0999  1.90E-06
mFLMS(f=0.25)   0.8
mFLMS(f=0.50)   0.8
mFLMS(f=0.75)   0.8
LMS(η=0.1)           1.8083  2.9036  4.0086  2.5127  0.9514  0.7982  0.7616  1.0994  1.51E-05
True values          1.8     2.9     4       2.5     0.95    0.8     0.76    1.1     0

Table 5: Performance comparison based on estimation error for σ = 0.6

Method          α    θ1      θ2      θ3      θ4      θ5      θ6      θ7      θ8      MSE
mFLMS(f=0.25)   0.2
mFLMS(f=0.50)   0.2
mFLMS(f=0.75)   0.2
LMS(η=0.027)         1.8925  2.9328  3.7577  2.4026  0.9750  0.8300  0.7450  1.0707  9.21E-07
mFLMS(f=0.25)   0.5
mFLMS(f=0.50)   0.5
mFLMS(f=0.75)   0.5
LMS(η=0.042)         1.8042  2.9021  3.9996  2.5007  0.9530  0.7994  0.7588  1.1003  4.20E-06
mFLMS(f=0.25)   0.8
mFLMS(f=0.50)   0.8
mFLMS(f=0.75)   0.8
LMS(η=0.1)           1.8085  2.9238  3.9993  2.5115  0.9460  0.8009  0.7606  1.1027  3.58E-05
True values          1.8     2.9     4       2.5     0.95    0.8     0.76    1.1     0

Table 6: Performance comparison based on estimation error for σ = 0.9

Method          α    θ1      θ2      θ3      θ4      θ5      θ6      θ7      θ8      MSE
mFLMS(f=0.25)   0.2
mFLMS(f=0.50)   0.2
mFLMS(f=0.75)   0.2
LMS(η=0.027)         1.8025  2.9009  3.9986  2.5028  0.9486  0.8002  0.7612  1.0983  2.93E-06
mFLMS(f=0.25)   0.5
mFLMS(f=0.50)   0.5
mFLMS(f=0.75)   0.5
LMS(η=0.042)         1.8076  2.8993  3.9972  2.5052  0.9498  0.7997  0.7598  1.1035  4.02E-06
mFLMS(f=0.25)   0.8
mFLMS(f=0.50)   0.8
mFLMS(f=0.75)   0.8
LMS(η=0.1)           1.8114  2.9164  4.0052  2.5240  0.9525  0.8016  0.7574  1.1014  3.32E-05
True values          1.8     2.9     4       2.5     0.95    0.8     0.76    1.1     0

5. Conclusion

Recently, the momentum fractional LMS (mFLMS) algorithm was proposed for the estimation of power signal parameters [26]. The algorithm is claimed to outperform the conventional LMS algorithm. The mathematical assumptions made for a simplified derivation of the mFLMS algorithm are, however, invalid. In this work, we have highlighted the discrepancies in the mFLMS algorithm proposed in [26]. Extensive experiments have also been performed to thoroughly investigate the merits of the mFLMS algorithm. After a careful analysis of the experimental results, we conclude that under no condition is the mFLMS algorithm better than the conventional LMS algorithm for power signal parameter estimation. In fact, the LMS algorithm consistently performed much better than the mFLMS, yielding better convergence and steady-state performance.
References

[1] P. S. R. Diniz, Adaptive Filtering: Algorithms and Practical Implementation, Springer US, 2013.
[2] N. V. Thakor, Y.-S. Zhu, Applications of adaptive filtering to ECG analysis: noise cancellation and arrhythmia detection, IEEE Transactions on Biomedical Engineering 38 (8) (1991) 785–794.
[3] J. M. Górriz, J. Ramírez, S. Cruces-Alvarez, C. G. Puntonet, E. W. Lang, D. Erdogmus, A novel LMS algorithm applied to adaptive noise cancellation, IEEE Signal Processing Letters 16 (1) (2009) 34–37.
[4] S. C. Douglas, A family of normalized LMS algorithms, IEEE Signal Processing Letters 1 (3) (1994) 49–51.
[5] B. Widrow, J. McCool, M. Ball, The complex LMS algorithm, Proceedings of the IEEE 63 (1975) 719–720.
[6] A. Khalili, A. Rastegarnia, S. Sanei, Quantized augmented complex least-mean square algorithm: Derivation and performance analysis, Signal Processing 121 (2016) 54–59.
[7] B. Chen, S. Zhao, P. Zhu, J. C. Príncipe, Quantized kernel least mean square algorithm, IEEE Transactions on Neural Networks and Learning Systems 23 (1) (2012) 22–32.
[8] J. Benesty, S. L. Gay, An improved PNLMS algorithm, in: Acoustics, Speech, and Signal Processing (ICASSP), 2002 IEEE International Conference on, Vol. 2, IEEE, 2002, pp. II-1881.
[9] W. Liu, P. P. Pokharel, J. C. Principe, The kernel least-mean-square algorithm, IEEE Transactions on Signal Processing 56 (2) (2008) 543–554.
[10] U. M. Al-Saggaf, M. Moinuddin, M. Arif, A. Zerguine, The q-least mean squares algorithm, Signal Processing 111 (2015) 50–60.
[11] J. Ahmad, S. Khan, M. Usman, I. Naseem, M. Moinuddin, FCLMS: Fractional complex LMS algorithm for complex system identification, in: 13th IEEE Colloquium on Signal Processing and its Applications (CSPA 2017), IEEE, 2017.
[12] R. H. Kwong, E. W. Johnston, A variable step size LMS algorithm, IEEE Transactions on Signal Processing 40 (7) (1992) 1633–1642. doi:10.1109/78.143435.
[13] V. J. Mathews, Z. Xie, A stochastic gradient adaptive filter with gradient adaptive step size, IEEE Transactions on Signal Processing 41 (1993) 2075–2087.
[14] T. Aboulnasr, K. Mayyas, A robust variable step-size LMS-type algorithm: analysis and simulations, IEEE Transactions on Signal Processing 45 (3) (1997) 631–639.
[15] A. I. Sulyman, A. Zerguine, Convergence and steady-state analysis of a variable step-size NLMS algorithm, Signal Processing 83 (6) (2003) 1255–1273.
[16] J. Proakis, Channel identification for high speed digital communications, IEEE Transactions on Automatic Control 19 (6) (1974) 916–922.
[17] J. J. Shynk, S. Roy, The LMS algorithm with momentum updating, in: Circuits and Systems, 1988, IEEE International Symposium on, IEEE, 1988, pp. 2651–2654.
[18] R. Sharma, W. A. Sethares, J. A. Bucklew, Analysis of momentum adaptive filtering algorithms, IEEE Transactions on Signal Processing 46 (5) (1998) 1430–1434.
[19] R. M. A. Zahoor, I. M. Qureshi, A modified least mean square algorithm using fractional derivative and its application to system identification, European Journal of Scientific Research 35 (1) (2009) 14–21.
[20] S. K. Dubey, N. K. Rout, FLMS algorithm for acoustic echo cancellation and its comparison with LMS, in: Recent Advances in Information Technology (RAIT), 2012 1st International Conference on, IEEE, 2012, pp. 852–856.
[21] B. Shoaib, I. M. Qureshi, Shafqatullah, Ihsanulhaq, Adaptive step-size modified fractional least mean square algorithm for chaotic time series prediction, Chinese Physics B 23 (5) (2014) 050503.
[22] M. A. Z. Raja, N. I. Chaudhary, Two-stage fractional least mean square identification algorithm for parameter estimation of CARMA systems, Signal Processing 107 (2015) 327–339. doi:10.1016/j.sigpro.2014.06.015.
[23] B. Shoaib, I. M. Qureshi, Ihsanulhaq, Shafqatullah, A modified fractional least mean square algorithm for chaotic and nonstationary time series prediction, Chinese Physics B 23 (3) (2014) 030502.
[24] N. J. Bershad, F. Wen, H. C. So, Comments on "Fractional LMS algorithm", Signal Processing 133 (2017) 219–226.
[25] M. S. Aslam, Comments on "Two-stage fractional least mean square identification algorithm for parameter estimation of CARMA systems", Signal Processing 117 (2015) 279–280. doi:10.1016/j.sigpro.2015.06.001.
[26] S. Zubair, N. I. Chaudhary, Z. A. Khan, W. Wang, Momentum fractional LMS for power signal parameter estimation, Signal Processing 142 (2018) 441–449.
[27] T. Y. Al-Naffouri, M. Moinuddin, N. Ajeeb, B. Hassibi, A. L. Moustakas, On the distribution of indefinite quadratic forms in Gaussian random variables, IEEE Transactions on Communications 64 (1) (2016) 153–165. doi:10.1109/TCOMM.2015.2496592.