Modified Recursive Cholesky (Rchol) Algorithm: An Explicit Estimation and Pseudo-inverse of Correlation Matrices
Vanita Pawar [email protected]
Krishna Naik Karamtot [email protected]
Abstract—The Cholesky decomposition plays an important role in finding the inverse of correlation matrices. It is fast and numerically stable for linear system solving, inversion, and factorization compared to singular value decomposition (SVD), QR factorization, and LU decomposition. As different methods exist to find the Cholesky decomposition of a given matrix, this paper presents a comparative study of the proposed RChol algorithm and the conventional methods. The RChol algorithm is an explicit way to estimate the modified Cholesky factors of a dynamic correlation matrix.
Cholesky decomposition is fast and numerically stable for linear system solving, inversion, and factorization compared to singular value decomposition (SVD), QR factorization, and LU decomposition [1]. Wireless communication systems depend heavily on inversion of the correlation matrix, and such systems involve the inversion of huge matrices. An outdoor wireless link has a time-varying channel which changes dynamically for a mobile user. For a narrowband channel, the channel is considered constant over a symbol duration, whereas a broadband channel changes within a symbol period. Such a time-varying channel gives the channel matrix and the correlation matrix a special structure. To exploit this special structure, a novel modified recursive Cholesky (RChol) algorithm was introduced in [2]. The proposed RChol algorithm is a computationally efficient algorithm to compute the modified Cholesky factors of a known as well as an unknown covariance matrix.

In this paper, we present a comparative study of the conventional Cholesky algorithms and the RChol algorithm to demonstrate the importance of the proposed algorithm in highly dynamic wireless communication.

I. SYSTEM MODEL
In a wireless communication system, a number of transmit and/or receive antennas are used to improve the diversity of the system. The channel h between transmitter and receiver takes a different form depending on the number of antennas used at the transmitter and the receiver side: for single-input single-output (SISO), h = \{h_0, h_1, \ldots, h_{L-1}\}; for single-input multiple-output (SIMO), \mathbf{h} = \{\mathbf{h}_0, \mathbf{h}_1, \ldots, \mathbf{h}_{L-1}\}; and for multiple-input multiple-output (MIMO), \mathbf{H} = \{\mathbf{H}_0, \mathbf{H}_1, \ldots, \mathbf{H}_{L-1}\}.

Let y(n) be the received signal with K transmit antennas, L multipath taps, and channel noise v, represented as

y(n) := \sum_{k=1}^{K} \sum_{l=0}^{L-1} h_k(n; l)\, s_k(n - l) + v(n), \quad n = 0, 1, \ldots, T-1 \qquad (1)

Let y_N(n) be the received vector obtained by stacking N successive received vectors, where y_N(n) = [y(n), y(n-1), \ldots, y(n-N+1)]^T and the transmitted symbol vector is s_N(n) = [s(n), s(n-1), \ldots, s(n-N+1)]^T. Then y_N(n) can be represented in matrix form as y_N(n) = H_N s_N(n) + v_N(n), and the correlation matrix for y_N can be written as R_N(n) = E[y_N(n) y_N^H(n)]. Let r_{ij}^n = E[y(n-i)\, y^H(n-j)]; then the correlation matrices R_N(n) and R_N(n-1) at time instants n and n-1 can be represented as equation (2) and equation (3), respectively:

R_N(n) = \begin{bmatrix} r_{00}^n & r_{01}^n & \cdots & r_{0,N-1}^n \\ r_{10}^n & r_{11}^n & \cdots & r_{1,N-1}^n \\ \vdots & \vdots & \ddots & \vdots \\ r_{N-1,0}^n & r_{N-1,1}^n & \cdots & r_{N-1,N-1}^n \end{bmatrix} \qquad (2)

R_N(n-1) = \begin{bmatrix} r_{11}^n & r_{12}^n & \cdots & r_{1N}^n \\ \vdots & \vdots & \ddots & \vdots \\ r_{N1}^n & r_{N2}^n & \cdots & r_{NN}^n \end{bmatrix} \qquad (3)

II. CHOLESKY DECOMPOSITION
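The index-shift relation between equations (2) and (3) can be checked numerically: since r_{ij}^n = E[y(n-i) y^H(n-j)], shifting both indices by one steps the time instant back by one, so R_N(n-1) appears as the trailing principal submatrix of R_{N+1}(n). The following NumPy sketch illustrates this; the array sizes and the sample-average estimate of the expectation are illustrative assumptions, not the paper's setup:

```python
import numpy as np

# Hypothetical sizes: stacking depth N, time horizon T, Monte Carlo runs M.
rng = np.random.default_rng(0)
N, T, M = 4, 12, 2000
y = rng.standard_normal((M, T))  # M realizations of a scalar signal y(n)

def r(n, i, j):
    """Sample estimate of r^n_{ij} = E[y(n-i) y^H(n-j)]."""
    return np.mean(y[:, n - i] * np.conj(y[:, n - j]))

def R(N, n):
    """Correlation matrix R_N(n) built entry-wise as in equation (2)."""
    return np.array([[r(n, i, j) for j in range(N)] for i in range(N)])

n = T - 1
# Equation (3): R_N(n-1) has entries r^n_{ij} with both indices shifted by
# one, i.e. it is the trailing principal submatrix of R_{N+1}(n).
print(np.allclose(R(N, n - 1), R(N + 1, n)[1:, 1:]))  # True
```

This shift structure is exactly what the RChol algorithm of Section II-D exploits to avoid recomputing the full correlation matrix at every time instant.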
The correlation matrix is a complex matrix, and the pseudo-inverse of R can be computed from its Cholesky factors: if the lower triangular matrix L holds the Cholesky factors of the correlation matrix R, represented as R = LL^H, then the pseudo-inverse of R can be computed as \hat{R}^\dagger := L^{-H} L^{-1}. The sections below detail the conventional Cholesky algorithms and the RChol algorithm.

A. Cholesky Decomposition (Gaxpy Version)
The Cholesky decomposition [3] factorizes a complex (or real-valued) positive-definite Hermitian matrix into a product of a lower triangular matrix and its Hermitian transpose: R = LL^H, where L is a lower triangular matrix and L^H is the Hermitian transpose of L. The matrix R must be positive definite, and this method needs a square root operation.

1) Algorithm steps:
1) Compute R at each time instant n.
2) Find the square root of each diagonal element of R.
3) Modify each column of R.
4) Equate the lower triangular part of R to L.
5) Repeat steps (1) to (4) for each time instant.

Algorithm 1
Cholesky Decomposition R = LL^H
Initialization: [R]_{1:N,1} = [R]_{1:N,1} / \sqrt{[R]_{1,1}}
Order updates on R's columns:
for k = 2 to N
    [R]_{k:N,k} = [R]_{k:N,k} - [R]_{k:N,1:k-1} [R]^H_{k,1:k-1}
    [R]_{k:N,k} = [R]_{k:N,k} / \sqrt{[R]_{k,k}}
end
L = tril(R)

B. Modified Cholesky Algorithm R = LDL^H

To avoid the square root operation, a modified Cholesky algorithm [3] is used, which introduces a diagonal matrix D between the Cholesky factors. The modified Cholesky algorithm does not require R to be a positive definite matrix, but its determinant must be nonzero. R may be rank deficient to a certain degree, i.e., D may contain negative main-diagonal entries if R is not positive semi-definite.
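The column-by-column updates of Algorithm 1 can be sketched in NumPy as follows. This is a minimal dense-matrix sketch; the function name `cholesky_gaxpy` is illustrative and not from the paper:

```python
import numpy as np

def cholesky_gaxpy(R):
    """Gaxpy-style Cholesky R = L L^H, computed column by column on a
    working copy of R (R must be Hermitian positive definite)."""
    R = np.array(R, dtype=complex)
    N = R.shape[0]
    R[:, 0] = R[:, 0] / np.sqrt(R[0, 0].real)
    for k in range(1, N):
        # subtract the contribution of the already-computed columns
        R[k:, k] -= R[k:, :k] @ R[k, :k].conj()
        # scale by the square root of the updated pivot
        R[k:, k] /= np.sqrt(R[k, k].real)
    return np.tril(R)

# usage: a random Hermitian positive definite matrix
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
R = A @ A.conj().T + 4 * np.eye(4)
L = cholesky_gaxpy(R)
print(np.allclose(L @ L.conj().T, R))  # True
```

Note the square root taken on every pivot, which the LDL^H variant below is designed to avoid.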
1) Algorithm steps:
1) Compute R at each time instant n.
2) Modify each column of R.
3) Equate the strictly lower triangular part of R to L, with ones on the main diagonal.
4) Equate the main diagonal of R with the main diagonal of D.
5) Repeat steps (1) to (4) for each time instant.

Algorithm 2
Modified Cholesky Decomposition R = LDL^H
Initialization: [R]_{2:N,1} = [R]_{2:N,1} / [R]_{1,1}
Order updates on R's columns:
for k = 2 to N
    for i = 1 to k-1
        [v]_i = [R]^*_{1,k} if i = 1, \quad [v]_i = [R]_{i,i} [R]^*_{k,i} if i \neq 1
    end
    [v]_k = [R]_{k,k} - [R]_{k,1:k-1} [v]_{1:k-1}
    [R]_{k,k} = [v]_k
    [R]_{k+1:N,k} = ([R]_{k+1:N,k} - [R]_{k+1:N,1:k-1} [v]_{1:k-1}) / [v]_k
end
D = diag(diag(R))
L = tril(R)

C. Recursive Cholesky Algorithm (The Schur Algorithm) R_{Schur} := HH^H

The Schur algorithm recursively computes the columns of the lower triangular matrix H from the matrix R. It is shown in [4] that the Levinson recursion may be used to derive the lattice recursion for computing QR factors of data matrices, and that the lattice recursion can be used to derive the Schur recursion for computing the Cholesky factors of a Toeplitz correlation matrix. The detailed algorithm is given in Algorithm 3. Like the previously mentioned algorithms, the Schur algorithm computes all the inner products needed to form the matrix R for initialization.
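The square-root-free LDL^H factorization of Algorithm 2 can be sketched in NumPy as follows. This is a minimal sketch with an illustrative function name; it returns a unit lower triangular L and the diagonal of D, and assumes a Hermitian R with nonzero leading minors (positive definiteness is not required):

```python
import numpy as np

def ldl_decomposition(R):
    """Square-root-free factorization R = L D L^H.
    L is unit lower triangular; d holds the real diagonal of D."""
    R = np.array(R, dtype=complex)
    N = R.shape[0]
    L = np.eye(N, dtype=complex)
    d = np.zeros(N)
    for k in range(N):
        # v_i = d_i * conj(L_{k,i}) gathers the already-computed factors
        v = L[k, :k] * d[:k]
        d[k] = (R[k, k] - L[k, :k] @ v.conj()).real
        L[k+1:, k] = (R[k+1:, k] - L[k+1:, :k] @ v.conj()) / d[k]
    return L, d

# usage: a random Hermitian positive definite matrix
rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
R = A @ A.conj().T + 4 * np.eye(4)
L, d = ldl_decomposition(R)
print(np.allclose(L @ np.diag(d) @ L.conj().T, R))  # True
```

No square roots appear anywhere in the loop; the pivots d_k may be negative when R is indefinite, as noted in the text.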
1) Algorithm steps:
1) Compute R at each time instant n.
2) Initialize the first column of the Cholesky factor H from the first column of R.
3) Compute the remaining columns recursively from the columns of R.
4) Repeat steps (1) to (3) for each time instant.

Algorithm 3
Schur Algorithm R = HH^H
Initialization: for k = 1,
    \sigma_1 H_1(n) = [r_0^n, r_1^n, \ldots, r_{N-1}^n]^T
    \sigma_1 \tilde{H}_1(n) = [0, r_1^n, \ldots, r_{N-1}^n]^T
Order updates on the H's:
for k = 2 to N
    \sigma_k H_k = \tilde{k}^{ref}_k [\, Z_M(\sigma_{k-1} H_{k-1}) + k^{ref}_k (\sigma_{k-1} \tilde{H}_{k-1}) \,]
    \sigma_k \tilde{H}_k = \tilde{k}^{ref}_k [\, (\sigma_{k-1} \tilde{H}_{k-1}) + k^{ref}_k Z_M(\sigma_{k-1} H_{k-1}) \,]
end
Scaling factors:
    k^{ref}_k = - (\sigma_{k-1} \tilde{H}_{k-1})_k / (Z_M \sigma_{k-1} H_{k-1})_k
    \tilde{k}^{ref}_k = 1 / \sqrt{1 - |k^{ref}_k|^2}
Note: the notation follows [4]; H_k denotes a vector and Z_M is the down-shift matrix.
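The generator-and-shift mechanism behind the Schur recursion can be sketched for the real Toeplitz case. This is a sketch of the idea, not the exact recursion of [4]: two generator rows are kept, one is down-shifted, and a hyperbolic rotation with reflection coefficient rho zeroes the second generator, leaving the next column of the Cholesky factor in the first:

```python
import numpy as np

def schur_toeplitz_cholesky(t):
    """Cholesky factor of a real positive definite Toeplitz matrix T
    (first column t) via Schur-type generator updates."""
    t = np.asarray(t, dtype=float)
    n = len(t)
    g = t / np.sqrt(t[0])      # first generator row (yields the columns)
    h = g.copy()
    h[0] = 0.0                 # second generator row
    L = np.zeros((n, n))
    L[:, 0] = g
    for k in range(1, n):
        g[k:] = g[k-1:n-1].copy()       # down-shift the first generator
        rho = -h[k] / g[k]              # reflection coefficient
        c = 1.0 / np.sqrt(1.0 - rho**2)
        # hyperbolic rotation zeroing h[k]
        g[k:], h[k:] = c * (g[k:] + rho * h[k:]), c * (h[k:] + rho * g[k:])
        L[k:, k] = g[k:]
    return L

# usage: a 4x4 positive definite Toeplitz matrix
t = np.array([4.0, 2.0, 1.0, 0.5])
T = np.array([[t[abs(i - j)] for j in range(4)] for i in range(4)])
L = schur_toeplitz_cholesky(t)
print(np.allclose(L @ L.T, T))  # True
```

Each step touches only O(N) entries, which is what makes the Schur recursion cheaper per column than the dense algorithms above once R is available.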
D. The RChol Algorithm \hat{L}\hat{D}\hat{L}^H := R

It is clear from equation (2) and equation (3) above that R_N(n) can be represented from a submatrix of R_N(n-1). To utilize this special structure of the correlation matrices, we propose a modified recursive Cholesky algorithm to compute the Cholesky factors recursively. This algorithm is a modification of the Schur algorithm mentioned above. The more general approach consists of using the Schur algorithm to induce a recursion for the columns of the dynamic L. This algorithm does not need N inner products to compute the correlation matrix R. The Cholesky factors are computed explicitly, such that if \bar{L} = L D^{1/2}, then the pseudo-inverse can be computed as \hat{R}^\dagger = \bar{L}^{-H} \bar{L}^{-1}.
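Once the factors are available, the pseudo-inverse follows from triangular solves rather than a direct inversion of R. A minimal sketch, using NumPy's Cholesky as a stand-in producer of \bar{L} = L D^{1/2} (for an invertible R the pseudo-inverse coincides with the inverse):

```python
import numpy as np

# a random Hermitian positive definite correlation matrix as test input
rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
R = A @ A.conj().T + 4 * np.eye(4)

Lbar = np.linalg.cholesky(R)              # stand-in for Lbar = L D^{1/2}
Y = np.linalg.solve(Lbar, np.eye(4))      # Y = Lbar^{-1} (triangular solve)
R_pinv = Y.conj().T @ Y                   # R^† = Lbar^{-H} Lbar^{-1}
print(np.allclose(R_pinv @ R, np.eye(4))) # True
```

Inverting the triangular factor is both cheaper and better conditioned than forming R^{-1} directly, which is the practical payoff of maintaining the factors recursively.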
1) Algorithm steps:
1) Initialize the first column of the Cholesky factor A(n) as A_1(n).
2) Compute the second column recursively from A(n) and A(n-1).
3) Substitute the submatrix A_{N-1,N-1}(n-1) into A_{N,N}(n).
4) Repeat steps (1) to (3) for each time instant.

In the Schur algorithm, the columns of the Cholesky factors at time instant n are computed recursively from the correlation matrix at that instant. In the RChol algorithm, by contrast, the first two columns of the Cholesky factors at time instant n are computed recursively from the previous Cholesky factor, and a submatrix of that Cholesky factor is updated recursively from the previous Cholesky factor, i.e., the one at time instant n-1. The conventional Cholesky algorithms mentioned here were introduced for normal matrices, whereas the proposed algorithm is well suited for block matrices, and simulations are shown for that case only.

Algorithm 4
Recursive Cholesky Update: RChol R = LDL^H
Initialization: for k = 1,
    D_1(n) = r_0^n
    A_1(n) = [r_0^n, r_1^n, \ldots, r_{N-1}^n]^T
    \tilde{A}_1(n) = [0, r_1^n, \ldots, r_{N-1}^n]^T
Order updates on the A's:
for k = 2:
    A_2(n) := Z_M A_1(n-1) - \tilde{A}_1(n) \tilde{k}^{ref}(n)
    D_2(n) = D_1(n-1) [I_M - k^{ref}(n) \tilde{k}^{ref}(n)]
for k > 2:
    A_k(n) = Z_M A_{k-1}(n-1)
    D_k(n) = D_{k-1}(n-1)
Scaling factors:
    k^{ref}(n) = \tilde{A}_1(n)(2,:) \, A_1(n-1)(2,:)^{-1}
    \tilde{k}^{ref}(n) = k^{ref}(n)' \, D_1(n-1) D_1^{-1}(n)

III. SIMULATION RESULTS
To compare the proposed RChol algorithm with the Schur algorithm, we compared the results of both algorithms against the theoretical results. Fig. 1 (a)-(d) shows the difference and the ratio of the matrices \hat{R}_N, \hat{R}_{RChol}, and \hat{R}_{Schur} when the correlation matrix is unknown, which has application in blind channel and/or data estimation. Fig. 1 (a) and (b) show that the maximum error for the Schur algorithm, [\hat{R}_N - \hat{R}_{Schur}], is nearly 6 times that of the RChol algorithm, [\hat{R}_N - \hat{R}_{RChol}]. Fig. 1 (c) and (d) show the corresponding maximum ratios, [\hat{R}_N ./ \hat{R}_{RChol}] and [\hat{R}_N ./ \hat{R}_{Schur}].

Fig. 1: Comparison of the RChol algorithm vs. the Schur algorithm for the unknown and known correlation matrix R. (a, e): proposed algorithm (difference); (b, f): Schur algorithm (difference); (c, g): proposed algorithm (ratio); (d, h): Schur algorithm (ratio).

Fig. 1 (e)-(h) shows the difference and the ratio of the matrices \hat{R}_N, \hat{R}_{RChol}, and \hat{R}_{Schur} when the correlation matrix is known. Fig. 1 (e) and (f) show that the maximum error for the Schur algorithm, [\hat{R}_N - \hat{R}_{Schur}], is again nearly 6 times that of the RChol algorithm, and Fig. 1 (g) and (h) show the corresponding maximum ratios.

From Fig. 1 it can be concluded that the Schur algorithm is best suited when the correlation matrix is known, but it leads to huge error propagation through the columns when R is unknown and therefore cannot be applied to blind channel estimation. Conversely, the RChol algorithm is best suited for blind channel estimation and reduces error propagation through the columns.

IV. CONCLUSION
Conventional methods of Cholesky factorization require the correlation matrix, which needs inner products, while the recursive modified Cholesky (RChol) algorithm is an explicit way of recursively calculating the pseudo-inverse of the matrices without estimating the correlation matrix. It requires fewer iterations, which avoids error propagation through the column updates. The RChol algorithm is most useful for calculating the pseudo-inverse of a time-varying matrix, which is applicable to SIMO/MIMO, CDMA, OFDM, etc., wireless communication systems.

V. Pawar and K. Naik (DIAT, Pune, India)
E-mail: [email protected]
REFERENCES
[1] G. Golub and C. Van Loan: 'Matrix Computations', 2012.
[2] V. Pawar and K. Krishna Naik: 'Blind multipath time varying channel estimation using recursive Cholesky update', AEU - Int. J. Electron. Commun., 2016, 70, no. 1, pp. 113-119.
[3] R. Hunger: 'Floating Point Operations in Matrix-Vector Calculus', Tech. Report, 2007.
[4] C. P. Rialan and L. L. Scharf: 'Fast algorithms for computing QR and Cholesky factors of Toeplitz operators', IEEE Trans. Acoust., 1988.