Modified Basis Pursuit Denoising (Modified-BPDN) for Noisy Compressive Sensing with Partially Known Support
Wei Lu and Namrata Vaswani
Department of Electrical and Computer Engineering, Iowa State University, Ames, IA
{luwei, namrata}@iastate.edu

Abstract — In this work, we study the problem of reconstructing a sparse signal from a limited number of linear 'incoherent' noisy measurements, when a part of its support is known. The known part of the support may be available from prior knowledge or from the previous time instant (in applications requiring recursive reconstruction of a time sequence of sparse signals, e.g. dynamic MRI). We study a modification of Basis Pursuit Denoising (BPDN) and bound its reconstruction error. A key feature of our work is that the bounds we obtain are computable. Hence, we are able to use Monte Carlo to study their average behavior as the size of the unknown support increases. We also demonstrate that when the unknown support size is small, modified-BPDN bounds are much tighter than those for BPDN, and hold under much weaker sufficient conditions (requiring fewer measurements).
Index Terms — Compressive sensing, Sparse reconstruction
I. INTRODUCTION
In this work, we study the problem of reconstructing a sparse signal from a limited number of linear 'incoherent' noisy measurements, when a part of its support is known. In practical applications, this part may be obtained from prior knowledge, e.g. it can be the lowest subband of wavelet coefficients for medical images, which are sparse in the wavelet basis. Alternatively, when reconstructing time sequences of sparse signals, e.g. in a real-time dynamic MRI application, it could be the support estimate from the previous time instant.

In [3], we introduced modified-CS for the noiseless measurements' case. Sufficient conditions for exact reconstruction were derived, and it was argued that these are much weaker than those needed for CS. Modified-CS-residual, which combines the modified-CS idea with CS on the LS residual (LS-CS) [5], was introduced for noisy measurements in [4] for a real-time dynamic MRI reconstruction application. In this paper, we bound the reconstruction error of a simpler special case of modified-CS-residual, which we call modified-BPDN. We use a strategy similar to that of [2] to bound the reconstruction error and hence, just as in [2], the bounds we obtain are computable. We are thus able to use Monte Carlo to study the average behavior of the reconstruction error bound as the size of the unknown support, $\Delta$, increases, or as the size of the support itself, $N$, increases. We also demonstrate that the modified-BPDN bounds are much smaller than those for BPDN (which corresponds to $|\Delta| = |N|$) and hold under much weaker sufficient conditions (requiring fewer measurements).

In parallel and independent work recently posted on arXiv, [7] also proposed an approach related to modified-BPDN and bounded its error. Their bounds are based on Candes' results and hence are not computable. Other related work includes [8] (which focuses on the time series case and mostly studies the time-invariant support case) and [9] (which studies the noiseless measurements' case and assumes probabilistic prior knowledge).

A. Problem definition
We obtain an $n$-length measurement vector $y$ by
$$y = Ax + w \quad (1)$$
Our problem is to reconstruct the $m$-length sparse signal $x$ from the measurement $y$, with $m > n$. The measurement is obtained from an $n \times m$ measurement matrix $A$ and corrupted by an $n$-length noise vector $w$. The support of $x$, denoted $N$, consists of three parts: $N \triangleq (T \cup \Delta) \setminus \Delta_e$, where $\Delta$ and $T$ are disjoint and $\Delta_e \subseteq T$. Here $T$ is the known part of the support, $\Delta_e$ is the error in the known part, and $\Delta$ is the unknown part. We also define $N_e \triangleq T \cup \Delta = N \cup \Delta_e$.
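To make the support-set relations concrete, the small sketch below checks them on a toy example (all sizes and index values here are our own and purely illustrative, not from the paper):

```python
import numpy as np

# Toy check of the support-set relations (all index values hypothetical).
N = np.array([2, 3, 5, 7, 11])      # true support of x
Delta = np.array([7, 11])           # unknown part of the support
Delta_e = np.array([13])            # error in the known part (not in N)
T = np.union1d(np.setdiff1d(N, Delta), Delta_e)   # known support; disjoint from Delta
Ne = np.union1d(T, Delta)           # N_e = T U Delta = N U Delta_e
assert set(N) == (set(T) | set(Delta)) - set(Delta_e)
assert set(Ne) == set(N) | set(Delta_e)
```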
Notation: We use $'$ to denote conjugate transpose. For any set $T$ and vector $b$, $(b)_T$ denotes the sub-vector containing the elements of $b$ with indices in $T$. $\|b\|_k$ denotes the $\ell_k$ norm of the vector $b$. $T^c$ denotes the complement of the set $T$ and $\emptyset$ is the empty set. For the matrix $A$, $A_T$ denotes the sub-matrix obtained by extracting the columns of $A$ with indices in $T$. The matrix norm $\|A\|_p$ is defined as
$$\|A\|_p \triangleq \max_{x \neq 0} \frac{\|Ax\|_p}{\|x\|_p}$$
We also define $\delta_S$ to be the $S$-restricted isometry constant and $\theta_{S,S'}$ to be the $S,S'$-restricted orthogonality constant, as in [6].

II. BOUNDING MODIFIED-BPDN

In this section, we introduce modified-BPDN and derive the bound on its reconstruction error.
A. Modified-BPDN
Equation (5) of [3] gives the modified-CS algorithm for noiseless measurements. We relax the equality constraint of that equation to obtain the modified-BPDN algorithm, using a modification of the BPDN idea [1]. We solve
$$\min_b \frac{1}{2}\|y - Ab\|_2^2 + \gamma\|b_{T^c}\|_1 \quad (2)$$
The solution $\hat{x}$ of this convex optimization problem is the reconstructed signal. In the following two subsections, we bound the reconstruction error.
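Problem (2) is a standard convex program and can be handed to any generic solver. Below is a minimal sketch using cvxpy; the helper name and interface are our own, not part of the paper:

```python
import numpy as np
import cvxpy as cp

def modified_bpdn(A, y, T, gamma):
    """Solve (2): min_b 0.5*||y - A b||_2^2 + gamma*||b_{T^c}||_1."""
    m = A.shape[1]
    Tc = np.setdiff1d(np.arange(m), T)   # T^c: indices outside the known support
    b = cp.Variable(m)
    cost = 0.5 * cp.sum_squares(y - A @ b) + gamma * cp.norm1(b[Tc])
    cp.Problem(cp.Minimize(cost)).solve()
    return b.value
```

Setting $T = \emptyset$ recovers ordinary BPDN.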
B. Bound on the reconstruction error

We now bound the reconstruction error, using a strategy similar to [2]. Define the function
$$L(b) = \frac{1}{2}\|y - Ab\|_2^2 + \gamma\|b_{T^c}\|_1 \quad (3)$$
Consider the minimization of (2) over all vectors supported on $N_e$. If $A_{N_e}$ has full column rank, the function $L(b)$ is strictly convex when minimized over all $b$ supported on $N_e$, and hence it has a unique minimizer there. We denote this unique minimizer by
$$\tilde{b} = [\tilde{b}'_{N_e}\ \, 0'_{N_e^c}]' \quad (4)$$
Also, we denote the genie-aided least squares estimate supported on $N_e$ by $c := [c'_{N_e}\ \, 0'_{N_e^c}]'$, where
$$c_{N_e} := (A'_{N_e} A_{N_e})^{-1} A'_{N_e} y \quad (5)$$
Since $\|c - x\|_2 \le \frac{\|w\|_2}{\sqrt{1 - \delta_{|N_e|}}}$ is quite small when the noise is small and $\delta_{|N_e|}$ is small, we give the error bound for $\tilde{b}$ with respect to $c$ in the following lemma, and later prove that $\tilde{b}$ is also the unique global minimizer under a sufficient condition.
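A sketch of the genie-aided estimate (5), assuming numpy, with Ne an integer index array (helper name ours):

```python
import numpy as np

def genie_ls(A, y, Ne):
    """Genie-aided LS estimate (5): c_{N_e} = (A_Ne' A_Ne)^{-1} A_Ne' y, zero outside N_e."""
    c = np.zeros(A.shape[1])
    c[Ne] = np.linalg.lstsq(A[:, Ne], y, rcond=None)[0]
    return c
```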
Lemma 1: Suppose that $A_{N_e}$ has full column rank, and let $\tilde{b}$ minimize the function $L(b)$ over all vectors supported on $N_e$. We have the following conclusions:

1) A necessary and sufficient condition for $\tilde{b}$ to be the unique minimizer is that
$$c_{N_e} - \tilde{b}_{N_e} = \begin{bmatrix} -\gamma (A'_T A_T)^{-1} A'_T A_\Delta (A'_\Delta M A_\Delta)^{-1} g_\Delta \\ \gamma (A'_\Delta M A_\Delta)^{-1} g_\Delta \end{bmatrix}$$
where $M \triangleq I - A_T (A'_T A_T)^{-1} A'_T$ and $g \in \partial(\|b_{T^c}\|_1)|_{b=\tilde{b}}$, with $\partial(\|b_{T^c}\|_1)$ the subgradient set of $\|b_{T^c}\|_1$. Thus, $g_T = 0$ and $\|g_\Delta\|_\infty = 1$.

2) Error bound in the $\ell_\infty$ norm:
$$\|\tilde{b} - c\|_\infty \le \gamma \max\big(\|(A'_T A_T)^{-1} A'_T A_\Delta (A'_\Delta M A_\Delta)^{-1}\|_\infty,\ \|(A'_\Delta M A_\Delta)^{-1}\|_\infty\big) \quad (6)$$

3) Error bound in the $\ell_2$ norm:
$$\|\tilde{b} - c\|_2 \le \gamma\sqrt{|\Delta|}\sqrt{\|(A'_T A_T)^{-1} A'_T A_\Delta (A'_\Delta M A_\Delta)^{-1}\|_2^2 + \|(A'_\Delta M A_\Delta)^{-1}\|_2^2} \le \gamma\sqrt{|\Delta|}\,\sqrt{\frac{\theta_{|T|,|\Delta|}^2}{(1-\delta_{|T|})^2} + 1}\cdot\frac{1}{1 - \delta_{|\Delta|} - \frac{\theta_{|\Delta|,|T|}^2}{1-\delta_{|T|}}}$$

The proof is given in the Appendix. Next, we obtain a sufficient condition under which $\tilde{b}$ is also the unique global minimizer of $L(b)$.
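Since the bounds in (6) and the first bound in conclusion 3) involve only sub-matrices of $A$, they can be evaluated numerically. A sketch follows, assuming numpy, nonempty index arrays $T$ and $\Delta$, and our own helper name:

```python
import numpy as np

def lemma1_bounds(A, T, Delta, gamma):
    """Computable l_inf bound (6) and the first l_2 bound of Lemma 1 (a sketch)."""
    AT, AD = A[:, T], A[:, Delta]
    # M = I - A_T (A_T' A_T)^{-1} A_T': projector onto the complement of range(A_T)
    M = np.eye(A.shape[0]) - AT @ np.linalg.solve(AT.T @ AT, AT.T)
    Q = np.linalg.inv(AD.T @ M @ AD)                  # (A_Delta' M A_Delta)^{-1}
    B = np.linalg.solve(AT.T @ AT, AT.T @ AD) @ Q     # (A_T' A_T)^{-1} A_T' A_Delta Q
    inf_norm = lambda X: np.abs(X).sum(axis=1).max()  # induced l_inf norm (max row sum)
    linf_bound = gamma * max(inf_norm(B), inf_norm(Q))
    l2_bound = gamma * np.sqrt(len(Delta)) * np.sqrt(
        np.linalg.norm(B, 2) ** 2 + np.linalg.norm(Q, 2) ** 2)
    return linf_bound, l2_bound
```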
Lemma 2: If the following condition is satisfied, then problem (2) has a unique minimizer, which is equal to $\tilde{b}$ defined in (4):
$$\|A'(y - A_{N_e} c_{N_e})\|_\infty < \gamma\Big[1 - \max_{\omega \notin N_e}\|(A'_\Delta M A_\Delta)^{-1} A'_\Delta M A_\omega\|_1\Big]$$
The proof of Lemma 2 is in the Appendix.

Combining Lemmas 1 and 2 and bounding $\|c - x\|_2$, we get the following theorem.

Theorem 1: If $A_{N_e}$ has full column rank and the following condition is satisfied,
$$\|A'(y - A_{N_e} c_{N_e})\|_\infty < \gamma\Big[1 - \max_{\omega \notin N_e}\|(A'_\Delta M A_\Delta)^{-1} A'_\Delta M A_\omega\|_1\Big] \quad (7)$$
then

1) Problem (2) has a unique minimizer $\tilde{b}$, and it is supported on $N_e$.

2) The unique minimizer $\tilde{b}$ satisfies
$$\|\tilde{b} - x\|_\infty \le \gamma\max\big(\|(A'_T A_T)^{-1} A'_T A_\Delta (A'_\Delta M A_\Delta)^{-1}\|_\infty, \|(A'_\Delta M A_\Delta)^{-1}\|_\infty\big) + \|(A'_{N_e} A_{N_e})^{-1} A'_{N_e}\|_\infty \|w\|_\infty \quad (8)$$
and
$$\|\tilde{b} - x\|_2 \le \|(A'_{N_e} A_{N_e})^{-1} A'_{N_e}\|_2\|w\|_2 + \gamma\sqrt{|\Delta|}\sqrt{\|(A'_T A_T)^{-1} A'_T A_\Delta (A'_\Delta M A_\Delta)^{-1}\|_2^2 + \|(A'_\Delta M A_\Delta)^{-1}\|_2^2} \quad (9)$$
$$\le \gamma\sqrt{|\Delta|}\sqrt{\frac{\theta_{|T|,|\Delta|}^2}{(1-\delta_{|T|})^2} + 1}\cdot\frac{1}{1 - \delta_{|\Delta|} - \frac{\theta_{|\Delta|,|T|}^2}{1-\delta_{|T|}}} + \frac{\|w\|_2}{\sqrt{1 - \delta_{|N_e|}}} \quad (10)$$

Now consider BPDN. From Theorem 8 of [2] (the same also follows by setting $T = \emptyset$ in our result), if $A_N$ has full rank and if
$$\|A'(y - A_N (A'_N A_N)^{-1} A'_N y)\|_\infty < \gamma\big[1 - \max_{\omega \notin N}\|(A'_N A_N)^{-1} A'_N A_\omega\|_1\big] \quad (11)$$
then the BPDN minimizer $\tilde{b}_{BPDN}$ satisfies
$$\|\tilde{b}_{BPDN} - x\|_\infty \le \gamma\|(A'_N A_N)^{-1}\|_\infty + \|(A'_N A_N)^{-1} A'_N\|_\infty\|w\|_\infty \quad (12)$$
Similarly, the $\ell_2$ norm bound for BPDN is
$$\|\tilde{b}_{BPDN} - x\|_2 \le \frac{\gamma\sqrt{|N|}}{1 - \delta_{|N|}} + \frac{\|w\|_2}{\sqrt{1 - \delta_{|N|}}} \quad (13)$$

Compare (10) and (13) for the case when $|\Delta| \ll |N|$ and $|\Delta_e| \ll |N|$ (which follows from [4]); the second terms are then nearly equal. Consider an example in which $\delta_{|N|}$, $\delta_{|\Delta|}$ and $\theta_{|T|,|\Delta|}$ are all small and $|\Delta|$ is a small fraction of $|N|$, which is practical for real data. Then the bound for BPDN is on the order of $\gamma_{BPDN}\sqrt{|N|} + \|w\|_2$, while the bound for modified-BPDN is approximately on the order of $\gamma_{modBPDN}\sqrt{|\Delta|} + \|w\|_2$. Using a similar argument, $\gamma_{modBPDN}$, the smallest $\gamma$ satisfying (7), will be smaller than $\gamma_{BPDN}$, the smallest $\gamma$ satisfying (11). Since $|\Delta| \ll |N|$ and $\gamma_{BPDN} > \gamma_{modBPDN}$, the bound for modified-BPDN will be much smaller than that for BPDN. This is one example, but we carry out a detailed simulation comparison in the next section using the computable versions of the bounds given in (8) and (9).
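The threshold implied by (7) is also computable. The sketch below returns the smallest $\gamma$ satisfying (7); a negative return value signals that the RHS of (7) is negative and the condition cannot hold. The helper name and structure are ours:

```python
import numpy as np

def gamma_star(A, y, T, Delta):
    """Smallest gamma satisfying (7); a negative result means (7) cannot hold."""
    Ne = np.union1d(T, Delta)
    c_Ne = np.linalg.lstsq(A[:, Ne], y, rcond=None)[0]
    lhs = np.abs(A.T @ (y - A[:, Ne] @ c_Ne)).max()  # ||A'(y - A_Ne c_Ne)||_inf
    AT, AD = A[:, T], A[:, Delta]
    M = np.eye(A.shape[0]) - AT @ np.linalg.solve(AT.T @ AT, AT.T)
    P = np.linalg.inv(AD.T @ M @ AD) @ AD.T @ M      # (A_Delta' M A_Delta)^{-1} A_Delta' M
    outside = np.setdiff1d(np.arange(A.shape[1]), Ne)
    denom = 1.0 - max(np.abs(P @ A[:, w]).sum() for w in outside)
    return lhs / denom
```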
III. SIMULATION RESULTS
In this section, we compare both the computable $\ell_\infty$ and $\ell_2$ norm bounds for modified-BPDN with those of BPDN using Monte Carlo simulation. Note that BPDN is the special case of modified-BPDN with $\Delta = N$ and $\Delta_e = \emptyset$. We therefore run the following simulation to check how the error bound changes as $|\Delta|$ increases, and to compare the bounds of modified-BPDN with those of BPDN.

We do the simulation as follows (a sketch of the procedure in code is given after the list):
1) Fix $m = 1024$ and the support size $|N|$.
2) Select $n$, $|\Delta|$ and $|\Delta_e|$.
3) Generate the $n \times m$ random-Gaussian matrix $A$ (generate an $n \times m$ matrix with i.i.d. zero mean Gaussian entries and normalize each column to unit $\ell_2$ norm).
4) Repeat the following $tot = 50$ times:
   a) Generate the support $N$, of size $|N|$, uniformly at random from $[1:m]$.
   b) Generate the nonzero elements of the sparse signal $x$ on the support $N$ with i.i.d. Gaussian entries with zero mean and variance 100. Then generate i.i.d. Gaussian noise $w$ with zero mean and variance $\sigma_w^2$. Compute $y := Ax + w$.
   c) Generate the unknown part of the support, $\Delta$, of size $|\Delta|$, uniformly at random from the elements of $N$.
   d) Generate the error in the known part of the support, $\Delta_e$, of size $|\Delta_e|$, uniformly at random from $[1:m] \setminus N$.
   e) Use $T = N \cup \Delta_e \setminus \Delta$ to compute $\gamma^*$ by
   $$\gamma^* = \frac{\|A'(y - A_{N_e} c_{N_e})\|_\infty}{1 - \max_{\omega\notin N_e}\|(A'_\Delta M A_\Delta)^{-1} A'_\Delta M A_\omega\|_1}$$
   and reconstruct with $\gamma = \gamma^*$ using modified-BPDN to obtain $\hat{x}_{modBPDN}$.
   f) Compute the reconstruction error $\|\hat{x}_{modBPDN} - c\|_\infty$.
   g) Compute the $\ell_\infty$ norm bound from (6) and the $\ell_2$ norm bound from (9).
5) Compute the average bounds and average error for the given $n$, $|\Delta|$, $|\Delta_e|$.
6) Repeat for various values of $n$, $|\Delta|$ and $|\Delta_e|$.
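A condensed sketch of one pass of this procedure, reusing the hypothetical helpers modified_bpdn, genie_ls, lemma1_bounds and gamma_star sketched earlier (the parameter values shown are illustrative, not the exact ones used for the figures):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 1024, 205                                      # n = 0.2m, illustrative
N_size, D_size, De_size, sigma_w = 100, 10, 0, 0.1    # sigma_w = noise std, illustrative
A = rng.standard_normal((n, m))
A /= np.linalg.norm(A, axis=0)                        # unit l2-norm columns

errors, bounds = [], []
for _ in range(50):                                   # tot = 50 trials
    N = rng.choice(m, N_size, replace=False)          # step a): random support
    x = np.zeros(m)
    x[N] = 10.0 * rng.standard_normal(N_size)         # step b): variance-100 nonzeros
    y = A @ x + sigma_w * rng.standard_normal(n)
    Delta = rng.choice(N, D_size, replace=False)      # step c): unknown support part
    Delta_e = rng.choice(np.setdiff1d(np.arange(m), N), De_size, replace=False)  # step d)
    T = np.setdiff1d(np.union1d(N, Delta_e), Delta)   # step e): known support estimate
    g = gamma_star(A, y, T, Delta)
    x_hat = modified_bpdn(A, y, T, g)
    c = genie_ls(A, y, np.union1d(T, Delta))
    errors.append(np.abs(x_hat - c).max())            # step f): l_inf error vs genie LS
    bounds.append(lemma1_bounds(A, T, Delta, g))      # step g): computable Lemma 1 bounds
print(np.mean(errors), np.mean(bounds, axis=0))       # step 5): averages
```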
Fig. 1 shows the average bound (RHS of (9)) for different $|\Delta|$ when $|N| = 100 \approx 0.1m$, which is practical for real data as in [3], [4]. The noise variance $\sigma_w^2$ is held fixed at a small value. We show plots for different choices of $n$. The case $|\Delta|/|N| = 1$ in Fig. 1 corresponds to BPDN. From the figures, we observe that the bounds increase as $|\Delta|$ increases. One point worth noting is that for BPDN ($\Delta = N$, $\Delta_e = \emptyset$) in this case, the RHS of (7) is negative and the bound can only hold when the number of measurements satisfies $n \ge 0.95m$. Thus BPDN has difficulty meeting the unique-minimizer condition when $|N|$ grows to about $0.1m$. When $|\Delta|$ is small, however, modified-BPDN easily satisfies the condition even with very few measurements ($n = 0.2m$ when $|\Delta|$ is a small fraction of $|N|$). Hence, the sufficient conditions for modified-BPDN require far fewer measurements than those for BPDN when $|\Delta|$ is small.

[Figure 1: average bound on $\|\tilde{b} - x\|_2$ versus $|\Delta|/|N|$, with curves for $n = 0.2m$, $0.4m$, $0.7m$; panel (a): $|N| = 100$, $\Delta_e = \emptyset$; panel (b): $|N| = 100$, $|\Delta_e| = 10$.]

Fig. 1. The average bound (9) on $\|\tilde{b} - x\|_2$ is plotted. Signal length $m = 1024$ and support size $|N| = 100$. For fixed $n$ and $|\Delta_e|$, the bound increases as $|\Delta|$ increases. As the number of measurements $n$ increases, the bound decreases. When $n = 0.2m$ and $|\Delta|$ is large enough, the RHS of (7) is negative and thus the bound does not hold. We do not plot the case of BPDN ($\Delta = N$, $\Delta_e = \emptyset$), since it requires $n \ge 0.95m$ measurements to make the RHS of (7) positive.

Fig. 2 gives another group of results, showing the average bound (RHS of (9)) for different $|\Delta|$ when $|N| = 15 \approx 0.015m$. The noise variance $\sigma_w^2$ is again small and $\Delta_e = \emptyset$. We reach the same conclusions as for Fig. 1. Note that we do not plot the average error and bound for the larger values of $|\Delta|$ when $n = 0.2m$, since there the RHS of (7) is negative and the bound does not hold. Hence, the more of the support we know, the fewer measurements modified-BPDN requires.

In this case, we also compute the average error and the bound (6) on $\|\tilde{b} - c\|_\infty$. Since $|N_e| = 15$ is small and the noise is small, $\|c - x\|_\infty$ is small and equal for any choice of $|\Delta|$; thus we simply compare $\|\tilde{b} - c\|_\infty$ with its upper bound given in (6). With $n$ fixed and $\Delta_e = \emptyset$, the error and the bound are both 0 for $|\Delta| = 0$, which verifies that the unique minimizer equals the genie-aided least squares estimate on the support $N$ in this case. As $|\Delta|$ grows toward $|N|$ (the BPDN case), both the error and the bound increase, and the gap between them (gap = bound − error) also increases with $|\Delta|$.

[Figure 2: average bound on $\|\tilde{b} - x\|_2$ versus $|\Delta|/|N|$, with curves for $n = 0.2m$, $0.3m$, $0.5m$; $|N| = 15$, $|\Delta_e| = 0$.]

Fig. 2. The average bound (9) on $\|\tilde{b} - x\|_2$ is plotted. Signal length $m = 1024$, support size $|N| = 15$ and $|\Delta_e| = 0$. For fixed $n$, the bound on $\|\tilde{b} - x\|_2$ increases as $|\Delta|$ increases. As the number of measurements $n$ increases, the bound decreases. When $n = 0.2m$ and $|\Delta|$ approaches $|N|$, the RHS of (7) is negative and thus the bound does not hold.
From the simulation results, we conclude as follows:
1) The error and the bound increase as $|\Delta|$ increases.
2) The error and the bound increase as $|N|$ increases.
3) The gap between the error and the bound increases as $|\Delta|$ increases.
4) The error and the bound decrease as $n$ increases.
5) For real data, $|N| \approx 0.1m$. In this case, BPDN needs $n \ge 0.95m$ for its bound to apply, while modified-BPDN can apply its bound with much smaller $n$.
6) When $n$ is large enough, e.g. $n = 0.5m$ for $|N| = 15 = 1.5\%\,m$, the bounds are almost equal for all values of $|\Delta|$ (the black plot of Fig. 2), including $|\Delta| = |N|$ (BPDN).

IV. CONCLUSIONS
We proposed a modification of the BPDN idea, called modified-BPDN, for sparse reconstruction from noisy measurements when a part of the support is known, and bounded its reconstruction error. A key feature of our work is that the bounds we obtain are computable. Hence we are able to use Monte Carlo to show that the average value of the bound increases as the unknown support size, or the size of the error in the known support, increases. We are also able to compare with the BPDN bound and show that (a) for practical support sizes (equal to 10% of the signal size), the BPDN bound holds only under very strong assumptions (requiring more than 95% random Gaussian measurements), and (b) for smaller support sizes (e.g. 1.5% of the signal size), the BPDN bound is much larger than the modified-BPDN bound.
V. APPENDIX
A. Proof of Lemma 1
Suppose $\mathrm{supp}(b) \subseteq N_e$. The vectors $y - Ac = y - A_{N_e} c_{N_e}$ and $Ac - Ab = A_{N_e}(c_{N_e} - b_{N_e})$ are orthogonal, because $A'_{N_e}(y - A_{N_e} c_{N_e}) = 0$ by (5). Thus we can minimize $L(b)$ over all vectors supported on $N_e$ by minimizing
$$F(b) = \frac{1}{2}\|A_{N_e} c_{N_e} - A_{N_e} b_{N_e}\|_2^2 + \gamma\|b_{T^c}\|_1 \quad (14)$$
Since this function is strictly convex, $\tilde{b}$ is its minimizer if and only if $0 \in \partial F(\tilde{b})$. Hence,
$$A'_{N_e} A_{N_e} \tilde{b}_{N_e} - A'_{N_e} A_{N_e} c_{N_e} + \gamma g_{N_e} = 0 \quad (15)$$
Then we have
$$c_{N_e} - \tilde{b}_{N_e} = \gamma (A'_{N_e} A_{N_e})^{-1} g_{N_e} \quad (16)$$
Since
$$A'_{N_e} A_{N_e} = \begin{bmatrix} A'_T A_T & A'_T A_\Delta \\ A'_\Delta A_T & A'_\Delta A_\Delta \end{bmatrix}$$
using the block matrix inversion formula and $g_T = 0$, we get
$$c_{N_e} - \tilde{b}_{N_e} = \begin{bmatrix} -\gamma (A'_T A_T)^{-1} A'_T A_\Delta (A'_\Delta M A_\Delta)^{-1} g_\Delta \\ \gamma (A'_\Delta M A_\Delta)^{-1} g_\Delta \end{bmatrix}$$
Thus, we obtain the $\ell_\infty$ norm bound on the error as follows:
$$\|\tilde{b}_{N_e} - c_{N_e}\|_\infty = \gamma\|(A'_{N_e} A_{N_e})^{-1} g_{N_e}\|_\infty \le \gamma\max\big(\|(A'_T A_T)^{-1} A'_T A_\Delta (A'_\Delta M A_\Delta)^{-1} g_\Delta\|_\infty,\ \|(A'_\Delta M A_\Delta)^{-1} g_\Delta\|_\infty\big) \le \gamma\max\big(\|(A'_T A_T)^{-1} A'_T A_\Delta (A'_\Delta M A_\Delta)^{-1}\|_\infty,\ \|(A'_\Delta M A_\Delta)^{-1}\|_\infty\big)$$
The last step uses $\|g_\Delta\|_\infty = 1$. Similarly, using $\|g_\Delta\|_2 \le \sqrt{|\Delta|}$, we get the $\ell_2$ norm bound on $\tilde{b} - c$. Using $\|(A'_T A_T)^{-1}\|_2 \le \frac{1}{1 - \delta_{|T|}}$, $\|A'_\Delta A_\Delta\|_2 \ge 1 - \delta_{|\Delta|}$ and $\|A'_T A_\Delta\|_2 \le \theta_{|T|,|\Delta|}$, we get (10).

B. Proof of Lemma 2
Suppose that $A_{N_e}$ has full column rank, and let $\tilde{b}$ minimize the function $L(b)$ over all $b$ supported on $N_e = T \cup \Delta$. We need to prove that under the given condition, $\tilde{b}$ is the unique global minimizer of $L(b)$.

The idea is to prove that, under the given condition, any small perturbation $h$ of $\tilde{b}$ increases the function value, i.e. $L(\tilde{b} + h) - L(\tilde{b}) > 0$ for all $\|h\|_\infty \le \delta$ with $\delta$ small enough. Since $L(b)$ is a convex function, $\tilde{b}$ is then the unique global minimizer.

Similar to [2], we first split the perturbation into two parts, $h = u + v$, where $\mathrm{supp}(u) = N_e$ and $\mathrm{supp}(v) = N_e^c$. Clearly $\|u\|_\infty \le \|h\|_\infty \le \delta$. Then we have
$$L(\tilde{b} + h) = \frac{1}{2}\|y - A(\tilde{b} + u) - Av\|_2^2 + \gamma\|(\tilde{b} + u)_{T^c} + v_{T^c}\|_1 \quad (17)$$
Expanding the first term, we obtain
$$\|y - A(\tilde{b} + u) - Av\|_2^2 = \|y - A(\tilde{b} + u)\|_2^2 + \|Av\|_2^2 - 2\,\mathrm{Re}\langle y - A\tilde{b}, Av\rangle + 2\,\mathrm{Re}\langle Au, Av\rangle \quad (18)$$
The second term of (17) becomes
$$\|(\tilde{b} + u)_{T^c} + v_{T^c}\|_1 = \|(\tilde{b} + u)_{T^c}\|_1 + \|v_{T^c}\|_1 \quad (19)$$
since $(\tilde{b} + u)_{T^c}$ and $v_{T^c}$ have disjoint supports. Then we have
$$L(\tilde{b} + h) - L(\tilde{b}) = L(\tilde{b} + u) - L(\tilde{b}) + \frac{1}{2}\|Av\|_2^2 - \mathrm{Re}\langle y - A\tilde{b}, Av\rangle + \mathrm{Re}\langle Au, Av\rangle + \gamma\|v_{T^c}\|_1 \quad (20)$$
Since $\tilde{b}$ minimizes $L(b)$ over all vectors supported on $N_e$, $L(\tilde{b} + u) - L(\tilde{b}) \ge 0$. Since also $\|Av\|_2^2 \ge 0$, it remains to prove that the rest is non-negative: $\gamma\|v_{T^c}\|_1 - \mathrm{Re}\langle y - A\tilde{b}, Av\rangle + \mathrm{Re}\langle Au, Av\rangle \ge 0$. Instead, we prove the stronger statement $\gamma\|v_{T^c}\|_1 - |\langle y - A\tilde{b}, Av\rangle| - |\langle Au, Av\rangle| \ge 0$.

Since $\langle y - A\tilde{b}, Av\rangle = v'A'(y - A\tilde{b})$ and $\mathrm{supp}(v) = N_e^c$,
$$|\langle y - A\tilde{b}, Av\rangle| = |v'_{N_e^c} A'_{N_e^c}(y - A\tilde{b})| \le \|v\|_1 \|A'_{N_e^c}(y - A\tilde{b})\|_\infty$$
Thus,
$$|\langle y - A\tilde{b}, Av\rangle| \le \max_{\omega \notin N_e}|\langle y - A\tilde{b}, A_\omega\rangle|\cdot\|v\|_1 \quad (21)$$
The third term can be bounded as
$$|\langle Au, Av\rangle| \le \|A'Au\|_\infty\|v\|_1 \le \delta\|A'A\|_\infty\|v\|_1 \quad (22)$$
And $\|v\|_1 = \|v_{T^c}\|_1$ since $\mathrm{supp}(v) = N_e^c \subseteq T^c$. Therefore,
$$L(\tilde{b} + h) - L(\tilde{b}) \ge \Big[\gamma - \max_{\omega \notin N_e}|\langle y - A\tilde{b}, A_\omega\rangle| - \delta\|A'A\|_\infty\Big]\|v\|_1 \quad (23)$$
Since we can select $\delta > 0$ as small as we like, we just need
$$\gamma - \max_{\omega \notin N_e}|\langle y - A\tilde{b}, A_\omega\rangle| > 0 \quad (24)$$
Invoking Lemma 1, we have $A_{N_e}(c_{N_e} - \tilde{b}_{N_e}) = \gamma M A_\Delta (A'_\Delta M A_\Delta)^{-1} g_\Delta$. Since $y - A\tilde{b} = (y - A_{N_e} c_{N_e}) + A_{N_e}(c_{N_e} - \tilde{b}_{N_e})$, we have
$$|\langle y - A\tilde{b}, A_\omega\rangle| \le |\langle y - A_{N_e} c_{N_e}, A_\omega\rangle| + \gamma|\langle (A'_\Delta M A_\Delta)^{-1} A'_\Delta M A_\omega, g_\Delta\rangle| \quad (25)$$
Then we only need the condition
$$\gamma - \max_{\omega \notin N_e}\Big[\gamma|\langle (A'_\Delta M A_\Delta)^{-1} A'_\Delta M A_\omega, g_\Delta\rangle| + |\langle y - A_{N_e} c_{N_e}, A_\omega\rangle|\Big] > 0 \quad (26)$$
Since $y - A_{N_e} c_{N_e}$ is orthogonal to $A_\omega$ for each $\omega \in N_e$, we have $\max_{\omega \notin N_e}|\langle y - A_{N_e} c_{N_e}, A_\omega\rangle| = \|A'(y - A_{N_e} c_{N_e})\|_\infty$. Also, $\max_{\omega \notin N_e}|\langle (A'_\Delta M A_\Delta)^{-1} A'_\Delta M A_\omega, g_\Delta\rangle| \le \max_{\omega \notin N_e}\|(A'_\Delta M A_\Delta)^{-1} A'_\Delta M A_\omega\|_1$, using $\|g_\Delta\|_\infty = 1$. Thus, (26) holds if the following condition holds:
$$\|A'(y - A_{N_e} c_{N_e})\|_\infty < \gamma\Big[1 - \max_{\omega \notin N_e}\|(A'_\Delta M A_\Delta)^{-1} A'_\Delta M A_\omega\|_1\Big] \quad (27)$$
i.e. $\tilde{b}$ is the unique global minimizer if (27) holds.

REFERENCES

[1] S. S. Chen, D. L. Donoho, and M. A. Saunders,
Atomic decomposition by basis pursuit, SIAM J. Sci. Comput., vol. 20, no. 1, pp. 33-61, 1999.
[2] J. A. Tropp, Just Relax: Convex programming methods for identifying sparse signals in noise, IEEE Trans. Info. Theory, vol. 52, no. 3, pp. 1030-1051, March 2006.
[3] N. Vaswani and W. Lu, Modified-CS: Modifying compressive sensing for problems with partially known support, IEEE Intl. Symp. Info. Theory (ISIT), 2009.
[4] W. Lu and N. Vaswani, Modified compressive sensing for real-time dynamic MR imaging, IEEE Intl. Conf. Image Proc. (ICIP), 2009.
[5] N. Vaswani, Analyzing least squares and Kalman filtered compressed sensing, IEEE Intl. Conf. Acoust. Speech Sig. Proc. (ICASSP), 2009.
[6] E. Candes and T. Tao, Decoding by linear programming, IEEE Trans. Info. Theory, vol. 51, no. 12, pp. 4203-4215, Dec. 2005.
[7] L. Jacques, A short note on compressed sensing with partially known signal support, arXiv preprint arXiv:0908.0660v1, 2009.
[8] D. Angelosante, E. Grossi, and G. B. Giannakis,
Compressed sensing of time-varying signals, Intl. Conf. on Digital Signal Processing (DSP), 2009.
[9] A. Khajehnejad, W. Xu, A. Avestimehr, and B. Hassibi, Weighted l1 minimization for sparse recovery with prior information, IEEE Intl. Symp. Info. Theory (ISIT), 2009.