A Novel Bayesian Approach for the Two-Dimensional Harmonic Retrieval Problem
Rohan R. Pote and Bhaskar D. Rao
Department of Electrical and Computer Engineering, University of California San Diego
ABSTRACT
Sparse signal recovery algorithms like sparse Bayesian learning work well, but their complexity quickly grows when tackling higher-dimensional parametric dictionaries. In this work we propose a novel Bayesian strategy to address the two-dimensional harmonic retrieval problem, through remodeling and reparameterization of the standard data model. This new model allows us to introduce a block sparsity structure in a manner that enables a natural pairing of the parameters in the two dimensions. The numerical simulations demonstrate that the inference algorithm developed (H-MSBL) does not suffer from source identifiability issues and is capable of estimating the harmonic components in challenging scenarios, while maintaining a low computational complexity.
Index Terms — sparse Bayesian learning, multidimensional harmonic retrieval, block sparsity, rectangular arrays, direction of arrival estimation
1. INTRODUCTION
Many signal processing applications involve solving a multidimensional harmonic retrieval (MHR) problem. Examples include MIMO radar [1], MIMO wireless channel sounding [2], and nuclear magnetic resonance spectroscopy [3]. In addition to resolution problems, a new challenge for methods in 2-D is that, though the pairs are distinct, there could be harmonics with the same value in one dimension, e.g., (a, b) and (a, c). To address the MHR problem, many techniques have been proposed in the past [4–12], and efforts have been made to gain a solid theoretical understanding [13, 14]. Subspace-based methods for solving the MHR problem include multiple signal classification (MUSIC) [9, 10], estimation of signal parameters via rotational invariance technique (ESPRIT) [6–8], the rank reduction estimator (RARE) [5], multi-dimensional folding (MDF) [2], and matrix pencil [4]. Many of these methods rely on the assumption of uncorrelated sources, and their performance degrades when the assumption does not hold. Also, techniques like ESPRIT are applicable to uniform array geometries only. Approaches like [5, 15] extend 1-D techniques to solve the higher-dimensional estimation problem, such that an additional coupling of the parameters in each dimension needs to be solved. On the other hand, techniques in [8, 12] allow a natural pairing of the two sets of parameters.

In this work we approach the two-dimensional harmonic retrieval problem using sparse signal recovery (SSR) tools. SSR methods allow one to reconstruct a spectrally sparse signal using highly incomplete temporal or spatial data [16, 17]. In particular,
in this paper we address the higher computational complexity associated with the sparse Bayesian learning (SBL) algorithm [18, 19] when the underlying parameter is multi-dimensional. The contributions of this paper are as follows:
• A new Bayesian strategy with reduced complexity, based on a novel remodeling and reparameterization of the data model.
• An approach that induces a natural coupling of the parameters in the two dimensions by introducing block sparsity.
• A solution to the two-dimensional HR problem using a 1-D grid; the learned block structure allows estimating the parameters in the other dimension in a gridless manner.
• Although we solve the problem with a grid in one dimension, we numerically show that our method does not suffer from poor identifiability.
In Section 2 we introduce the system model and discuss some background on the problem at hand. The proposed method is discussed in Section 3. Finally, in Section 4, we provide numerical results which support the usefulness of the proposed technique.

(This research was supported by ONR Grant No. N00014-18-1-2038 and the UCSD Center for Wireless Communications.)
2. SYSTEM MODEL AND BACKGROUND

2.1. System Model
Consider a URA with N_x sensors along the x axis and N_y sensors along the y axis. The inter-element spacing in the array is assumed to be half-wavelength along both axes, and L measurements or snapshots are collected. The received measurements are composed of K narrowband source signals. We assume that signals impinging on the array come from a finite 2-D grid of elevation (θ_k ∈ [0, π/2)) and azimuth (φ_k ∈ [0, 2π)) angles, where k ∈ {1, …, K} corresponds to the k-th source. The received measurement at the (n_x, n_y) sensor, at the l-th instant, is given by (0 ≤ n_x < N_x, 0 ≤ n_y < N_y; n_x, n_y ∈ Z)

Ȳ_l[n_x, n_y] = Σ_{k=1}^{K} s_{k,l} exp(jπ(n_x u_k + n_y v_k)) + V̄_l[n_x, n_y],   (1)

where u_k = cos φ_k sin θ_k, v_k = sin φ_k sin θ_k (u_k, v_k ∈ [−1, 1], u_k² + v_k² ≤ 1). In the above equation, s_{k,l} denotes the symbol transmitted by the k-th source, and V̄_l[n_x, n_y] denotes the noise at the l-th instant, modeled as white circular Gaussian. Eq. (1) can be rewritten in matrix form as

Ȳ_l = Σ_{k=1}^{K} s_{k,l} Φ_{u,k} Φ_{v,k}^T + V̄_l   (0 ≤ l < L)   (2)
    = Φ_u S_l Φ_v^T + V̄_l.   (3)

In eq. (2), Φ_{u,k} = [1, exp(jπu_k), …, exp(jπ(N_x − 1)u_k)]^T and Φ_{v,k} = [1, exp(jπv_k), …, exp(jπ(N_y − 1)v_k)]^T represent the array response vectors of the k-th source along the x and y axes, respectively. Let the grid sizes in the u- and v-spaces be M_u and M_v, respectively. In eq. (3) we introduce the corresponding dictionary matrices Φ_u ∈ C^{N_x × M_u} and Φ_v ∈ C^{N_y × M_v} in the u- and v-space, respectively. The symbol matrix S_l has at most K non-zero entries, corresponding to the K source symbols. In addition to sources located on-grid, we also assume common sparsity along the snapshots, i.e., the source locations (u_k, v_k), although unknown, remain fixed over time.
Consequently, the support of the symbol matrix S_l is constant. We use the above model throughout this paper, as (uniform) grid definitions in (u, v)-space often lead to less coherent dictionaries than those in (θ, φ)-space. The entries in S_l are assumed to be zero mean and uncorrelated, and the matrices are generated i.i.d. over the snapshots.

A straightforward way to recover the support, i.e., the K source locations, is to extend an off-the-shelf sparse signal recovery algorithm to handle 2-D grid points. For example, we can take the transpose of the received snapshot and vectorize the result to get (0 ≤ l < L)

y_l = vec(Ȳ_l^T) = (Φ_u ⊗ Φ_v) vec(S_l^T) + vec(V̄_l^T).   (4)

We now concatenate the L snapshots together to get

Y = (Φ_u ⊗ Φ_v)[vec(S_0^T), vec(S_1^T), …, vec(S_{L−1}^T)] + [vec(V̄_0^T), vec(V̄_1^T), …, vec(V̄_{L−1}^T)] = (Φ_u ⊗ Φ_v) X̄ + V.   (5)

Thus, the resulting dictionary is simply the Kronecker product of the 1-D dictionaries. However, such an extension quickly increases the computational cost: the complexity per iteration of the MSBL algorithm [18] is O(N_x N_y × M_u M_v), i.e., roughly quadratic in the 1-D grid size. The goal of our work is to reduce this complexity significantly while retaining the superior source identifiability characteristics by preserving the dimension of the measurements. The dimension-preserving requirement allows us to identify more sources than is possible by essentially solving two 1-D problems and then coupling the resulting 1-D solutions. This latter method employs one dictionary at a time, and each time considers measurements along one axis while concatenating data along the other axis. For estimating the u components, instead of vectorizing the matrix of measurements obtained at each snapshot, one simply works with the concatenated matrix [Ȳ_0, Ȳ_1, …, Ȳ_{L−1}]. When estimating the v components, one utilizes Ȳ_l^T.
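The construction in eqs. (4)–(5) can be sketched numerically. The following is a minimal NumPy example with small hypothetical sizes of our own choosing; it builds the 1-D dictionaries, draws an on-grid source matrix with a support fixed over the snapshots, and verifies that vec(Ȳ_l^T) = (Φ_u ⊗ Φ_v) vec(S_l^T) for a noiseless snapshot:

```python
import numpy as np

rng = np.random.default_rng(0)
Nx, Ny, Mu, Mv, K, L = 4, 4, 20, 20, 3, 10   # hypothetical small sizes

# 1-D steering dictionaries: column m is [1, e^{j*pi*u_m}, ..., e^{j*pi*(N-1)u_m}]^T
u_grid = np.linspace(-1, 1, Mu, endpoint=False)
v_grid = np.linspace(-1, 1, Mv, endpoint=False)
Phi_u = np.exp(1j * np.pi * np.arange(Nx)[:, None] * u_grid)
Phi_v = np.exp(1j * np.pi * np.arange(Ny)[:, None] * v_grid)

# K on-grid sources whose support stays fixed over all L snapshots (common sparsity)
iu, iv = rng.choice(Mu, K, replace=False), rng.choice(Mv, K, replace=False)
S = np.zeros((Mu, Mv, L), dtype=complex)
S[iu, iv, :] = rng.standard_normal((K, L)) + 1j * rng.standard_normal((K, L))

# Row i_u*Mv + i_v of Xbar holds S_l[i_u, i_v], matching the vec(S_l^T) ordering
Xbar = S.reshape(Mu * Mv, L)
D = np.kron(Phi_u, Phi_v)            # C^{NxNy x MuMv} dictionary of eq. (5)
Y = D @ Xbar                         # noiseless Y; add noise V for the full model

# Check eq. (4) against the matrix model of eq. (3) for the first snapshot
Ybar0 = Phi_u @ S[:, :, 0] @ Phi_v.T
assert np.allclose(Y[:, 0], Ybar0.reshape(-1))   # vec(Ybar_0^T) = row-major flatten
```

The check uses the identity vec(ABC) = (C^T ⊗ A) vec(B) with A = Φ_v, B = S_l^T, C = Φ_u^T, which is exactly the step taken in eq. (4).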
Although this method keeps the computational complexity low, it suffers from poor identifiability, i.e., in the number of sources that can be recovered. This is because the dimension of the vector measurements is either N_x or N_y, much smaller than the N_x N_y used in the Kronecker-product-based approach. It also treats the data along the other dimension as independent snapshots, which is not true for the columns within a snapshot (matrix measurement) and leads to sub-optimal results.
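The data arrangement used by this low-complexity baseline can be sketched as follows; the array names and sizes here are hypothetical, chosen only to illustrate the two concatenations described above:

```python
import numpy as np

rng = np.random.default_rng(1)
Nx, Ny, L = 4, 6, 50
# Hypothetical stack of L snapshot matrices Ybar_l, each Nx x Ny
Ybar = rng.standard_normal((L, Nx, Ny)) + 1j * rng.standard_normal((L, Nx, Ny))

# u-estimation: treat the Ny columns of every snapshot as extra "snapshots"
Yu = np.concatenate([Ybar[l] for l in range(L)], axis=1)    # Nx x (Ny*L)

# v-estimation: the same trick on the transposed snapshots
Yv = np.concatenate([Ybar[l].T for l in range(L)], axis=1)  # Ny x (Nx*L)

# Each 1-D problem sees only Nx (or Ny) measurement rows, which is why
# the identifiability is poorer than with the NxNy-row Kronecker model
assert Yu.shape == (Nx, Ny * L) and Yv.shape == (Ny, Nx * L)
```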
3. PROPOSED BAYESIAN STRATEGY
The goal of this work is to provide a solution with bounded complexity, much lower than that of using the Kronecker product of dictionaries, while exploiting the rich structure in the received measurements. The basic idea for lowering the computational complexity is to solve for one parameter using a 1-D dictionary, while exploiting additional structure to simultaneously infer the coupled parameter in the second dimension.
We begin by rewriting eq. (5) and simplifying it further:

Y = (Φ_u ⊗ Φ_v) X̄ + V = (Φ_u ⊗ I_{N_y})(I_{M_u} ⊗ Φ_v) X̄ + V   (6)
  = (Φ_u ⊗ I_{N_y}) X + V   (for some X = (I_{M_u} ⊗ Φ_v) X̄),   (7)

where in eq. (6) we use the Kronecker product identity AC ⊗ BD = (A ⊗ B)(C ⊗ D). Equation (7) defines our new model and is an important contribution of this work. Our strategy is to treat D_u = Φ_u ⊗ I_{N_y} as the effective dictionary and to first estimate the u's. Note that the row dimension of D_u = Φ_u ⊗ I_{N_y} is the same as in MSBL, which allows the source identifiability properties to be retained. The number of columns is, however, much smaller, leading to the low-complexity aspect of the approach.

Another important contribution and unique feature of the method is the estimation of the components in the second dimension. The assumptions made on the symbol matrices S_l, 0 ≤ l < L, translate to the fact that each column of X̄ has uncorrelated components, and the columns themselves are i.i.d. On the other hand, consider (using MATLAB notation below)

X^i = X((i − 1)N_y + 1 : iN_y, :)   (i ∈ {1, …, M_u})
    = Φ_v X̄((i − 1)M_v + 1 : iM_v, :),   (8)

which implies that each such block, X^i, has correlated components along each column. Let x_l^i ∈ C^{N_y} denote the l-th column of the i-th block, and let x_l = [(x_l^1)^T, …, (x_l^{M_u})^T]^T denote the l-th column of X. Then eq. (8) implies that x_l^i has correlated components. Hence, we conclude that X in the new model, i.e., in eq. (7), is block sparse. Also, the correlation within each block is structured, and influenced by the coupled parameters in the other dimension, i.e., v. We remove the parameter v from the original model (eq. (5)) completely, and instead track the correlation matrices R_{x,i} = E[x_l^i (x_l^i)^H], where E[·] denotes the expectation operation. This reparameterization allows us to estimate the coupled v_i's in a gridless manner.
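The factorization behind eqs. (6)–(7) is easy to verify numerically; a minimal sketch with arbitrary random matrices (the sizes are hypothetical) checks that the Kronecker identity reproduces the original model:

```python
import numpy as np

rng = np.random.default_rng(2)
Nx, Ny, Mu, Mv, L = 3, 4, 5, 6, 2
Phi_u = rng.standard_normal((Nx, Mu)) + 1j * rng.standard_normal((Nx, Mu))
Phi_v = rng.standard_normal((Ny, Mv)) + 1j * rng.standard_normal((Ny, Mv))
Xbar  = rng.standard_normal((Mu * Mv, L)) + 0j

# (AC) ⊗ (BD) = (A ⊗ B)(C ⊗ D), with A = Phi_u, B = I_Ny, C = I_Mu, D = Phi_v
Du = np.kron(Phi_u, np.eye(Ny))            # effective dictionary, C^{NxNy x MuNy}
X  = np.kron(np.eye(Mu), Phi_v) @ Xbar     # block-sparse latent variable of eq. (7)
assert np.allclose(np.kron(Phi_u, Phi_v) @ Xbar, Du @ X)
```

Note that D_u has M_u N_y columns rather than M_u M_v, which is where the complexity saving comes from.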
Our approach follows a maximum-likelihood (ML) principle, although for the sake of simplicity we do not impose the additional (Toeplitz) correlation structure here, and we show through numerical simulations that this is adequate.

As per our observation in eq. (8), we introduce a Gaussian block prior on x_l^i, i ∈ {1, …, M_u}, as

p(x_l^i; γ_i, B_i) ∼ N(0, γ_i B_i),   (0 ≤ l < L)   (9)

such that there is no inter-block correlation. Thus the prior on x_l is given by p(x_l; γ_i, B_i, ∀i) ∼ N(0, Σ_0), where

Σ_0 = blkdiag(γ_1 B_1, …, γ_{M_u} B_{M_u}).   (10)

Notice that the prior does not depend on l, as the columns are i.i.d. over the snapshots. Our formulation here induces a targeted block structure, unlike in T-SBL [20] where a temporal correlation structure exists over all the snapshots. Our approach is valid for 2-D harmonic retrieval problems and also extends more generally to the MHR problem, as long as the snapshots are independent. The model parameters are estimated using an ML approach, and the negative log-likelihood function is given by

−log p(Y; λ, γ_i, B_i, ∀i) = log det Σ_y + tr(Σ_y^{−1} Ŝ_y),   (11)

where Σ_y ≜ D_u Σ_0 D_u^H + λI, λ denotes the noise variance parameter to be estimated, and Ŝ_y = (1/L) Σ_l y_l y_l^H denotes the sample covariance matrix.

The cost function in eq. (11) is non-convex, and any majorization-minimization framework can be used to minimize it. We here employ the Expectation-Maximization (EM) framework to effectively maximize p(Y; λ, γ_i, B_i, ∀i). We begin by estimating the posterior density p(x_l | y_l; λ^(k), γ_i^(k), B_i^(k), ∀i) based on the current estimate of the parameters Θ = [λ, γ_i, B_i, ∀i]. We drop the superscript (k) denoting the current iteration for notational simplicity:

p(x_l | y_l; Θ) ∼ N(μ_{x,l}, Σ_x),   (12)

where

μ_{x,l} = (1/λ) Σ_x D_u^H y_l,   (13)
Σ_x = Σ_0 − Σ_0 D_u^H (λ I_{N_x N_y} + D_u Σ_0 D_u^H)^{−1} D_u Σ_0.   (14)

Note that the mean of the posterior depends on the snapshot index l. We concatenate the posterior means over snapshots to form μ_x = [μ_{x,0}, …, μ_{x,L−1}].

We now proceed to updating the current estimate of the parameters in Θ. In contrast to the work in [20], where a single correlation matrix B was rightfully motivated, here we take a different approach. Since the B_i's are expected to carry information about the coupled v_i's corresponding to the individual u_i's, we do not enforce a single B matrix. This strategy is crucial when identifying sources with distinct u_i's. The following steps are similar to those in [20], and we simply state the update equations for the γ_i's, the B_i's, and the noise variance estimate λ:

γ_i ← (1/N_y) tr(B_i^{−1} (Σ_x^i + (1/L) μ_x^i (μ_x^i)^H)),   (15)
B_i ← (1/γ_i) (Σ_x^i + (1/L) μ_x^i (μ_x^i)^H),   (16)

where Σ_x^i = Σ_x((i − 1)N_y + 1 : iN_y, (i − 1)N_y + 1 : iN_y) and μ_x^i = μ_x((i − 1)N_y + 1 : iN_y, :), i ∈ {1, …, M_u}. For updating λ we use the following update equation:

λ ← (1/(N_x N_y L)) ‖Y − D_u μ_x‖_F² + (λ/N_x) tr(Φ_u Γ Φ_u^H (Φ_u Γ Φ_u^H + λI)^{−1}).   (17)

Note that setting N_y = 1 in the above equations (except for λ) gives the update equations for the EM-MSBL algorithm. In the above, we used the notation Γ = diag(γ) = diag([γ_1, …, γ_{M_u}]). We denote the algorithm using equations (13)–(17) as H-MSBL, emphasizing that our approach is applicable to the Harmonic retrieval problem. A few practical remarks are in order. Remark 1:
We normalize the correlation matrices, B_i ← B_i/‖B_i‖_F, ∀i, so as to separate the roles of B_i and γ_i. The above update equation for λ is different from the one derived using the EM algorithm; the latter was empirically observed to be unstable. Thus, we instead replaced it with an adaptation of eq. (33) in [20] for the high-SNR case, and also incorporated the suggestions therein for the low-SNR case, which produced stable updates. Remark 2:
Given the number of sources to be identified, the top peaks in the recovered γ provide information about the u components of the present sources. The corresponding B_i's provide information about the coupled v components. This obviates the need to separately estimate the v's and pair them with the u's. In our simulations, we use root-MUSIC [10] to extract the coupled v components. Remark 3:
An interesting and challenging scenario in the 2-D HR problem is when distinct 2-D harmonics have the same value in one dimension. Methods have been designed that assume the harmonics are distinct in both dimensions [21], and special care has to be taken in algorithm development to deal with this challenging scenario [15]. It will be shown numerically that H-MSBL works even in this context, when multiple sources share the same u or v grid point.

The per-iteration complexity of the MSBL algorithm applied to the model in eq. (5) is O(N_x N_y × M_u M_v). That of the proposed H-MSBL algorithm is O(N_x N_y × M_u N_y) (or O(N_x N_y × M_v N_x)) if the grid is defined on u (or on v). This is a significant reduction in computational complexity, as both N_x and N_y are physical array dimensions and thus much smaller than the corresponding grid sizes. Like the MSBL algorithm, the complexity of the proposed algorithm does not depend on the number of snapshots, because the (ML) cost function depends on the snapshots only through the outer product YY^H. To achieve the proposed complexity, one replaces Y with Ỹ ∈ C^{N_x N_y × rank(Y)} in the update equations, such that YY^H = ỸỸ^H, and only computes the diagonal blocks of Σ_x of size N_y × N_y.

The MSBL algorithm introduces a missing or latent variable X̄ ∈ C^{M_u M_v × N_x N_y}, which is more complex than the latent variable X ∈ C^{M_u N_y × N_x N_y} we have introduced, both in the reduced dimension. On the other hand, the (ML) cost function in MSBL is defined over γ ∈ C^{M_u M_v}, whereas here the cost function is defined over Θ, involving a total of M_u(N_y² + 1) parameters, ignoring the noise variance parameter. The EM algorithm is known to converge slowly if one chooses an overly informative complete data. In the next section we numerically compare the rate of convergence of the two algorithms.
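One EM iteration of the updates in eqs. (13)–(16), together with the rank-reduction trick Y → Ỹ described above, can be sketched as follows. This is a schematic reading of the equations under our own naming, not the authors' implementation: it forms the full Σ_x rather than only its N_y × N_y diagonal blocks, and it omits the λ update of eq. (17).

```python
import numpy as np

def hmsbl_em_step(Y, Du, gamma, Bs, lam):
    """One schematic H-MSBL EM iteration for eqs. (13)-(16) (unoptimized sketch)."""
    NxNy, L = Y.shape
    Mu, Ny = len(gamma), Bs[0].shape[0]
    # Rank reduction: replace Y by Ytil with Y Y^H = Ytil Ytil^H (economy SVD)
    U, s, _ = np.linalg.svd(Y, full_matrices=False)
    r = int(np.sum(s > 1e-10 * s[0]))
    Ytil = U[:, :r] * s[:r]
    # Prior covariance Sigma_0 = blkdiag(gamma_1 B_1, ..., gamma_Mu B_Mu), eq. (10)
    Sigma0 = np.zeros((Mu * Ny, Mu * Ny), dtype=complex)
    for i in range(Mu):
        Sigma0[i*Ny:(i+1)*Ny, i*Ny:(i+1)*Ny] = gamma[i] * Bs[i]
    # E-step, eqs. (12)-(14); the posterior mean uses the equivalent gain form
    Sigma_y = Du @ Sigma0 @ Du.conj().T + lam * np.eye(NxNy)
    G = Sigma0 @ Du.conj().T @ np.linalg.inv(Sigma_y)
    Mu_x = G @ Ytil                       # posterior means over reduced snapshots
    Sigma_x = Sigma0 - G @ Du @ Sigma0
    # M-step, eqs. (15)-(16), with the Remark 1 normalization of B_i
    for i in range(Mu):
        blk = slice(i * Ny, (i + 1) * Ny)
        M = Sigma_x[blk, blk] + Mu_x[blk] @ Mu_x[blk].conj().T / L
        gamma[i] = np.real(np.trace(np.linalg.solve(Bs[i], M))) / Ny
        if gamma[i] > 1e-12:
            B = M / gamma[i]
            Bs[i] = B / np.linalg.norm(B, 'fro')
    return gamma, Bs

# Toy usage with hypothetical sizes
rng = np.random.default_rng(3)
Nx, Ny, Mu, L = 3, 4, 5, 20
Du = np.kron(rng.standard_normal((Nx, Mu)) + 0j, np.eye(Ny))
Y = rng.standard_normal((Nx * Ny, L)) + 1j * rng.standard_normal((Nx * Ny, L))
gamma = np.ones(Mu)
Bs = [np.eye(Ny, dtype=complex) / np.sqrt(Ny) for _ in range(Mu)]
gamma, Bs = hmsbl_em_step(Y, Du, gamma, Bs, lam=0.1)
```

Since μ_x μ_x^H = G YY^H G^H = G ỸỸ^H G^H, working with Ỹ leaves the M-step statistics unchanged while shrinking the per-snapshot work when L exceeds rank(Y).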
4. NUMERICAL SIMULATIONS
In this section, we compare the performance of the proposed H-MSBL algorithm to MSBL (using the Kronecker product of 1-D dictionaries in eq. (5)). MSBL is chosen because it is found to be comparable, if not better than most algorithms, when applied to direction of arrival estimation. In addition, it is robust to impairments such as source correlation [22]. Furthermore, a competing algorithm like MUSIC also has the same complexity issues as MSBL in the search process. The main purpose of the work is to show that the low-complexity H-MSBL can compare favorably with state-of-the-art algorithms, and so the more complex MSBL is a good reference point for comparison. (The complexity comparison assumes M_u > N_x, which is usually the case.)

Fig. 1. H-MSBL vs. MSBL: N_x = N_y = 4, K = 6, SNR = 20 dB, L = 50, u-grid size = 100. (a) Computation time vs. v-grid size. (b) Estimated (u, v).

Fig. 2. Comparison on source identifiability: N_x = 3, N_y = 6, K = 10, SNR = 20 dB, L = 50, u- and v-grid sizes = 100 each.

Unless otherwise specified, the source location components (u_k, v_k), ∀k, are distinct. The H-MSBL algorithm proceeds by defining a grid on the u component, and estimates v in a gridless manner. The number of EM iterations performed is mentioned alongside the algorithm; e.g., an MSBL curve using 500 EM iterations is denoted by MSBL-500. Note that, during implementation, the combined (u, v)-grid size is less than M_u M_v, as points that violate the constraint u² + v² ≤ 1 are discarded from the dictionary Φ. All algorithms are initialized with γ_init = (‖YY^H/L‖_F / ‖ΦΦ^H‖_F)[1, …, 1]^T. All simulations are carried out in MATLAB 9.4.0.813654 on a Windows 10 system using a 2.7 GHz CPU. Experiment 1:
In this simulation, we wish to compare the computational complexity of the H-MSBL and MSBL algorithms. In fig. 1 we compare the computation time in seconds (left) and single-instance (u, v) localization performance (right) for H-MSBL and MSBL. As observed in the left plot, the computation time for MSBL grows with the size of the grid on the v component, M_v. On the other hand, the computation time of the H-MSBL algorithm does not depend on M_v. For applications such as DoA estimation it is desirable to define a fine grid, but the number of EM iterations required for convergence also grows. This can be seen in the right plot. Here, H-MSBL is able to localize the sources with high accuracy and low complexity, with fewer iterations than MSBL. Note that (non-EM) MSBL implementations may lead to faster convergence, but the complexity will still grow with M_v, unlike for H-MSBL.

Fig. 3. Rate of convergence: N_x = 4, N_y = 4, K = 6, SNR = 20 dB, L = 50, u- and v-grid sizes = 100 each.

Experiment 2:
In this simulation we compare the performance of the algorithms when multiple sources share the same u/v grid point. In fig. 2, we consider a 3 × 6 array and let the sources arrive from a grid of u-points and v-points. This is a challenging scenario, as even with 2000 EM iterations the MSBL estimates are slightly away from the true source locations. For H-MSBL, the v components for the multiple sources sharing the same u-grid point are estimated using root-MUSIC on the recovered B_i ∈ C^{6×6} matrices. The number of sources to be identified was provided. As can be seen, the proposed technique incurs no loss of source identifiability in this experiment. Experiment 3:
As pointed out in Section 3, the proposed H-MSBL algorithm introduces a smaller-dimensional latent variable X than MSBL. In this simulation we compare the rate of convergence of the two algorithms. In fig. 3 we compare the root mean squared error (RMSE) over the EM iterations for the H-MSBL and MSBL algorithms. The squared error is computed as (u_k − û_k)² + (v_k − v̂_k)², ∀k, and the mean is taken over all sources. As seen from the plot, the proposed algorithm has a faster rate of convergence. Note that the γ_i's are pruned if their value falls below a small threshold for both algorithms.
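Remark 2 and Experiment 2 extract the coupled v-components by applying root-MUSIC to a recovered block matrix B_i. A minimal single-block sketch is shown below; it implements a basic textbook root-MUSIC step (eigendecomposition, noise-subspace polynomial, roots near the unit circle) on a toy correlation matrix of our own construction, and the function name and selection heuristics are our own, not the paper's:

```python
import numpy as np

def root_music_v(B, K):
    """Extract K v-components from an Ny x Ny block correlation matrix
    via a basic root-MUSIC step (schematic, single-block sketch)."""
    Ny = B.shape[0]
    w, E = np.linalg.eigh(B)                 # eigenvalues in ascending order
    En = E[:, :Ny - K]                       # noise subspace
    C = En @ En.conj().T
    # Sum the diagonals of C to get the coefficients of D(z) * z^{Ny-1},
    # where D(z) = sum_{m,n} C[m,n] z^{n-m}; highest degree first for np.roots
    coeffs = np.array([np.trace(C, offset=k) for k in range(Ny - 1, -Ny, -1)])
    roots = np.roots(coeffs)
    inside = roots[np.abs(roots) <= 1.0]     # one root of each reciprocal pair
    # The K roots closest to the unit circle carry the harmonics e^{j*pi*v}
    sel = inside[np.argsort(1.0 - np.abs(inside))[:K]]
    return np.sort(np.angle(sel) / np.pi)    # v in (-1, 1]

# Toy example: rank-1 "source" at v = 0.4 plus a small diagonal load
Ny, v = 6, 0.4
a = np.exp(1j * np.pi * np.arange(Ny) * v)[:, None]
B = a @ a.conj().T + 0.01 * np.eye(Ny)
print(root_music_v(B, 1))                    # ≈ [0.4]
```

In the actual algorithm this step would be applied to each B_i selected by the top peaks of the recovered γ.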
5. CONCLUSION
In this paper, we considered the two-dimensional harmonic retrieval (HR) problem and highlighted the increased computational complexity requirements of the MSBL algorithm. We proposed a novel Bayesian strategy by imposing a suitable block sparsity prior. This enables a reduction in the effective grid size while maintaining the dimensionality of the measurement space. Thus we are able to identify many more sources than is possible by purely 1-D approaches, while at the same time achieving reduced complexity. We numerically demonstrated the superior performance of the proposed algorithm.

6. REFERENCES

[1] D. Nion and N. D. Sidiropoulos, "Tensor Algebra and Multidimensional Harmonic Retrieval in Signal Processing for MIMO Radar," IEEE Transactions on Signal Processing, vol. 58, no. 11, pp. 5693–5705, 2010.
[2] Xiangqian Liu, Nikos D. Sidiropoulos, and Tao Jiang, Multidimensional Harmonic Retrieval with Applications in MIMO Wireless Channel Sounding, chapter 2, pp. 41–75, John Wiley & Sons, Ltd, 2005.
[3] Y. Li, J. Razavilar, and K. J. R. Liu, "A high-resolution technique for multidimensional NMR spectroscopy," IEEE Transactions on Biomedical Engineering, vol. 45, no. 1, pp. 78–86, 1998.
[4] Y. Hua and T. K. Sarkar, "Matrix pencil method for estimating parameters of exponentially damped/undamped sinusoids in noise," IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 38, no. 5, pp. 814–824, 1990.
[5] M. Pesavento, C. S. Mecklenbrauker, and J. F. Bohme, "Multidimensional Rank Reduction Estimator for Parametric MIMO Channel Models," EURASIP Journal on Advances in Signal Processing, vol. 839148, no. 2, pp. 316–328, 2004.
[6] R. Roy and T. Kailath, "ESPRIT-estimation of signal parameters via rotational invariance techniques," IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 37, no. 7, pp. 984–995, 1989.
[7] M. D. Zoltowski, M. Haardt, and C. P. Mathews, "Closed-form 2-D angle estimation with rectangular arrays in element space or beamspace via unitary ESPRIT," IEEE Transactions on Signal Processing, vol. 44, no. 2, pp. 316–328, 1996.
[8] M. Haardt and J. A. Nossek, "Simultaneous Schur decomposition of several nonsymmetric matrices to achieve automatic pairing in multidimensional harmonic retrieval problems," IEEE Transactions on Signal Processing, vol. 46, no. 1, pp. 161–169, 1998.
[9] R. Schmidt, "Multiple emitter location and signal parameter estimation," IEEE Transactions on Antennas and Propagation, vol. 34, no. 3, pp. 276–280, 1986.
[10] H. L. Van Trees, Optimum Array Processing: Part IV of Detection, Estimation, and Modulation Theory, John Wiley & Sons, Ltd, p. 178, 2002.
[11] Alex B. Gershman, Michael Rübsamen, and Marius Pesavento, "One- and two-dimensional direction-of-arrival estimation: An overview of search-free techniques," Signal Processing, vol. 90, no. 5, pp. 1338–1349, 2010, Special Section on Statistical Signal & Array Processing.
[12] J. Liu and X. Liu, "An Eigenvector-Based Approach for Multidimensional Frequency Estimation With Improved Identifiability," IEEE Transactions on Signal Processing, vol. 54, no. 12, pp. 4543–4556, 2006.
[13] M. Sørensen and L. De Lathauwer, "Multidimensional harmonic retrieval via coupled canonical polyadic decomposition—Part I: Model and identifiability," IEEE Transactions on Signal Processing, vol. 65, no. 2, pp. 517–527, 2017.
[14] Tao Jiang, N. D. Sidiropoulos, and J. M. F. ten Berge, "Almost-sure identifiability of multidimensional harmonic retrieval," IEEE Transactions on Signal Processing, vol. 49, no. 9, pp. 1849–1859, 2001.
[15] Y. Hua, "A pencil-MUSIC algorithm for finding two-dimensional angles and polarizations using crossed dipoles," IEEE Transactions on Antennas and Propagation, vol. 41, no. 3, pp. 370–376, 1993.
[16] E. J. Candes, J. Romberg, and T. Tao, "Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information," IEEE Transactions on Information Theory, vol. 52, no. 2, pp. 489–509, 2006.
[17] D. L. Donoho, "Compressed sensing," IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289–1306, 2006.
[18] D. P. Wipf and B. D. Rao, "An Empirical Bayesian Strategy for Solving the Simultaneous Sparse Approximation Problem," IEEE Transactions on Signal Processing, vol. 55, no. 7, pp. 3704–3716, 2007.
[19] Michael E. Tipping, "Sparse Bayesian Learning and the Relevance Vector Machine," J. Mach. Learn. Res., vol. 1, pp. 211–244, Sept. 2001.
[20] Z. Zhang and B. D. Rao, "Sparse Signal Recovery With Temporally Correlated Source Vectors Using Sparse Bayesian Learning," IEEE Journal of Selected Topics in Signal Processing, vol. 5, no. 5, pp. 912–926, 2011.
[21] D. B. Rao and S. Kung, "A state space approach for the 2-D harmonic retrieval problem," in ICASSP '84. IEEE International Conference on Acoustics, Speech, and Signal Processing, 1984, vol. 9, pp. 174–176.
[22] R. R. Pote and B. D. Rao, "Robustness of Sparse Bayesian Learning in Correlated Environments," in