Some results on the Weiss-Weinstein bound for conditional and unconditional signal models in array processing
Dinh Thang VU, Alexandre RENAUX, Rémy BOYER, Sylvie MARCOS
Abstract
In this paper, the Weiss-Weinstein bound is analyzed in the context of source localization with a planar array of sensors. Both conditional and unconditional source signal models are studied. First, some results are given in the multiple sources context without specifying the structure of the steering matrix and of the noise covariance matrix. Moreover, the cases of a uniform and of a Gaussian prior are analyzed. Second, these results are applied to the particular case of a single source for two kinds of array geometries: a non-uniform linear array (elevation only) and an arbitrary planar (azimuth and elevation) array.
Index Terms
Weiss-Weinstein bound, DOA estimation, array processing.
I. INTRODUCTION
The source localization problem has been widely investigated in the literature, with many applications such as radar, sonar, medical imaging, etc. One of the objectives is to estimate the direction-of-arrival (DOA) of the sources using an array of sensors. In array processing, lower bounds on the mean square error are usually used as a benchmark to evaluate the ultimate performance of an estimator. There exist several lower bounds in the literature. Depending on the assumptions about the parameters of interest, there are three main kinds of lower bounds. When the parameters are assumed to be deterministic (unknown), the main lower bounds on the (local) mean square error are the well-known Cramér-Rao bound [2] and the Barankin bound [3] (more particularly their approximations [4] [5] [6] [7] [8]). When the parameters are assumed to be random with a known prior distribution, these lower bounds on the global mean square error are called Bayesian bounds [9]. Some typical families of Bayesian bounds are the
The authors are with Université Paris-Sud 11, CNRS, Laboratoire des Signaux et Systèmes, Supélec, 3 rue Joliot Curie, 91192 Gif-sur-Yvette Cedex, France (e-mail: {Vu,Renaux,Remy.Boyer,Marcos}@lss.supelec.fr). This project was funded by both région Île-de-France and the Digiteo Research Park. Section V-B2 of this paper has been partially presented in [1]. November 29, 2012 DRAFT
Ziv-Zakai family [10] [11] [12] and the Weiss-Weinstein family [13] [14] [15] [16]. Finally, when the parameter vector is made from both deterministic and random parameters, so-called hybrid bounds have been developed [17] [18] [19] [20].

Since DOA estimation is a non-linear problem, the outliers effect can appear, and the estimator's mean square error exhibits three distinct behaviors depending on the number of snapshots and/or on the signal-to-noise ratio (SNR) [21]. At high SNR and/or for a high number of snapshots, i.e., in the asymptotic region, the outliers effect can be neglected and the ultimate performance is described by the (classical/Bayesian/hybrid) Cramér-Rao bound. However, when the SNR and/or the number of snapshots decrease, the outliers effect leads to a quick increase of the mean square error: this is the so-called threshold effect. In this region, the behaviors of the lower bounds are not the same. Some bounds, generally called global bounds (Barankin, Ziv-Zakai, Weiss-Weinstein), can predict the threshold, while the others, called local bounds, like the Cramér-Rao bound or the Bhattacharyya bound, cannot. Finally, at low SNR and/or for a low number of snapshots, i.e., in the no-information region, the deterministic bounds exceed the estimator mean square error due to the fact that they do not take into account the parameter support. On the contrary, the Bayesian bounds exploit the parameter prior information, leading to a "real" lower bound on the global mean square error.

In this paper, we are interested in the Weiss-Weinstein bound, which is known to be, together with the bounds of the Ziv-Zakai family, one of the tightest Bayesian bounds. We will study the two main source models used in the literature [22]: the unconditional (or stochastic) model, where the source signals are assumed to be Gaussian, and the conditional (or deterministic) model, where the source signals are assumed to be deterministic.
Surprisingly, in the context of array processing, while closed-form expressions of the Ziv-Zakai bound (more precisely its extension by Bell et al. [23]) were proposed around 15 years ago for the unconditional model, the results concerning the Weiss-Weinstein bound are, most of the time, only obtained by way of numerical computations. Concerning the unconditional model, in [24], the Weiss-Weinstein bound has been evaluated numerically and has been compared to the mean square error of the MUSIC algorithm and of classical beamforming using a particular × element array antenna. In [25], the authors have introduced a numerical comparison between the Bayesian Cramér-Rao bound, the Ziv-Zakai bound and the Weiss-Weinstein bound for DOA estimation. In [26], numerical computations of the Weiss-Weinstein bound to optimize sensor positions for non-uniform linear arrays have been presented. Again in the unconditional model context, in [27], by considering the matched-field estimation problem, the authors have derived a semi closed-form expression of a simplified version of the Weiss-Weinstein bound for DOA estimation. Indeed, the integration over the prior probability density function was not performed. The conditional model (with known waveforms) is studied only in [28], where a closed-form expression of the WWB is given in the simple case of spectral analysis, and in [1], which deals with a simplified version of the bound. While the primary goal of this paper is to give closed-form expressions of the Weiss-Weinstein bound for the DOA estimation of a single source with an arbitrary planar array of sensors, under both conditional and unconditional source signal models, we also provide partial closed-form expressions of the bound which could be useful for other problems. First, we study the general Gaussian observation model with parameterized mean or parameterized covariance matrix.
Indeed, one of the successes of the Cramér-Rao bound is that, for this observation model, a closed-form expression of the Fisher information matrix is available: this is the so-called Slepian-Bangs formula [29]. Such formulas have been less investigated in the context of bounds tighter than the Cramér-Rao bound. Second, some results are given in the multiple sources context without specifying the structure of the steering matrix and of the noise covariance matrix. Finally, these results are applied to the particular case of a single source for two kinds of array geometries: the non-uniform linear array (elevation only) and the planar (azimuth and elevation) array. Consequently, the aim of this paper is also to provide a textbook of formulas which could be applied in other fields. The Weiss-Weinstein bound is known to depend on parameters called test points and on other parameters generally denoted s_i. One particularity of this paper in comparison with previous works on the Weiss-Weinstein bound is that we do not use the assumption s_i = 1/2, ∀i.

This paper is organized as follows. Section II is devoted to the array processing observation model which will be used in the paper. In Section III, a short background on the Weiss-Weinstein bound is presented and two general closed-form expressions which will be the cornerstone of our array processing problems are derived. In Section IV we apply these general results to the array processing problem without specifying the structure of the steering matrix. In Section V, we study the particular cases of the non-uniform linear array and of the planar array, for which we provide closed-form expressions of the bound. Some simulation results are proposed in Section VI. Finally, Section VII gives our conclusions.

II. PROBLEM SETUP
In this section, the general observation model used in array signal processing is presented, as well as the different assumptions used in the remainder of the paper. In particular, the so-called conditional and unconditional source models are emphasized.
A. Observation model
We consider the classical scenario of an array of M sensors which receives N complex bandpass signals s(t) = [s_1(t) s_2(t) ··· s_N(t)]^T. The output of the array is an M × 1 complex vector y(t) which can be modelled as follows (see, e.g., [30] or [22])

y(t) = A(θ)s(t) + n(t), t = 1, ..., T, (1)

where T is the number of snapshots, where θ = [θ_1 θ_2 ··· θ_q]^T is an unknown parameter vector of interest, where A(θ) is the so-called M × N steering matrix of the array response to the sources, and where the M × 1 random vector n(t) is an additive noise. Note that one source can be described by several parameters. Consequently, q > N in general.
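As an illustration, the observation model of Eqn. (1) can be simulated in a few lines. The uniform-linear-array steering matrix and all numerical values below are our own illustrative assumptions, not part of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, T = 8, 2, 100              # sensors, sources, snapshots (illustrative)
wavelength, spacing = 1.0, 0.5   # assumed half-wavelength inter-sensor spacing

def steering_matrix(thetas):
    # Columns are steering vectors a(theta_n); a ULA response is assumed here.
    m = np.arange(M)[:, None]
    return np.exp(2j * np.pi * spacing / wavelength * m * np.sin(thetas)[None, :])

thetas = np.array([0.2, -0.5])   # DOAs (radians)
A = steering_matrix(thetas)      # M x N steering matrix
S = (rng.standard_normal((N, T)) + 1j * rng.standard_normal((N, T))) / np.sqrt(2)
noise = 0.5 * (rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T))) / np.sqrt(2)
Y = A @ S + noise                # Eqn. (1), stacked over the T snapshots
```

The circular complex Gaussian draws follow the noise assumption of Section II-B; under model M_1 the rows of S would be drawn the same way.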
B. Assumptions

• The unknown parameters of interest are assumed to be random with an a priori probability density function p(θ_i), i = 1, ..., q. These random parameters are assumed to be statistically independent, such that the a priori joint probability density function is p(θ) = ∏_{i=1}^q p(θ_i). We also assume that the parameter space, denoted Θ, is a connected subset of R^q (see [31]).

• The noise vector is assumed to be complex Gaussian, statistically independent of the parameters, i.i.d., circular, with zero mean and known covariance matrix E[n(t)n^H(t)] = R_n. This assumption will be made more restrictive in Section V, where it will be assumed that R_n = σ_n² I. In any case, R_n is assumed to be a full rank matrix.

• The steering matrix A(θ) is assumed to be such that the observation model is identifiable. From Section III to Section IV, the structure of A(θ) is not specified in order to obtain the most general results.

• Concerning the source signals, two kinds of models have been investigated in the literature (see, e.g., [32] or [22]) and will be alternatively used in this paper.

– M_1: Unconditional or stochastic model: s(t) is assumed to be a complex circular random vector, i.i.d., statistically independent of the noise, Gaussian with zero mean and known covariance matrix E[s(t)s^H(t)] = R_s. Note that, concerning the previous results on the Cramér-Rao bound available in the literature [32], the covariance matrix R_s is assumed to be unknown. In this paper, we have made the simpler assumption that the covariance matrix R_s is known. These assumptions have already been used for the calculation of bounds more complex than the Cramér-Rao bound (see, e.g., [27], [33], [34]).

– M_2: Conditional or deterministic model: ∀t, s(t) is assumed to be deterministic and known. Note that, under the conditional model assumption, the signal waveforms can be assumed either unknown or known.
While the conditional observation model with unknown waveforms seems more challenging, the conditional model with known waveforms which will be used in this paper can be found in several applications, such as mobile telecommunications and radar (see, e.g., [35], [36], [37], [38], and [39]).

C. Likelihood of the observations
Let R_y = E[y(t)y^H(t)] be the covariance matrix of the observation vector y(t). According to the aforementioned assumptions, it is easy to see that, under M_1, the observations y(t) are distributed as a complex circular Gaussian random vector with zero mean and covariance matrix R_y(θ) = A(θ)R_sA^H(θ) + R_n, while, under M_2, the observations y(t) are distributed as a complex circular Gaussian random vector with mean A(θ)s(t) and covariance matrix R_y = R_n. Moreover, in both cases the observations are i.i.d. Therefore, the likelihood p(Y; θ) of the full observation matrix Y = [y(1) y(2) ... y(T)] under M_1 is given by

p(Y; θ) = 1 / (π^{MT} |R_y(θ)|^T) exp( − Σ_{t=1}^T y^H(t) R_y^{-1}(θ) y(t) ), (2)

where R_y(θ) = A(θ)R_sA^H(θ) + R_n, and the likelihood under M_2 is given by

p(Y; θ) = 1 / (π^{MT} |R_n|^T) exp( − Σ_{t=1}^T (y(t) − A(θ)s(t))^H R_n^{-1} (y(t) − A(θ)s(t)) ). (3)

III. WEISS-WEINSTEIN BOUND: GENERALITIES
In this Section, we first recall the structure of the Weiss-Weinstein bound on the mean square error and the assumptions used to compute this bound. Second, a general result about the Gaussian observation model with parameterized mean or parameterized covariance matrix, which, to the best of our knowledge, does not appear in the literature, is presented. This result will be useful to study both the unconditional model M_1 and the conditional model M_2 in the next Section.

A. Background
The Weiss-Weinstein bound for a q × 1 real parameter vector θ is a q × q matrix denoted WWB and is given as follows [40]

WWB = H G^{-1} H^T, (4)

where the q × q matrix H = [h_1 h_2 ... h_q] contains the so-called test points h_i, i = 1, ..., q, such that θ + h_i ∈ Θ ∀h_i. The (k,l)-th element of the q × q matrix G is given by

{G}_{k,l} = E[ (L^{s_k}(Y; θ + h_k, θ) − L^{1−s_k}(Y; θ − h_k, θ)) (L^{s_l}(Y; θ + h_l, θ) − L^{1−s_l}(Y; θ − h_l, θ)) ] / ( E[L^{s_k}(Y; θ + h_k, θ)] E[L^{s_l}(Y; θ + h_l, θ)] ), (5)

where the expectations are taken over the joint probability density function p(Y, θ) and where the function L(Y; θ + h_i, θ) is defined by L(Y; θ + h_i, θ) = p(Y, θ + h_i) / p(Y, θ). The elements s_i are such that s_i ∈ [0, 1], i = 1, ..., q. Note that we have the following order relation [40]

Cov(θ̂) = E[ (θ̂ − θ)(θ̂ − θ)^T ] ⪰ WWB, (6)

where A ⪰ B means that the matrix A − B is positive semi-definite and where Cov(θ̂) is the global (the expectation is taken over the joint pdf p(Y, θ)) mean square error of any estimator θ̂ of the parameter vector θ. Finally, in order to obtain a tight bound, one has to maximize WWB over the test points h_i and the s_i, i = 1, ..., q. Note that this maximization can be done by using the trace of H G^{-1} H^T or with respect to the Loewner partial ordering [41]. In this paper we will use the trace of H G^{-1} H^T, which is enough to obtain tight results.

B. A general result on the Weiss-Weinstein bound and its application to the Gaussian observation models
An analytical result on the Weiss-Weinstein bound which will be useful in the following derivations and which could be useful for other problems is derived in this part. Note that this result is independent of the parameter vector size q and of the considered observation model.
Let us denote Ω the observation space. By rewriting the elements of the matrix G (see Eqn. (5)) involved in the Weiss-Weinstein bound, one obtains for the numerator, denoted N{G}_{k,l},

N{G}_{k,l} = E[ (L^{s_k}(Y; θ + h_k, θ) − L^{1−s_k}(Y; θ − h_k, θ)) (L^{s_l}(Y; θ + h_l, θ) − L^{1−s_l}(Y; θ − h_l, θ)) ]
= ∫_Θ ∫_Ω p^{s_k}(Y, θ + h_k) p^{s_l}(Y, θ + h_l) p^{1−s_k−s_l}(Y, θ) dY dθ
+ ∫_Θ ∫_Ω p^{1−s_k}(Y, θ − h_k) p^{1−s_l}(Y, θ − h_l) p^{s_k+s_l−1}(Y, θ) dY dθ
− ∫_Θ ∫_Ω p^{s_k}(Y, θ + h_k) p^{1−s_l}(Y, θ − h_l) p^{s_l−s_k}(Y, θ) dY dθ
− ∫_Θ ∫_Ω p^{1−s_k}(Y, θ − h_k) p^{s_l}(Y, θ + h_l) p^{s_k−s_l}(Y, θ) dY dθ, (7)

and for the denominator, denoted D{G}_{k,l},

D{G}_{k,l} = E[L^{s_k}(Y; θ + h_k, θ)] E[L^{s_l}(Y; θ + h_l, θ)]
= ∫_Θ ∫_Ω p^{s_k}(Y, θ + h_k) p^{1−s_k}(Y, θ) dY dθ ∫_Θ ∫_Ω p^{s_l}(Y, θ + h_l) p^{1−s_l}(Y, θ) dY dθ. (8)

Let us now define a function η(α, β, u, v) as

η(α, β, u, v) = ∫_Θ ∫_Ω p^α(Y, θ + u) p^β(Y, θ + v) p^{1−α−β}(Y, θ) dY dθ, (9)

where (α, β) ∈ [0, 1]² and where (u, v) are two q × 1 vectors such that θ + u ∈ Θ and θ + v ∈ Θ. By identification, it is easy to see that

{G}_{k,l} = [ η(s_k, s_l, h_k, h_l) + η(1 − s_k, 1 − s_l, −h_k, −h_l) − η(s_k, 1 − s_l, h_k, −h_l) − η(1 − s_k, s_l, −h_k, h_l) ] / [ η(s_k, 0, h_k, 0) η(0, s_l, 0, h_l) ]. (10)

Note that we choose the arbitrary notation D{G}_{k,l} = η(s_k, 0, h_k, 0) η(0, s_l, 0, h_l) for the denominator. The notation D{G}_{k,l} = η(s_k, 0, h_k, 0) η(1, s_l, 0, h_l) or, even, D{G}_{k,l} = η(s_k, 0, h_k, v) η(0, s_l, u, h_l) would lead to the same result. With Eqn. (10), it is clear that the knowledge of η(α, β, u, v) for a particular problem leads to the Weiss-Weinstein bound (without the maximization procedure over the test points and over the parameters s_i).
Surprisingly, this simple expression is given in [40] only for s_i = 1/2, ∀i, and not for the general case. Let us now detail this function η(α, β, u, v). The function η(α, β, u, v) can be rewritten as

η(α, β, u, v) = ∫_Θ p^α(θ + u) p^β(θ + v) p^{1−α−β}(θ) [ ∫_Ω p^α(Y; θ + u) p^β(Y; θ + v) p^{1−α−β}(Y; θ) dY ] dθ
= ∫_Θ ´η_θ(α, β, u, v) p^α(θ + u) p^β(θ + v) p^{1−α−β}(θ) dθ, (11)

where we define

´η_θ(α, β, u, v) = ∫_Ω p^α(Y; θ + u) p^β(Y; θ + v) p^{1−α−β}(Y; θ) dY. (12)

Our aim is to give the most general result. Consequently, we will focus only on ´η_θ(α, β, u, v), since the a priori probability density function depends on the considered problem. An important remark pointed out in [31] is that the integration over the parameter space is with respect to the region {θ : p(θ) > 0}. However, since the functions being integrated are p(θ), p(θ + u), and p(θ + v), the actual region of integration (where all the functions are positive) is the intersection of three regions, {θ : p(θ) > 0} ∩ {θ : p(θ + u) > 0} ∩ {θ : p(θ + v) > 0}. Note that, in order to simplify the notation, we only use Θ throughout this paper, but this remark will be useful and explicitly specified in Section IV-B.
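Once η(α, β, u, v) is available for a given problem, Eqns. (4) and (10) assemble the bound mechanically. The sketch below (the helper name wwb_matrix is ours) assumes the user supplies an eta callable implementing Eqn. (9); the maximization over the h_i and s_i is left out:

```python
import numpy as np

def wwb_matrix(eta, h_list, s_list):
    """Assemble WWB = H G^{-1} H^T from eta(alpha, beta, u, v) via Eqn. (10)."""
    q = len(h_list)
    H = np.column_stack(h_list)                  # test points as columns
    G = np.empty((q, q))
    for k in range(q):
        for l in range(q):
            sk, sl = s_list[k], s_list[l]
            hk, hl = h_list[k], h_list[l]
            num = (eta(sk, sl, hk, hl) + eta(1 - sk, 1 - sl, -hk, -hl)
                   - eta(sk, 1 - sl, hk, -hl) - eta(1 - sk, sl, -hk, hl))
            den = eta(sk, 0.0, hk, 0 * hk) * eta(0.0, sl, 0 * hl, hl)
            G[k, l] = num / den
    return H @ np.linalg.solve(G, H.T)
```

A tight bound is then obtained by maximizing the trace of the returned matrix over h_list and s_list, as stated in Section III-A.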
1) Gaussian observation model with parameterized covariance matrix:
One calls (circular, i.i.d.) Gaussian observation model with parameterized covariance matrix a model such that the observations y(t) ∼ CN(0, R_y(θ)), where θ are the parameters of interest. Note that M_1 is a special case of this model, since the parameters of interest appear only in the covariance matrix of the observations, which has the particular structure R_y(θ) = A(θ)R_sA^H(θ) + R_n. The closed-form expression of ´η_θ(α, β, u, v) is given by

´η_θ(α, β, u, v) = |R_y(θ)|^{T(α+β−1)} / ( |R_y(θ + u)|^{Tα} |R_y(θ + v)|^{Tβ} |α R_y^{-1}(θ + u) + β R_y^{-1}(θ + v) − (α + β − 1) R_y^{-1}(θ)|^T ). (13)

The proof is given in Appendix A. Note that similar expressions are given in [23] (Eqn. (B.15)) and [42] (p. 67, Eqn. (52)) for the particular case where α = s and β = 1 − s.
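Eqn. (13) is easy to evaluate numerically. The helper below (our naming) computes it from the three covariance matrices; when R_y(θ + u) = R_y(θ + v) = R_y(θ), the expression collapses to 1, which provides a quick sanity check:

```python
import numpy as np

def eta_theta_cov(alpha, beta, R0, Ru, Rv, T):
    """Eqn. (13): R0 = R_y(theta), Ru = R_y(theta+u), Rv = R_y(theta+v)."""
    iR0, iRu, iRv = (np.linalg.inv(R) for R in (R0, Ru, Rv))
    num = np.linalg.det(R0) ** (T * (alpha + beta - 1))
    den = (np.linalg.det(Ru) ** (T * alpha)
           * np.linalg.det(Rv) ** (T * beta)
           * np.linalg.det(alpha * iRu + beta * iRv - (alpha + beta - 1) * iR0) ** T)
    return num / den
```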
2) Gaussian observation model with parameterized mean:
One calls (circular, i.i.d.) Gaussian observation model with parameterized mean a model such that the observations y(t) ∼ CN(f_t(θ), R_y), where θ are the parameters of interest. Note that M_2 is a special case of this model, since the parameters of interest appear only in the mean of the observations, which has the particular structure f_t(θ) = A(θ)s(t) (and R_y = R_n). The closed-form expression of ´η_θ(α, β, u, v) is given in this case by

ln ´η_θ(α, β, u, v) = − Σ_{t=1}^T [ α(1 − α) f_t^H(θ + u) R_y^{-1} f_t(θ + u) + β(1 − β) f_t^H(θ + v) R_y^{-1} f_t(θ + v) + (1 − α − β)(α + β) f_t^H(θ) R_y^{-1} f_t(θ) − 2Re{ αβ f_t^H(θ + u) R_y^{-1} f_t(θ + v) + α(1 − α − β) f_t^H(θ + u) R_y^{-1} f_t(θ) + β(1 − α − β) f_t^H(θ + v) R_y^{-1} f_t(θ) } ], (14)

or, equivalently, by

ln ´η_θ(α, β, u, v) = − Σ_{t=1}^T [ α(1 − α − β) ‖R_y^{−1/2}(f_t(θ + u) − f_t(θ))‖² + αβ ‖R_y^{−1/2}(f_t(θ + u) − f_t(θ + v))‖² + β(1 − α − β) ‖R_y^{−1/2}(f_t(θ + v) − f_t(θ))‖² ]. (15)

The details are given in Appendix B.

IV. GENERAL APPLICATION TO ARRAY PROCESSING
In the previous Section, it has been shown that the Weiss-Weinstein bound computation (or, at least, the computation of the matrix G) reduces to the knowledge of the function η(α, β, u, v) given by Eqn. (9). As one can see in Eqn. (10), the elements of the matrix G depend on η(α, β, u, v) for particular values of α, β, u, and v. Consequently, the goal of this Section is to detail these particular functions for our model given by Eqn. (1). Since Eqn. (9) can be decomposed into a deterministic part (in the sense that ´η_θ(α, β, u, v) (see Eqn. (12)) only depends on the likelihood function) and a Bayesian part (where we have to integrate ´η_θ(α, β, u, v) over the a priori probability density function of the parameters), we will first focus on the particular functions ´η_θ(α, β, u, v) by using the results of the previous Section on the Gaussian observation model with parameterized mean or covariance matrix. Second, we will detail the passage from ´η_θ(α, β, u, v) to η(α, β, u, v) in the particular case where p(θ_i) is a uniform probability density function ∀i. Another result will also be given in the case of a Gaussian prior.

A. Analysis of ´η_θ(α, β, u, v)

We will now detail the particular functions ´η_θ(α, β, u, v) involved in the different elements {G}_{k,l}, k, l ∈ {1, ..., q}, for both models M_1 and M_2.
1) Unconditional observation model M_1: Under the unconditional model M_1, by using Eqn. (13), one straightforwardly obtains the functions ´η_θ(α, β, u, v) involved in the elements {G}_{k,l} = {G}_{l,k}:

´η_θ(s_k, s_l, h_k, h_l) = |R_y(θ)|^{T(s_k+s_l−1)} / ( |R_y(θ + h_k)|^{T s_k} |R_y(θ + h_l)|^{T s_l} |s_k R_y^{-1}(θ + h_k) + s_l R_y^{-1}(θ + h_l) − (s_k + s_l − 1) R_y^{-1}(θ)|^T ),

´η_θ(1 − s_k, 1 − s_l, −h_k, −h_l) = |R_y(θ)|^{T(1−s_k−s_l)} / ( |R_y(θ − h_k)|^{T(1−s_k)} |R_y(θ − h_l)|^{T(1−s_l)} |(1 − s_k) R_y^{-1}(θ − h_k) + (1 − s_l) R_y^{-1}(θ − h_l) + (s_k + s_l − 1) R_y^{-1}(θ)|^T ),

´η_θ(s_k, 1 − s_l, h_k, −h_l) = |R_y(θ)|^{T(s_k−s_l)} / ( |R_y(θ + h_k)|^{T s_k} |R_y(θ − h_l)|^{T(1−s_l)} |s_k R_y^{-1}(θ + h_k) + (1 − s_l) R_y^{-1}(θ − h_l) − (s_k − s_l) R_y^{-1}(θ)|^T ),

´η_θ(1 − s_k, s_l, −h_k, h_l) = |R_y(θ)|^{T(s_l−s_k)} / ( |R_y(θ − h_k)|^{T(1−s_k)} |R_y(θ + h_l)|^{T s_l} |(1 − s_k) R_y^{-1}(θ − h_k) + s_l R_y^{-1}(θ + h_l) − (s_l − s_k) R_y^{-1}(θ)|^T ),

´η_θ(s_k, 0, h_k, 0) = |R_y(θ)|^{T(s_k−1)} / ( |R_y(θ + h_k)|^{T s_k} |s_k R_y^{-1}(θ + h_k) − (s_k − 1) R_y^{-1}(θ)|^T ),

´η_θ(0, s_l, 0, h_l) = |R_y(θ)|^{T(s_l−1)} / ( |R_y(θ + h_l)|^{T s_l} |s_l R_y^{-1}(θ + h_l) − (s_l − 1) R_y^{-1}(θ)|^T ). (16)

The diagonal elements of G are obtained by letting k = l in the above equations.
2) Conditional observation model M_2: Under the conditional model M_2, by using Eqn. (15) with f_t(θ) = A(θ)s(t) and R_y = R_n, one straightforwardly obtains the functions ´η_θ(α, β, u, v) involved in the elements {G}_{k,l} = {G}_{l,k}:

ln ´η_θ(s_k, s_l, h_k, h_l) = s_k(s_k + s_l − 1) ζ_θ(h_k, 0) + s_l(s_k + s_l − 1) ζ_θ(h_l, 0) − s_k s_l ζ_θ(h_k, h_l),

ln ´η_θ(1 − s_k, 1 − s_l, −h_k, −h_l) = (s_k − 1)(s_k + s_l − 1) ζ_θ(−h_k, 0) + (s_l − 1)(s_k + s_l − 1) ζ_θ(−h_l, 0) − (1 − s_k)(1 − s_l) ζ_θ(−h_k, −h_l),

ln ´η_θ(s_k, 1 − s_l, h_k, −h_l) = s_k(s_k − s_l) ζ_θ(h_k, 0) + (1 − s_l)(s_k − s_l) ζ_θ(−h_l, 0) + s_k(s_l − 1) ζ_θ(h_k, −h_l),

ln ´η_θ(1 − s_k, s_l, −h_k, h_l) = (s_k − 1)(s_k − s_l) ζ_θ(−h_k, 0) + s_l(s_l − s_k) ζ_θ(h_l, 0) + (s_k − 1) s_l ζ_θ(−h_k, h_l),

ln ´η_θ(s_k, 0, h_k, 0) = s_k(s_k − 1) ζ_θ(h_k, 0),

ln ´η_θ(0, s_l, 0, h_l) = s_l(s_l − 1) ζ_θ(h_l, 0), (17)

where we define

ζ_θ(µ, ρ) = Σ_{t=1}^T ‖R_n^{−1/2} (A(θ + µ) − A(θ + ρ)) s(t)‖². (18)

The diagonal elements of G are obtained by letting k = l in the above equations. Note that, since we are working on the matrix G, all the previously proposed results hold whatever the number of test points.
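The quantities in Eqns. (17)-(18) can be evaluated directly. The sketch below assumes, for illustration only, a uniform-linear-array steering matrix and white noise; zeta implements Eqn. (18) and ln_eta_kl the first line of Eqn. (17):

```python
import numpy as np

rng = np.random.default_rng(1)
M, N, T = 6, 2, 4                      # sensors, sources, snapshots (illustrative)
S = rng.standard_normal((N, T)) + 1j * rng.standard_normal((N, T))  # known waveforms
Rn_inv_sqrt = np.eye(M)                # R_n^{-1/2} for sigma_n^2 = 1 (assumption)

def A_of(theta):
    # Any smooth steering matrix works here; a ULA response in sin(theta) is assumed.
    m = np.arange(M)[:, None]
    return np.exp(2j * np.pi * 0.5 * m * np.sin(theta)[None, :])

def zeta(theta, mu, rho):
    # Eqn. (18): sum_t || R_n^{-1/2} (A(theta+mu) - A(theta+rho)) s(t) ||^2
    D = Rn_inv_sqrt @ (A_of(theta + mu) - A_of(theta + rho))
    return float(np.sum(np.abs(D @ S) ** 2))

def ln_eta_kl(theta, sk, sl, hk, hl):
    # First line of Eqn. (17)
    z = np.zeros_like(theta)
    return (sk * (sk + sl - 1) * zeta(theta, hk, z)
            + sl * (sk + sl - 1) * zeta(theta, hl, z)
            - sk * sl * zeta(theta, hk, hl))
```

Since ζ_θ(µ, ρ) is a non-negative quadratic quantity, ln ´η_θ ≤ 0 whenever s_k + s_l ≤ 1, consistent with ´η_θ being at most 1 there.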
B. Analysis of η(α, β, u, v) with a uniform prior

Of course, the analysis of η(α, β, u, v) given by Eqn. (11) can only be conducted by specifying the a priori probability density functions of the parameters. Consequently, the results provided here are very specific. However, note that, in general, this aspect is less emphasized in the literature, where most of the authors give results without specifying the prior probability density functions and compute the rest of the bound numerically (see, e.g., [27] [25] [43]). We assume that all the parameters θ_i have a uniform prior distribution over the interval [a_i, b_i] and are statistically independent. We will also assume one test point per parameter, since otherwise there is no possibility to obtain (pseudo) closed-form expressions. Consequently, the matrix H is such that

H = Diag([h_1 h_2 ··· h_q]), (19)

and the vector h_i, i = 1, ..., q, takes the value h_i at the i-th row and zero elsewhere. So, in this analysis, the vector u takes the value u_i at the i-th row and zero elsewhere, and the vector v takes the value v_j at the j-th row and zero elsewhere (of course, we can have i = j). Under these assumptions, η(α, β, u, v) can be rewritten for i ≠ j as

η(α, β, u, v) = ∫_Θ ´η_θ(α, β, u, v) p^α(θ_i + u_i) p^β(θ_j + v_j) p^β(θ_i) p^α(θ_j) p^{1−α−β}(θ_i) p^{1−α−β}(θ_j) ∏_{k=1, k≠i, k≠j}^q p(θ_k) dθ
= 1 / ∏_{k=1}^q (b_k − a_k) ∫_{Θ^{q−2}} ∫_{Θ_j} ∫_{Θ_i} ´η_θ(α, β, u, v) dθ_i dθ_j d(θ / {θ_i, θ_j}), (20)

where Θ_i = [a_i, b_i − u_i] if u_i > 0, [a_i − u_i, b_i] if u_i < 0, and Θ_j = [a_j, b_j − v_j] if v_j > 0, [a_j − v_j, b_j] if v_j < 0. For i = j, one can have v = ±u, and one then obtains

η(α, β, u, v = ±u) = ∫_Θ ´η_θ(α, β, u, v) p^α(θ_i + u_i) p^β(θ_i ± u_i) p^{1−α−β}(θ_i) ∏_{k=1, k≠i}^q p(θ_k) dθ
= 1 / ∏_{k=1}^q (b_k − a_k) ∫_{Θ^{q−1}} ∫_{Θ_i} ´η_θ(α, β, u, v = ±u) dθ_i d(θ / {θ_i}). (21)

In the last equation, if v = −u, then Θ_i = [a_i + u_i, b_i − u_i] if u_i > 0, [a_i − u_i, b_i + u_i] if u_i < 0, while, if v = u, then Θ_i = [a_i, b_i − u_i] if u_i > 0, [a_i − u_i, b_i] if u_i < 0. Depending on the structure of ´η_θ(α, β, u, v), η(α, β, u, v) has to be computed numerically or a closed-form expression can be found. In this case, one has to pay particular attention to the integration domain, as mentioned in Section III-B. This will not be the case for the Gaussian prior, since its support is R.
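When ´η_θ does not depend on θ_i, the inner integral over Θ_i reduces to the length of that interval, normalized by (b_i − a_i). A quick numerical sanity check of this overlap length (our own illustration, with illustrative interval and shift values):

```python
import numpy as np

def overlap_factor(a, b, u):
    # |Theta_i| / (b - a), with Theta_i = [a, b-u] if u > 0 and [a-u, b] if u < 0
    lo, hi = (a, b - u) if u > 0 else (a - u, b)
    return max(hi - lo, 0.0) / (b - a)

# Brute force: fraction of theta in [a, b] such that theta + u also lies in [a, b]
a, b, u = -1.0, 2.0, 0.4
grid = np.linspace(a, b, 1_000_001)
frac = np.mean((grid + u >= a) & (grid + u <= b))
```

The closed-form factor (b − a − |u|)/(b − a) and the brute-force fraction agree up to the grid resolution.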
Another particular case which appears sometimes is when the function ´η_θ(α, β, u, v) does not depend on θ (see [28] [9] [12] [23] [25] [26] [31] [33] and Section V of this paper). In this case, ´η_θ(α, β, u, v) is denoted ´η(α, β, u, v) and one obtains from Eqn. (20)

η(α, β, u, v) = ´η(α, β, u, v) / ∏_{k=1}^q (b_k − a_k) × ∏_{k=1, k≠i, k≠j}^q ∫_{a_k}^{b_k} dθ_k ∫_{Θ_i} dθ_i ∫_{Θ_j} dθ_j
= ( (b_i − a_i − |u_i|)(b_j − a_j − |v_j|) / ((b_i − a_i)(b_j − a_j)) ) ´η(α, β, u, v), (22)

and from Eqn. (21)

η(α, β, u, v = u) = ( (b_i − a_i − |u_i|) / (b_i − a_i) ) ´η(α, β, u, v), (23)

and

η(α, β, u, v = −u) = ( (b_i − a_i − 2|u_i|) / (b_i − a_i) ) ´η(α, β, u, v). (24)

C. Analysis of η(α, β, u, v) with a Gaussian prior

Finally, one can mention that, if the prior is now assumed to be Gaussian, i.e., θ_i ∼ N(µ_i, σ_i²) ∀i, and ´η_θ(α, β, u, v) does not depend on θ, one obtains after a straightforward calculation

η(α, β, u, v) = ´η(α, β, u, v) ∫_R p^α(θ_i + u_i) p^{1−α}(θ_i) dθ_i ∫_R p^β(θ_j + v_j) p^{1−β}(θ_j) dθ_j
= ´η(α, β, u, v) exp( − ( α(1 − α) u_i² / (2σ_i²) + β(1 − β) v_j² / (2σ_j²) ) ), (25)

η(α, β, u, v = u) = ´η(α, β, u, v) ∫_R p^{α+β}(θ_i + u_i) p^{1−α−β}(θ_i) dθ_i
= ´η(α, β, u, v) exp( − (α + β)(1 − α − β) u_i² / (2σ_i²) ), (26)

and

η(α, β, u, v = −u) = ´η(α, β, u, v) ∫_R p^α(θ_i + u_i) p^β(θ_i − u_i) p^{1−α−β}(θ_i) dθ_i
= ´η(α, β, u, v) exp( − (α + β − α² − β² + 2αβ) u_i² / (2σ_i²) ). (27)

V. SPECIFIC APPLICATIONS TO ARRAY PROCESSING: DOA ESTIMATION
We now consider the application of the Weiss-Weinstein bound in the particular context of source localization. Indeed, until now, the structure of the steering matrix A(θ) for a particular problem has not been used in the proposed (semi) closed-form expressions. Consequently, these previous results can be applied to a large class of estimation problems, such as far-field and near-field source localization, passive localization with polarized arrays of sensors, or radar (known waveforms).
Here, we want to focus on the direction-of-arrival estimation of a single source in the far-field with a narrow-band signal. In this case, the steering matrix A(θ) becomes a steering vector denoted a(θ) (except for one preliminary result concerning the conditional model, which will be given whatever the number of sources, in Section V-A2). The structure of this vector will be specified by the analysis of two kinds of array geometry: the non-uniform linear array, from which only one angle-of-arrival can be estimated (θ becomes a scalar), and the arbitrary planar array, from which both azimuth and elevation can be estimated (θ becomes a 2 × 1 vector). In any case, the array always consists of M identical, omnidirectional sensors. Both models M_1 and M_2 will be considered, and the noise will be assumed spatially uncorrelated: R_n = σ_n² I. Since we focus on the single source scenario, the variance of the source signal s(t) is denoted σ_s² for the model M_1. The general structure of the i-th element of the steering vector is as follows:

{a(θ)}_i = exp( j (2π/λ) r_i^T θ ), i = 1, ..., M, (28)

where θ represents the parameter vector, where λ denotes the wavelength, and where r_i denotes the coordinates of the i-th sensor position with respect to a given referential. In the following, r_i will be a scalar or a 2 × 1 vector depending on the context (linear array or planar array).

A. Preliminary results
Since our analysis is now reduced to the single source case, we give here some other closed-form expressions which will be useful when we detail the specific linear and planar arrays.
1) Unconditional observation model M_1: In order to detail the set of functions ´η_θ given by Eqn. (16), one has to find closed-form expressions of the determinant |R_y(θ + u)| and of determinants having the following structures: |m_1 R_y^{-1}(θ_1) + m_2 R_y^{-1}(θ_2)| with m_1 + m_2 = 1, or |m_1 R_y^{-1}(θ_1) + m_2 R_y^{-1}(θ_2) + m_3 R_y^{-1}(θ_3)| with m_1 + m_2 + m_3 = 1. Under M_1, the observation covariance matrix is now given by

R_y(θ) = σ_s² a(θ) a^H(θ) + σ_n² I_M. (29)

Concerning the calculation of |R_y(θ + u)|, it is easy to find

|R_y(θ + u)| = σ_n^{2M} ( 1 + (σ_s²/σ_n²) ‖a(θ + u)‖² ). (30)

Moreover, after calculations detailed in Appendix C, one obtains for the other determinants

|m_1 R_y^{-1}(θ_1) + m_2 R_y^{-1}(θ_2)| = (1/σ_n^{2M}) ( 1 − m_1 φ_1 ‖a(θ_1)‖² − m_2 φ_2 ‖a(θ_2)‖² − m_1 φ_1 m_2 φ_2 ( |a^H(θ_1) a(θ_2)|² − ‖a(θ_1)‖² ‖a(θ_2)‖² ) ) (31)

and

|m_1 R_y^{-1}(θ_1) + m_2 R_y^{-1}(θ_2) + m_3 R_y^{-1}(θ_3)| = (1/σ_n^{2M}) ( 1 − Σ_{k=1}^3 m_k φ_k ‖a(θ_k)‖² − (1/2) Σ_{k=1}^3 Σ_{k'=1, k'≠k}^3 m_k φ_k m_{k'} φ_{k'} ( |a^H(θ_k) a(θ_{k'})|² − ‖a(θ_k)‖² ‖a(θ_{k'})‖² ) − ( ∏_{k=1}^3 m_k φ_k ) ( ∏_{k=1}^3 ‖a(θ_k)‖² − (1/2) Σ_{k=1}^3 Σ_{k'=1, k'≠k}^3 Σ_{k''=1, k''≠k', k''≠k}^3 |a^H(θ_k) a(θ_{k'})|² ‖a(θ_{k''})‖² + a^H(θ_1) a(θ_2) a^H(θ_2) a(θ_3) a^H(θ_3) a(θ_1) + a^H(θ_1) a(θ_3) a^H(θ_3) a(θ_2) a^H(θ_2) a(θ_1) ) ), (32)

where

φ_k = σ_s² / (σ_s² ‖a(θ_k)‖² + σ_n²), k = 1, 2, 3. (33)
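Eqns. (30) and (31) can be validated against a direct determinant computation. The unit-modulus steering vectors and power values below are illustrative assumptions of ours (for unit-modulus sensors, ‖a‖² = M):

```python
import numpy as np

rng = np.random.default_rng(2)
M = 5
sigma_s2, sigma_n2 = 1.5, 0.7            # source and noise powers (illustrative)

a1 = np.exp(1j * rng.uniform(0, 2 * np.pi, M))   # stand-ins for a(theta_1), a(theta_2)
a2 = np.exp(1j * rng.uniform(0, 2 * np.pi, M))
R1 = sigma_s2 * np.outer(a1, a1.conj()) + sigma_n2 * np.eye(M)
R2 = sigma_s2 * np.outer(a2, a2.conj()) + sigma_n2 * np.eye(M)

# Eqn. (30): |R_y| = sigma_n^{2M} (1 + sigma_s^2/sigma_n^2 ||a||^2), here ||a||^2 = M
det30 = sigma_n2 ** M * (1 + sigma_s2 / sigma_n2 * M)

# Eqn. (31) with m1 + m2 = 1 and phi_k from Eqn. (33)
m1, m2 = 0.3, 0.7
phi = lambda a: sigma_s2 / (sigma_s2 * np.linalg.norm(a) ** 2 + sigma_n2)
g12 = np.abs(a1.conj() @ a2) ** 2
det31 = (1 / sigma_n2 ** M) * (1 - m1 * phi(a1) * M - m2 * phi(a2) * M
                               - m1 * phi(a1) * m2 * phi(a2) * (g12 - M * M))
```

Both identities follow from the matrix determinant lemma applied to the rank-one structure of Eqn. (29).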
2) Conditional observation model: Note that the results proposed here hold for any number of sources. Under the conditional model, the set of functions ή_θ given by Eqn. (17) is linked to the function ζ_θ(μ, ρ) given by Eqn. (18). In this analysis, the vector μ takes the value μ_i at the ith row and zero elsewhere, and the vector ρ takes the value ρ_j at the jth row and zero elsewhere (of course, one can have i = j). In Appendix D, the calculation of the following closed-form expressions for ζ_θ(μ, ρ) is detailed.

• If (m − 1)p + 1 ≤ i, j ≤ mp, where p denotes the number of parameters per source, then we have

ζ_θ(μ, ρ) = Σ_{t=1}^{T} |{s(t)}_m|² Σ_{i=1}^{M} Σ_{j=1}^{M} {R_n⁻¹}_{i,j} exp( j (2π/λ) (r_j − r_i)^T θ_m ) × ( exp( −j (2π/λ) r_i^T μ_m ) − exp( −j (2π/λ) r_i^T ρ_m ) ) ( exp( j (2π/λ) r_j^T μ_m ) − exp( j (2π/λ) r_j^T ρ_m ) ). (34)

• Otherwise, if (m − 1)p + 1 ≤ i ≤ mp and (n − 1)p + 1 ≤ j ≤ np with m ≠ n, then we have

ζ_θ(μ, ρ) = Σ_{t=1}^{T} |{s(t)}_m|² Σ_{i=1}^{M} Σ_{j=1}^{M} {R_n⁻¹}_{i,j} exp( j (2π/λ) (r_j − r_i)^T θ_m ) exp( −j (2π/λ) r_i^T μ_m ) exp( j (2π/λ) r_j^T μ_m )
+ Σ_{t=1}^{T} |{s(t)}_n|² Σ_{i=1}^{M} Σ_{j=1}^{M} {R_n⁻¹}_{i,j} exp( j (2π/λ) (r_j − r_i)^T θ_n ) exp( −j (2π/λ) r_i^T ρ_n ) exp( j (2π/λ) r_j^T ρ_n )
− 2 Re{ Σ_{t=1}^{T} {s(t)}_m^* {s(t)}_n Σ_{i=1}^{M} Σ_{j=1}^{M} {R_n⁻¹}_{i,j} exp( j (2π/λ) (r_j^T θ_n − r_i^T θ_m) ) exp( −j (2π/λ) r_i^T μ_m ) exp( j (2π/λ) r_j^T ρ_n ) }. (35)
In particular, if one assumes R_n = σ_n² I, then several simplifications can be made:

• If (m − 1)p + 1 ≤ i, j ≤ mp, then

ζ_θ(μ, ρ) = (1/σ_n²) Σ_{i=1}^{M} | exp( −j (2π/λ) r_i^T μ_m ) − exp( −j (2π/λ) r_i^T ρ_m ) |² Σ_{t=1}^{T} |{s(t)}_m|², (36)
Fig. 1. 3D source localization using a planar array antenna.

where we note that the function ζ_θ(μ, ρ) does not depend on the parameter θ.

• Otherwise, if (m − 1)p + 1 ≤ i ≤ mp and (n − 1)p + 1 ≤ j ≤ np with m ≠ n, then

ζ_θ(μ, ρ) = (1/σ_n²) Σ_{i=1}^{M} | exp( −j (2π/λ) r_i^T μ_m ) |² Σ_{t=1}^{T} |{s(t)}_m|² + (1/σ_n²) Σ_{i=1}^{M} | exp( −j (2π/λ) r_i^T ρ_n ) |² Σ_{t=1}^{T} |{s(t)}_n|² − (2/σ_n²) Re{ Σ_{i=1}^{M} exp( j (2π/λ) r_i^T (θ_n − θ_m) ) exp( −j (2π/λ) r_i^T μ_m ) exp( j (2π/λ) r_i^T ρ_n ) Σ_{t=1}^{T} {s(t)}_m^* {s(t)}_n }. (37)

It is clear that the above formulas, for both the unconditional and the conditional models, can be applied to any kind of array geometry and any number of sources. However, they generally depend on the parameter vector θ. This means that, in general, the calculation of the set of functions η will have to be performed numerically (except if one is able to find a closed-form expression of Eqn. (11)). In the following, we present a kind of array geometry for which, fortunately, the set of functions ή_θ does not depend on θ, leading to a straightforward calculation of the bound.

B. 3D source localization with a planar array
We first consider the problem of DOA estimation of a single narrow-band source in the far-field area using an arbitrary planar array. We start with this general setting because the non-uniform linear array is clearly a particular case of it. Without loss of generality, we assume that the sensors of this array lie in the xOy plane with Cartesian coordinates (see Fig. 1). Therefore, the vector r_i contains the coordinates of the ith sensor position with respect to this reference frame, i.e., r_i = [d_{x_i} d_{y_i}]^T, i = 1, …, M. From (28), the steering vector is given by

a(θ) = [ exp( j (2π/λ) (d_{x_1} u + d_{y_1} v) ) … exp( j (2π/λ) (d_{x_M} u + d_{y_M} v) ) ]^T, (38)

where, as in [23], the parameter vector of interest is θ = [u v]^T with

u = sin ϕ cos φ, v = sin ϕ sin φ, (39)

and where ϕ and φ represent the elevation and azimuth angles of the source, respectively. The parameter space is such that u ∈ [−1, 1] and v ∈ [−1, 1]. Therefore, we assume that they both follow a uniform distribution over [−1, 1]. Note that, from a physical point of view, it would be more tempting to choose a uniform prior for ϕ and φ. This would lead to probability density functions for u and v which are not uniform. To the best of our knowledge, this assumption has only been used in the context of lower bounds in [25]. Unfortunately, such a prior leads to an intractable expression of the bound (see Eqn. (21) of [25]). Consequently, some authors have not specified the prior, leading to semi closed-form expressions of the bounds (i.e., a numerical integration over the parameters remains to be performed) [25] [43] [27]. On the other hand, in order to obtain a closed-form expression, authors have generally used a simplifying assumption, i.e., a uniform prior directly on u and v (see, for example, [26] [44]).
In this paper, we have followed the same approach, expecting only a slight modification of the performance with respect to a more physical model, and in order to be able to obtain closed-form expressions of the bound.

We choose the matrix of test points such that

H = [h_u h_v] = [ h_u 0 ; 0 h_v ]. (40)

Then, we have θ + h_u = [u + h_u v]^T and θ + h_v = [u v + h_v]^T. Moreover, we now have two elements s_i ∈ [0, 1], i = 1, 2, for which we will prefer the notations s_u and s_v, respectively.
1) Unconditional observation model: Under the unconditional model, let us set U_SNR = σ_s⁴ / ( σ_n² (M σ_s² + σ_n²) ). In order to write the elements of G compactly, define, for a pair of weights (α, β) and a pair of test-point displacements (δ₁, δ₂) around θ,

η(α, β; δ₁, δ₂) = ( 1 + U_SNR [ αβ B(δ₁ − δ₂) + α(1 − α − β) B(δ₁) + β(1 − α − β) B(δ₂) ] − αβ(1 − α − β) U_SNR² (σ_n²/σ_s²) Γ(δ₁, δ₂) )^{−T},

where B(δ) = M² − |Σ_{k=1}^{M} exp( −j (2π/λ) (d_{x_k} δ_u + d_{y_k} δ_v) )|², and Γ(δ₁, δ₂) = M³ + 2 Re{ a^H(θ+δ₁)a(θ+δ₂) a^H(θ+δ₂)a(θ) a^H(θ)a(θ+δ₁) } − M ( |a^H(θ+δ₁)a(θ+δ₂)|² + |a^H(θ+δ₁)a(θ)|² + |a^H(θ+δ₂)a(θ)|² ). All of these inner products are of the form Σ_k exp( −j (2π/λ) (d_{x_k} Δ_u + d_{y_k} Δ_v) ) for the appropriate displacement difference Δ, and therefore do not depend on θ. With the single-displacement shorthand η(α; δ) = η(α, 0; δ, 0) = ( 1 + α(1 − α) U_SNR B(δ) )^{−T}, the closed-form expressions of the elements of the matrix G are given by (see Appendix E for the proof):

{G}_uu = [ 2(2 − |h_u|) ( η(2s_u; (h_u, 0)) + η(2 − 2s_u; (h_u, 0)) ) − 8(1 − |h_u|) η(s_u; (2h_u, 0)) ] / [ (2 − |h_u|)² η(s_u; (h_u, 0))² ], (41)

{G}_vv = [ 2(2 − |h_v|) ( η(2s_v; (0, h_v)) + η(2 − 2s_v; (0, h_v)) ) − 8(1 − |h_v|) η(s_v; (0, 2h_v)) ] / [ (2 − |h_v|)² η(s_v; (0, h_v))² ], (42)

{G}_uv = [ η(s_u, s_v; (h_u, 0), (0, h_v)) − η(s_u, 1 − s_v; (h_u, 0), (0, −h_v)) − η(1 − s_u, s_v; (−h_u, 0), (0, h_v)) + η(1 − s_u, 1 − s_v; (−h_u, 0), (0, −h_v)) ] / [ η(s_u; (h_u, 0)) η(s_v; (0, h_v)) ], (43)

and, of course, {G}_uv = {G}_vu. Consequently, the unconditional Weiss-Weinstein bound is the 2 × 2 matrix given by

UWWB = H G⁻¹ H^T = ( 1 / ( {G}_uu {G}_vv − {G}_uv² ) ) [ h_u² {G}_vv −h_u h_v {G}_uv ; −h_u h_v {G}_uv h_v² {G}_uu ], (44)

which has to be optimized over s_u, s_v, h_u, and h_v. Concerning the optimization over s_u and s_v, several other works in the literature have suggested simply using s_u = s_v = 1/2. Most of the time, numerical simulations of this simplified bound compared with the bound obtained after optimization over s_u and s_v lead to the same results, although there is no formal proof of this fact (see [9], page 41, footnote 17).
Note that, thanks to the expressions obtained in the next Section concerning the linear array, we will be able to prove that s = 1/2 is a (maybe not unique) optimal choice for any linear array. In the case of the planar array treated in this Section, we will only check this property by simulation.

In the particular case where s_u = s_v = 1/2, one obtains the following simplified expressions:

{G}_uu = [ 4(2 − |h_u|) − 8(1 − |h_u|) ( 1 + (U_SNR/4) ( M² − |Σ_{k=1}^{M} exp( −j (4π/λ) d_{x_k} h_u )|² ) )^{−T} ] / [ (2 − |h_u|)² ( 1 + (U_SNR/4) ( M² − |Σ_{k=1}^{M} exp( −j (2π/λ) d_{x_k} h_u )|² ) )^{−2T} ], (45)

{G}_vv = [ 4(2 − |h_v|) − 8(1 − |h_v|) ( 1 + (U_SNR/4) ( M² − |Σ_{k=1}^{M} exp( −j (4π/λ) d_{y_k} h_v )|² ) )^{−T} ] / [ (2 − |h_v|)² ( 1 + (U_SNR/4) ( M² − |Σ_{k=1}^{M} exp( −j (2π/λ) d_{y_k} h_v )|² ) )^{−2T} ], (46)

and

{G}_uv = 2 [ ( 1 + (U_SNR/4) ( M² − |Σ_{k=1}^{M} exp( −j (2π/λ) (d_{x_k} h_u − d_{y_k} h_v) )|² ) )^{−T} − ( 1 + (U_SNR/4) ( M² − |Σ_{k=1}^{M} exp( −j (2π/λ) (d_{x_k} h_u + d_{y_k} h_v) )|² ) )^{−T} ] / [ ( 1 + (U_SNR/4) ( M² − |Σ_{k=1}^{M} exp( −j (2π/λ) d_{x_k} h_u )|² ) )^{−T} ( 1 + (U_SNR/4) ( M² − |Σ_{k=1}^{M} exp( −j (2π/λ) d_{y_k} h_v )|² ) )^{−T} ]. (47)

Again, the Weiss-Weinstein bound is obtained by using the above expressions in Eqn. (44) and after an optimization over the test points. This optimization can be done over a search grid or by using the ambiguity diagram of the array in order to significantly reduce the computational cost (see [18], [27], [34], [45], [46]).
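As an illustration, the simplified expressions (45)-(47) can be evaluated numerically and plugged into Eqn. (44). The sketch below uses a toy symmetric "cross" planar geometry and hypothetical SNR and snapshot values, and evaluates the bound at a single candidate test point; in practice the result must still be maximized over (h_u, h_v):

```python
import numpy as np

M, T, lam = 5, 20, 1.0
# Hypothetical cross-shaped planar array (one sensor at the origin).
dx = np.array([0.0, 0.5, -0.5, 0.0, 0.0])
dy = np.array([0.0, 0.0, 0.0, 0.5, -0.5])
ss2, sn2 = 1.0, 1.0
U = ss2**2 / (sn2*(M*ss2 + sn2))                        # U_SNR

def Q(hu, hv):
    """(1 + U_SNR/4 (M^2 - |sum_k e^{-j2pi(dx_k hu + dy_k hv)/lam}|^2))^{-T}."""
    s = np.sum(np.exp(-1j*2*np.pi/lam*(dx*hu + dy*hv)))
    return (1.0 + 0.25*U*(M**2 - abs(s)**2))**(-T)

def G(hu, hv):                                          # Eqns. (45)-(47)
    guu = (4*(2-abs(hu)) - 8*(1-abs(hu))*Q(2*hu, 0)) / ((2-abs(hu))**2 * Q(hu, 0)**2)
    gvv = (4*(2-abs(hv)) - 8*(1-abs(hv))*Q(0, 2*hv)) / ((2-abs(hv))**2 * Q(0, hv)**2)
    guv = 2*(Q(hu, -hv) - Q(hu, hv)) / (Q(hu, 0)*Q(0, hv))
    return np.array([[guu, guv], [guv, gvv]])

hu = hv = 0.05                                          # one candidate test point
H = np.diag([hu, hv])
wwb = H @ np.linalg.inv(G(hu, hv)) @ H.T                # Eqn. (44); sup over h remains
```

For this x/y-symmetric toy geometry the phase sums are even in h_v, so {G}_uv vanishes and the bound decouples in u and v; an asymmetric array would couple the two parameters.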
2) Conditional observation model: Under the conditional model, let us set C_SNR = (1/σ_n²) Σ_{t=1}^{T} |s(t)|². Define γ(δ) = M − Σ_{k=1}^{M} cos( (2π/λ) (d_{x_k} δ_u + d_{y_k} δ_v) ) for a planar displacement δ = (δ_u, δ_v), and, for a pair of weights and displacements,

κ(α, β; δ₁, δ₂) = exp( −2 C_SNR [ αβ γ(δ₁ − δ₂) + α(1 − α − β) γ(δ₁) + β(1 − α − β) γ(δ₂) ] ),

with the single-displacement shorthand κ(α; δ) = κ(α, 0; δ, 0) = exp( −2α(1 − α) C_SNR γ(δ) ). The closed-form expressions of the elements of the matrix G are then given by (see Appendix F for the proof):

{G}_uu = [ 2(2 − |h_u|) ( κ(2s_u; (h_u, 0)) + κ(2 − 2s_u; (h_u, 0)) ) − 8(1 − |h_u|) κ(s_u; (2h_u, 0)) ] / [ (2 − |h_u|)² κ(s_u; (h_u, 0))² ], (48)

{G}_vv = [ 2(2 − |h_v|) ( κ(2s_v; (0, h_v)) + κ(2 − 2s_v; (0, h_v)) ) − 8(1 − |h_v|) κ(s_v; (0, 2h_v)) ] / [ (2 − |h_v|)² κ(s_v; (0, h_v))² ], (49)

{G}_uv = [ κ(s_u, s_v; (h_u, 0), (0, h_v)) − κ(s_u, 1 − s_v; (h_u, 0), (0, −h_v)) − κ(1 − s_u, s_v; (−h_u, 0), (0, h_v)) + κ(1 − s_u, 1 − s_v; (−h_u, 0), (0, −h_v)) ] / [ κ(s_u; (h_u, 0)) κ(s_v; (0, h_v)) ], (50)

and {G}_uv = {G}_vu. Consequently, the conditional Weiss-Weinstein bound is the 2 × 2 matrix given by using the above equations in Eqn. (44). As for the unconditional case, if we set s_u = s_v = 1/2, one obtains the following simplified expressions:

{G}_uu = [ 4(2 − |h_u|) − 8(1 − |h_u|) exp( −(C_SNR/2) ( M − Σ_{k=1}^{M} cos( (4π/λ) d_{x_k} h_u ) ) ) ] / [ (2 − |h_u|)² exp( −C_SNR ( M − Σ_{k=1}^{M} cos( (2π/λ) d_{x_k} h_u ) ) ) ], (51)

{G}_vv = [ 4(2 − |h_v|) − 8(1 − |h_v|) exp( −(C_SNR/2) ( M − Σ_{k=1}^{M} cos( (4π/λ) d_{y_k} h_v ) ) ) ] / [ (2 − |h_v|)² exp( −C_SNR ( M − Σ_{k=1}^{M} cos( (2π/λ) d_{y_k} h_v ) ) ) ], (52)

and

{G}_uv = 2 [ exp( −(C_SNR/2) ( M − Σ_{k=1}^{M} cos( (2π/λ) (d_{x_k} h_u − d_{y_k} h_v) ) ) ) − exp( −(C_SNR/2) ( M − Σ_{k=1}^{M} cos( (2π/λ) (d_{x_k} h_u + d_{y_k} h_v) ) ) ) ] / exp( −(C_SNR/2) ( 2M − Σ_{k=1}^{M} cos( (2π/λ) d_{x_k} h_u ) − Σ_{k=1}^{M} cos( (2π/λ) d_{y_k} h_v ) ) ). (53)

By using the above expressions in Eqn. (44) and after an optimization over the test points, one obtains the Weiss-Weinstein bound.

C. Source localization with a non-uniform linear array
We now briefly consider the DOA estimation of a single narrow-band source in the far-field area using a non-uniform linear array antenna. Without loss of generality, let us assume that the linear array lies on the Ox axis of the coordinate system (see Fig. 1); consequently, d_{y_i} = 0, ∀i. The sensor position vector is denoted [d_{x_1} … d_{x_M}]. By letting θ = sin ϕ, where ϕ denotes the elevation angle of the source, the steering vector is then given by

a(θ) = [ exp( j (2π/λ) d_{x_1} θ ) … exp( j (2π/λ) d_{x_M} θ ) ]^T. (54)

We assume that the parameter θ follows a uniform distribution over [−1, 1]. As in Section V-B, and since the parameter of interest is a scalar, the matrix H of test points becomes a scalar denoted h_θ. In the same way, there is only one element s_i ∈ [0, 1], which will simply be denoted s. The closed-form expressions given here are straightforwardly obtained from the aforementioned results on the planar array concerning the element denoted {G}_uu. We will continue to use the previously introduced notations U_SNR = σ_s⁴ / ( σ_n² (M σ_s² + σ_n²) ) and C_SNR = (1/σ_n²) Σ_{t=1}^{T} |s(t)|².
1) Unconditional observation model: The closed-form expression of the unconditional Weiss-Weinstein bound, denoted UWWB, is given by

UWWB = h_θ² (2 − |h_θ|)² ( 1 + s(1 − s) U_SNR ( M² − |Σ_{k=1}^{M} exp( −j (2π/λ) d_{x_k} h_θ )|² ) )^{−2T} / [ 2(2 − |h_θ|) ( ( 1 + 2s(1 − 2s) U_SNR ( M² − |Σ_{k=1}^{M} exp( −j (2π/λ) d_{x_k} h_θ )|² ) )^{−T} + ( 1 + 2(1 − s)(2s − 1) U_SNR ( M² − |Σ_{k=1}^{M} exp( −j (2π/λ) d_{x_k} h_θ )|² ) )^{−T} ) − 8(1 − |h_θ|) ( 1 + s(1 − s) U_SNR ( M² − |Σ_{k=1}^{M} exp( −j (4π/λ) d_{x_k} h_θ )|² ) )^{−T} ]. (55)
In order to find an optimal value of s that maximizes H G⁻¹ H^T, ∀h_θ, we have considered the derivative of H G⁻¹ H^T w.r.t. s. The calculation (not reported here) is straightforward, and it is easy to see that ∂(H G⁻¹ H^T)/∂s |_{s=1/2} = 0. Consequently, the Weiss-Weinstein bound only has to be optimized over h_θ, and it simplifies to

UWWB = sup_{h_θ} h_θ² (2 − |h_θ|)² ( 1 + (U_SNR/4) ( M² − |Σ_{k=1}^{M} exp( −j (2π/λ) d_{x_k} h_θ )|² ) )^{−2T} / [ 4(2 − |h_θ|) − 8(1 − |h_θ|) ( 1 + (U_SNR/4) ( M² − |Σ_{k=1}^{M} exp( −j (4π/λ) d_{x_k} h_θ )|² ) )^{−T} ]. (56)

In the classical case of a uniform linear array with inter-element spacing d (i.e., d_{x_k} = (k − 1)d), this expression can be simplified further by noticing that |Σ_{k=1}^{M} exp( −j (2π/λ) d_{x_k} h_θ )| = | sin( πMd h_θ/λ ) / sin( πd h_θ/λ ) |.
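A minimal numerical sketch of Eqn. (56) (toy uniform half-wavelength array, hypothetical SNR and snapshot values), taking the supremum over a simple grid of test points:

```python
import numpy as np

M, T, lam = 10, 50, 1.0
dx = 0.5 * lam * np.arange(M)        # uniform half-wavelength linear array
ss2, sn2 = 1.0, 1.0
U = ss2**2 / (sn2*(M*ss2 + sn2))     # U_SNR

def Q(h):
    """(1 + U_SNR/4 (M^2 - |sum_k e^{-j 2 pi d_k h / lam}|^2))^{-T}."""
    s = np.sum(np.exp(-1j*2*np.pi/lam*dx*h))
    return (1.0 + 0.25*U*(M**2 - abs(s)**2))**(-T)

def uwwb(h):                          # Eqn. (56) evaluated at one test point
    num = h**2 * (2 - abs(h))**2 * Q(h)**2
    den = 4*(2 - abs(h)) - 8*(1 - abs(h))*Q(2*h)
    return num / den

grid = np.linspace(1e-3, 1.0, 1000)
bound = max(uwwb(h) for h in grid)
```

Since Q(2h) ≤ 1, the denominator is bounded below by 4|h| > 0, so the grid search is numerically well behaved; a finer grid or the array ambiguity diagram can refine the supremum.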
2) Conditional observation model: The closed-form expression of the conditional Weiss-Weinstein bound, denoted CWWB, is given by
CWWB = h_θ² (2 − |h_θ|)² exp( −4s(1 − s) C_SNR ( M − Σ_{k=1}^{M} cos( (2π/λ) d_{x_k} h_θ ) ) ) / [ 2(2 − |h_θ|) ( exp( 4s(2s − 1) C_SNR ( M − Σ_{k=1}^{M} cos( (2π/λ) d_{x_k} h_θ ) ) ) + exp( 4(1 − s)(1 − 2s) C_SNR ( M − Σ_{k=1}^{M} cos( (2π/λ) d_{x_k} h_θ ) ) ) ) − 8(1 − |h_θ|) exp( −2s(1 − s) C_SNR ( M − Σ_{k=1}^{M} cos( (4π/λ) d_{x_k} h_θ ) ) ) ]. (57)

Again, it is easy to check that ∂(H G⁻¹ H^T)/∂s |_{s=1/2} = 0. Consequently, one optimal value of s that maximizes H G⁻¹ H^T, ∀h_θ, is s = 1/2. The Weiss-Weinstein bound is then simplified as follows:
CWWB = sup_{h_θ} h_θ² (2 − |h_θ|)² exp( −C_SNR ( M − Σ_{k=1}^{M} cos( (2π/λ) d_{x_k} h_θ ) ) ) / [ 4(2 − |h_θ|) − 8(1 − |h_θ|) exp( −(C_SNR/2) ( M − Σ_{k=1}^{M} cos( (4π/λ) d_{x_k} h_θ ) ) ) ]. (58)

In the classical case of a uniform linear array with inter-element spacing d (i.e., d_{x_k} = (k − 1)d), this expression can be simplified further by noticing that Σ_{k=1}^{M} cos( (2π/λ) d_{x_k} h_θ ) = cos( π(M − 1)d h_θ/λ ) sin( πMd h_θ/λ ) / sin( πd h_θ/λ ).

VI. SIMULATION RESULTS AND ANALYSIS
As an illustration of the previously derived results, we first consider the scenario proposed in [23] Fig. 5, i.e., DOA estimation under the unconditional model using a uniform circular array consisting of M = 16 sensors with a half-wavelength inter-sensor spacing. The number of snapshots is T = 100. Since the array is symmetric, the estimation performance for the parameters u and v is the same; this is why only the performance with respect to the parameter u is given in Fig. 2. The Weiss-Weinstein bound is computed using Eqns. (45), (46) and (47). The Ziv-Zakai bound is computed using Eqn. (24) in [23]. The empirical global mean square error (MSE) of the maximum a posteriori (MAP) estimator is obtained over Monte Carlo trials. As in [23] Fig. (1b), one observes that both the Weiss-Weinstein bound and the Ziv-Zakai bound are tight w.r.t. the MSE of the MAP estimator and capture the SNR threshold. Note that, in [23] Fig. (1b), the Weiss-Weinstein bound was computed only numerically.
Fig. 2. Ziv-Zakai bound, Weiss-Weinstein bound and empirical MSE of the MAP estimator: unconditional case.
To the best of our knowledge, there are no closed-form expressions of the Ziv-Zakai bound for the conditional model available in the literature. In this case, we consider 3D source localization using a V-shaped array. Indeed, it has been shown that this kind of array is able to outperform other classical planar arrays, more particularly the uniform circular array [47]. This array is made of two uniform linear array branches, with 6 sensors located on each branch and one sensor located at the origin. We denote by ∆ the angle between these two branches. The sensors are equally spaced at half a wavelength. The number of snapshots is T = 20. Fig. 3 shows the behavior of the Weiss-Weinstein bound with respect to the opening angle ∆. One can observe that, when ∆ varies, the estimation performance for the parameter u varies only slightly. On the contrary, the estimation performance for the parameter v depends strongly on ∆. When ∆ increases from 10° to 90°, the Weiss-Weinstein bound of v decreases, as does the SNR threshold. Fig. 3 also shows that ∆ = 90° is the optimal value, which differs from the optimal value of about 53° reported in [47], since the assumptions concerning the source signal are not the same.

VII. CONCLUSION
In this paper, the Weiss-Weinstein bound on the mean square error has been studied in the array processing context. In order to analyze the unconditional and conditional signal source models, the structure of the bound has been detailed for both Gaussian observation models, with parameterized mean or parameterized covariance matrix.

APPENDIX
A. Closed-form expression of ή_θ(α, β, u, v) under the Gaussian observation model with parameterized covariance

Since y(t) ∼ CN(0, R_y(θ)), one has

ή_θ(α, β, u, v) = ( |R_y(θ)|^{T(α+β−1)} / ( π^{MT} |R_y(θ+u)|^{Tα} |R_y(θ+v)|^{Tβ} ) ) ∫_Ω exp( −Σ_{t=1}^{T} y^H(t) Γ⁻¹ y(t) ) dY, (59)

where Γ⁻¹ = α R_y⁻¹(θ+u) + β R_y⁻¹(θ+v) − (α+β−1) R_y⁻¹(θ).

Fig. 3. Weiss-Weinstein bounds of the V-shaped array w.r.t. the opening angle ∆.

Then, since

∫_Ω exp( −Σ_{t=1}^{T} y^H(t) Γ⁻¹ y(t) ) dY = π^{MT} |Γ|^T, (60)

one has

ή_θ(α, β, u, v) = |R_y(θ)|^{T(α+β−1)} |Γ|^T / ( |R_y(θ+u)|^{Tα} |R_y(θ+v)|^{Tβ} ) = |R_y(θ)|^{T(α+β−1)} / ( |R_y(θ+u)|^{Tα} |R_y(θ+v)|^{Tβ} |Γ⁻¹|^T ). (61)

B. Closed-form expression of ή_θ(α, β, u, v) under the Gaussian observation model with parameterized mean

Since y(t) ∼ CN(f_t(θ), R_y), one has

ή_θ(α, β, u, v) = ( 1 / ( π^{MT} |R_y|^T ) ) ∫_Ω exp( −Σ_{t=1}^{T} ξ(t) ) dY, (62)

with

ξ(t) = α (y − f_t(θ+u))^H R_y⁻¹ (y − f_t(θ+u)) + β (y − f_t(θ+v))^H R_y⁻¹ (y − f_t(θ+v)) + (1−α−β) (y − f_t(θ))^H R_y⁻¹ (y − f_t(θ)) = y^H R_y⁻¹ y + α f_t^H(θ+u) R_y⁻¹ f_t(θ+u) + β f_t^H(θ+v) R_y⁻¹ f_t(θ+v) + (1−α−β) f_t^H(θ) R_y⁻¹ f_t(θ) − 2 Re{ y^H R_y⁻¹ ( α f_t(θ+u) + β f_t(θ+v) + (1−α−β) f_t(θ) ) }. (63)

For simplicity, the dependence of f and y on t is not emphasized.
Let us set x = y − ( α f_t(θ+u) + β f_t(θ+v) + (1−α−β) f_t(θ) ). Consequently,

x^H R_y⁻¹ x = y^H R_y⁻¹ y − 2 Re{ y^H R_y⁻¹ ( α f_t(θ+u) + β f_t(θ+v) + (1−α−β) f_t(θ) ) } + ( α f_t(θ+u) + β f_t(θ+v) + (1−α−β) f_t(θ) )^H R_y⁻¹ ( α f_t(θ+u) + β f_t(θ+v) + (1−α−β) f_t(θ) ), (64)

and ξ(t) can be rewritten as

ξ(t) = x^H R_y⁻¹ x + ξ́(t), (65)

where

ξ́(t) = α(1−α) f_t^H(θ+u) R_y⁻¹ f_t(θ+u) + β(1−β) f_t^H(θ+v) R_y⁻¹ f_t(θ+v) + (1−α−β)(α+β) f_t^H(θ) R_y⁻¹ f_t(θ) − 2 Re{ αβ f_t^H(θ+u) R_y⁻¹ f_t(θ+v) + α(1−α−β) f_t^H(θ+u) R_y⁻¹ f_t(θ) + β(1−α−β) f_t^H(θ+v) R_y⁻¹ f_t(θ) }. (66)

Note that ξ́(t) is independent of x. By defining X = [x(1), x(2), …, x(T)], the function ή_θ(α, β, u, v) becomes

ή_θ(α, β, u, v) = ( 1 / ( π^{MT} |R_y|^T ) ) ∫_Ω exp( −Σ_{t=1}^{T} ( x^H R_y⁻¹ x + ξ́(t) ) ) dX = exp( −Σ_{t=1}^{T} ξ́(t) ), (67)

since ( 1 / ( π^{MT} |R_y|^T ) ) ∫_Ω exp( −Σ_{t=1}^{T} x^H R_y⁻¹ x ) dX = 1.

C. Closed-form expressions of |m₁ R_y⁻¹(θ₁) + m₂ R_y⁻¹(θ₂)| and |m₁ R_y⁻¹(θ₁) + m₂ R_y⁻¹(θ₂) + m₃ R_y⁻¹(θ₃)|

Note that this calculation is actually an extension of the result obtained in [27], Appendix A, in which m₁ = m₂ = 1/2 and m₃ = 0, but it follows the same method. The inverse of R_y can be deduced from the Woodbury formula:

R_y⁻¹(θ) = (1/σ_n²) ( I_M − σ_s² a(θ) a^H(θ) / ( σ_s² ‖a(θ)‖² + σ_n² ) ).

Then,

Σ_{k=1}^{3} m_k R_y⁻¹(θ_k) = (1/σ_n²) Σ_{k=1}^{3} m_k ( I − σ_s² a(θ_k) a^H(θ_k) / ( σ_s² ‖a(θ_k)‖² + σ_n² ) ). (68)

Since the rank of a(θ_k) a^H(θ_k) is equal to 1 and since θ₁, θ₂, and θ₃ are distinct (except when h_k = h_l = 0), the above matrix has M − 3 eigenvalues equal to (1/σ_n²) Σ_{k=1}^{3} m_k and 3 eigenvalues corresponding to eigenvectors made from linear combinations of a(θ₁), a(θ₂), and a(θ₃): a(θ₁) + p a(θ₂) + q a(θ₃).
The determinant will then be the product of these M eigenvalues. Let us set

φ_k = σ_s² / ( σ_s² ‖a(θ_k)‖² + σ_n² ), k = 1, 2, 3. (69)

Then, the three aforementioned eigenvalues, denoted λ, must satisfy

( Σ_{k=1}^{3} m_k R_y⁻¹(θ_k) ) ( a(θ₁) + p a(θ₂) + q a(θ₃) ) = λ ( a(θ₁) + p a(θ₂) + q a(θ₃) ). (70)

Note that we are only interested in the eigenvalues. Consequently, the linear combination of a(θ₁), a(θ₂), and a(θ₃) can be written a(θ₁) + p a(θ₂) + q a(θ₃) instead of r a(θ₁) + p a(θ₂) + q a(θ₃).
By using Eqn. (68) in the above equation and after a factorization with respect to a(θ₁), a(θ₂), and a(θ₃), one obtains

( x − m₁φ₁ ‖a(θ₁)‖² − p m₁φ₁ a^H(θ₁)a(θ₂) − q m₁φ₁ a^H(θ₁)a(θ₃) ) a(θ₁) + ( −m₂φ₂ a^H(θ₂)a(θ₁) + p ( x − m₂φ₂ ‖a(θ₂)‖² ) − q m₂φ₂ a^H(θ₂)a(θ₃) ) a(θ₂) + ( −m₃φ₃ a^H(θ₃)a(θ₁) − p m₃φ₃ a^H(θ₃)a(θ₂) + q ( x − m₃φ₃ ‖a(θ₃)‖² ) ) a(θ₃) = 0, (71)

where

x = 1 − σ_n² λ. (72)

Consequently, the coefficients of a(θ₁), a(θ₂), and a(θ₃) are equal to zero, leading to a system of three equations with two unknowns (p and q). Solving the first two equations for p and q, and applying the solution to the last equation, one obtains the following polynomial equation in x:

x³ − x² Σ_{k=1}^{3} m_kφ_k ‖a(θ_k)‖² − x Σ_{k=1}^{3} Σ_{k'>k} m_kφ_k m_{k'}φ_{k'} ( |a^H(θ_k)a(θ_{k'})|² − ‖a(θ_k)‖² ‖a(θ_{k'})‖² ) − m₁m₂m₃ φ₁φ₂φ₃ ( ‖a(θ₁)‖² ‖a(θ₂)‖² ‖a(θ₃)‖² − |a^H(θ₁)a(θ₂)|² ‖a(θ₃)‖² − |a^H(θ₂)a(θ₃)|² ‖a(θ₁)‖² − |a^H(θ₁)a(θ₃)|² ‖a(θ₂)‖² + a^H(θ₁)a(θ₂) a^H(θ₂)a(θ₃) a^H(θ₃)a(θ₁) + a^H(θ₁)a(θ₃) a^H(θ₃)a(θ₂) a^H(θ₂)a(θ₁) ) = 0. (75)

Since we are only interested in the product of the three eigenvalues λ_k = (1 − x_k)/σ_n², there is no need to solve this polynomial explicitly: it suffices to evaluate the monic polynomial above at x = 1. This leads to Eqn. (32) with Σ_{k=1}^{3} m_k = 1. Of course, the closed-form expression (31) of |m₁ R_y⁻¹(θ₁) + m₂ R_y⁻¹(θ₂)| is obtained by letting m₃ = 0 and Σ_{k=1}^{2} m_k = 1 in Eqn. (32).

D. Closed-form expressions of ζ_θ(μ, ρ)

Remind that the function ζ_θ(μ, ρ) is defined by Eqn. (18). Let us define p as the number of parameters per source (assumed to be the same for all sources).
Note that, from Eqn. (16), $\sum_{k=1}^{3} m_k = 1$. The coefficients $p$ and $q$ are given by

$$p = \frac{ m_2\varphi_2\, a^H(\theta_2) \Big( m_1\varphi_1\, a(\theta_1) a^H(\theta_1) + \big( x - m_1\varphi_1 \|a(\theta_1)\|^2 \big) I \Big) a(\theta_3) }{ m_1\varphi_1\, a^H(\theta_1) \Big( m_2\varphi_2\, a(\theta_2) a^H(\theta_2) + \big( x - m_2\varphi_2 \|a(\theta_2)\|^2 \big) I \Big) a(\theta_3) }, \quad (73)$$

and

$$q = \frac{ \big( x - m_1\varphi_1 \|a(\theta_1)\|^2 \big) \big( x - m_2\varphi_2 \|a(\theta_2)\|^2 \big) - m_1\varphi_1 m_2\varphi_2\, a^H(\theta_1) a(\theta_2)\, a^H(\theta_2) a(\theta_1) }{ m_1\varphi_1\, a^H(\theta_1) \Big( m_2\varphi_2\, a(\theta_2) a^H(\theta_2) + \big( x - m_2\varphi_2 \|a(\theta_2)\|^2 \big) I \Big) a(\theta_3) }. \quad (74)$$

November 29, 2012 DRAFT

Then, without loss of generality, the full parameter vector $\theta$ can be decomposed as $\theta = \big[ \theta_1^T \cdots \theta_N^T \big]^T$, where $\theta_i = [\theta_{i,1} \cdots \theta_{i,p}]^T$, $i = 1, \ldots, N$, with $q = Np$. Recall that $\mu = [0 \cdots \mu_i \cdots 0]^T$ and $\rho = [0 \cdots \rho_j \cdots 0]^T$. There are two distinct cases to study: either both indices $i$ and $j$ are such that $(m-1)p + 1 \leq i \leq mp$ and $(m-1)p + 1 \leq j \leq mp$ for some $m = 1, \ldots, N$, or $(m-1)p + 1 \leq i \leq mp$, $m = 1, \ldots, N$, and $(n-1)p + 1 \leq j \leq np$, $n = 1, \ldots, N$, with $m \neq n$. Therefore, let us denote

$$\mu_m = [0 \cdots h_i \cdots 0]^T \in \mathbb{R}^p, \quad \rho_m = [0 \cdots h_j \cdots 0]^T \in \mathbb{R}^p, \quad \text{if } (m-1)p + 1 \leq i, j \leq mp, \quad (78)$$

and

$$\mu_m = [0 \cdots h_i \cdots 0]^T \in \mathbb{R}^p, \quad \rho_n = [0 \cdots h_j \cdots 0]^T \in \mathbb{R}^p, \quad \text{if } (m-1)p + 1 \leq i \leq mp \text{ and } (n-1)p + 1 \leq j \leq np, \text{ with } m \neq n. \quad (79)$$
1) The case where $(m-1)p + 1 \leq i, j \leq mp$: In this case, one has

$$A(\theta + \mu) - A(\theta + \rho) = [\,0 \;\cdots\; a(\theta_m + \mu_m) - a(\theta_m + \rho_m) \;\cdots\; 0\,] \in \mathbb{C}^{M \times N}, \quad (80)$$

and consequently,

$$\zeta_\theta(\mu, \rho) = \Big\| R_n^{-1/2} \big( a(\theta_m + \mu_m) - a(\theta_m + \rho_m) \big) \Big\|^2 \sum_{t=1}^{T} \big| \{s(t)\}_m \big|^2. \quad (81)$$

Due to Eqn. (28), one has

$$\Big\| R_n^{-1/2} \big( a(\theta_m + \mu_m) - a(\theta_m + \rho_m) \big) \Big\|^2 = \sum_{i=1}^{M} \sum_{j=1}^{M} \big\{ R_n^{-1} \big\}_{i,j} \exp\Big( j\tfrac{2\pi}{\lambda} \big( r_j^T - r_i^T \big) \theta_m \Big) \Big( \exp\Big( -j\tfrac{2\pi}{\lambda} r_i^T \mu_m \Big) - \exp\Big( -j\tfrac{2\pi}{\lambda} r_i^T \rho_m \Big) \Big) \Big( \exp\Big( j\tfrac{2\pi}{\lambda} r_j^T \mu_m \Big) - \exp\Big( j\tfrac{2\pi}{\lambda} r_j^T \rho_m \Big) \Big). \quad (82)$$

In particular, in the case where $R_n = \sigma_n^2 I$, one obtains

$$\Big\| R_n^{-1/2} \big( a(\theta_m + \mu_m) - a(\theta_m + \rho_m) \big) \Big\|^2 = \frac{1}{\sigma_n^2} \sum_{i=1}^{M} \Big| \exp\Big( -j\tfrac{2\pi}{\lambda} r_i^T \mu_m \Big) - \exp\Big( -j\tfrac{2\pi}{\lambda} r_i^T \rho_m \Big) \Big|^2. \quad (83)$$
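The passage to Eqn. (83) rests on the common phase factor $e^{j\frac{2\pi}{\lambda} r_i^T \theta_m}$ cancelling inside each modulus. This can be checked numerically; the sensor positions, wavelength, and test points below are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
M, lam = 10, 0.3                       # number of sensors and wavelength (arbitrary)
r = rng.random((M, 2))                 # planar sensor positions r_i
theta = rng.random(2)                  # nominal parameter point
mu, rho = 0.1 * rng.random(2), 0.1 * rng.random(2)   # test points

# Assumed steering-vector model: [a(t)]_i = exp(j 2*pi/lam * r_i^T t)
a = lambda t: np.exp(1j * 2 * np.pi / lam * (r @ t))

lhs = np.linalg.norm(a(theta + mu) - a(theta + rho)) ** 2
rhs = np.sum(np.abs(np.exp(-1j * 2 * np.pi / lam * (r @ mu))
                    - np.exp(-1j * 2 * np.pi / lam * (r @ rho))) ** 2)
assert np.isclose(lhs, rhs)            # theta has dropped out, as in Eqn. (83)
```

The same cancellation is what makes the final bound independent of $\theta$.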
2) The case where $(m-1)p + 1 \leq i \leq mp$ and $(n-1)p + 1 \leq j \leq np$: Without loss of generality, we assume that $n > m$. Then,

$$A(\theta + \mu) - A(\theta + \rho) = [\, a(\theta_1) - a(\theta_1) \;\cdots\; a(\theta_m + \mu_m) - a(\theta_m) \;\cdots\; a(\theta_n) - a(\theta_n + \rho_n) \;\cdots\; a(\theta_N) - a(\theta_N) \,] = [\, 0 \;\cdots\; a(\theta_m + \mu_m) - a(\theta_m) \;\cdots\; a(\theta_n) - a(\theta_n + \rho_n) \;\cdots\; 0 \,], \quad (84)$$

and consequently,

$$\zeta_\theta(\mu, \rho) = \sum_{t=1}^{T} \Big\| R_n^{-1/2} \Big( \big( a(\theta_m + \mu_m) - a(\theta_m) \big) \{s(t)\}_m + \big( a(\theta_n) - a(\theta_n + \rho_n) \big) \{s(t)\}_n \Big) \Big\|^2. \quad (85)$$
Let us set $\kappa = R_n^{-1/2} \big( a(\theta_m + \mu_m) - a(\theta_m) \big)$ and $\varrho = R_n^{-1/2} \big( a(\theta_n) - a(\theta_n + \rho_n) \big)$. Then, $\zeta_\theta(\mu, \rho)$ can be rewritten as

$$\zeta_\theta(\mu, \rho) = \sum_{t=1}^{T} \big\| \kappa \{s(t)\}_m + \varrho \{s(t)\}_n \big\|^2 = \sum_{t=1}^{T} \Big( \kappa^H \kappa \big| \{s(t)\}_m \big|^2 + \kappa^H \varrho\, \{s(t)\}_m^* \{s(t)\}_n + \varrho^H \kappa\, \{s(t)\}_m \{s(t)\}_n^* + \varrho^H \varrho \big| \{s(t)\}_n \big|^2 \Big) = \kappa^H \kappa \sum_{t=1}^{T} \big| \{s(t)\}_m \big|^2 + \varrho^H \varrho \sum_{t=1}^{T} \big| \{s(t)\}_n \big|^2 + 2 \operatorname{Re} \Big( \kappa^H \varrho \sum_{t=1}^{T} \{s(t)\}_m^* \{s(t)\}_n \Big). \quad (86)$$

By using the structure of the steering matrix $A$, this leads to

$$\kappa^H \kappa = \sum_{i=1}^{M} \sum_{j=1}^{M} \big\{ R_n^{-1} \big\}_{i,j} \exp\Big( j\tfrac{2\pi}{\lambda} \big( r_j^T - r_i^T \big) \theta_m \Big) \Big( \exp\Big( -j\tfrac{2\pi}{\lambda} r_i^T \mu_m \Big) - 1 \Big) \Big( \exp\Big( j\tfrac{2\pi}{\lambda} r_j^T \mu_m \Big) - 1 \Big),$$

$$\varrho^H \varrho = \sum_{i=1}^{M} \sum_{j=1}^{M} \big\{ R_n^{-1} \big\}_{i,j} \exp\Big( j\tfrac{2\pi}{\lambda} \big( r_j^T - r_i^T \big) \theta_n \Big) \Big( \exp\Big( -j\tfrac{2\pi}{\lambda} r_i^T \rho_n \Big) - 1 \Big) \Big( \exp\Big( j\tfrac{2\pi}{\lambda} r_j^T \rho_n \Big) - 1 \Big),$$

$$\kappa^H \varrho = - \sum_{i=1}^{M} \sum_{j=1}^{M} \big\{ R_n^{-1} \big\}_{i,j} \exp\Big( j\tfrac{2\pi}{\lambda} \big( r_j^T \theta_n - r_i^T \theta_m \big) \Big) \Big( \exp\Big( -j\tfrac{2\pi}{\lambda} r_i^T \mu_m \Big) - 1 \Big) \Big( \exp\Big( j\tfrac{2\pi}{\lambda} r_j^T \rho_n \Big) - 1 \Big). \quad (87)$$

E. Proof of Eqn. (41), (42) and (43)
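The expansion in Eqn. (86) is the usual polarization of the squared norm of a sum. A quick numerical sketch with arbitrary complex vectors $\kappa$, $\varrho$ and arbitrary source samples $\{s(t)\}_m$, $\{s(t)\}_n$:

```python
import numpy as np

rng = np.random.default_rng(2)
M, T = 6, 5                                            # dimensions (arbitrary)
kappa = rng.standard_normal(M) + 1j * rng.standard_normal(M)
varrho = rng.standard_normal(M) + 1j * rng.standard_normal(M)
sm = rng.standard_normal(T) + 1j * rng.standard_normal(T)   # {s(t)}_m samples
sn = rng.standard_normal(T) + 1j * rng.standard_normal(T)   # {s(t)}_n samples

# Direct evaluation of sum_t || kappa s_m(t) + varrho s_n(t) ||^2 ...
direct = sum(np.linalg.norm(kappa * sm[t] + varrho * sn[t]) ** 2 for t in range(T))

# ... versus the expanded form of Eqn. (86). np.vdot conjugates its first argument.
expanded = (np.vdot(kappa, kappa).real * np.sum(np.abs(sm) ** 2)
            + np.vdot(varrho, varrho).real * np.sum(np.abs(sn) ** 2)
            + 2 * np.real(np.vdot(kappa, varrho) * np.sum(np.conj(sm) * sn)))

assert np.isclose(direct, expanded)
```

Only the three scalars $\kappa^H \kappa$, $\varrho^H \varrho$, and $\kappa^H \varrho$ are therefore needed, which is what (87) provides.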
In fact, one only has to prove Eqn. (43), since Eqn. (41) and (42) can be obtained by letting $h_u = h_v$ and $s_u = s_v$ in Eqn. (43) and by using $(h_u, s_u)$ for Eqn. (41) and $(h_v, s_v)$ for Eqn. (42). By plugging Eqn. (30) and (32) into Eqn. (16), and by considering the following expressions

$$a^H(\theta + h_u)\, a(\theta + h_v) = \sum_{i=1}^{M} \exp\Big( j\tfrac{2\pi}{\lambda} \big( d_{y_i} h_v - d_{x_i} h_u \big) \Big) = \big( a^H(\theta + h_v)\, a(\theta + h_u) \big)^H,$$

$$a^H(\theta \pm h_u)\, a(\theta) = \sum_{i=1}^{M} \exp\Big( \mp j\tfrac{2\pi}{\lambda}\, d_{x_i} h_u \Big), \quad \text{and} \quad a^H(\theta + h_u)\, a(\theta - h_u) = \sum_{i=1}^{M} \exp\Big( -j\tfrac{4\pi}{\lambda}\, d_{x_i} h_u \Big),$$

one obtains the closed-form expressions for the set of functions $\acute\eta_\theta(\alpha, \beta, u, v)$:

$$\acute\eta_\theta(s_u, s_v, h_u, h_v) = \Bigg[ - U_{\mathrm{SNR}} \Bigg( s_u s_v \Big( \Big| \sum_{k=1}^{M} e^{-j\frac{2\pi}{\lambda}(d_{x_k} h_u - d_{y_k} h_v)} \Big|^2 - M^2 \Big) + s_u (1 - s_u - s_v) \Big( \Big| \sum_{k=1}^{M} e^{-j\frac{2\pi}{\lambda} d_{x_k} h_u} \Big|^2 - M^2 \Big) + s_v (1 - s_u - s_v) \Big( \Big| \sum_{k=1}^{M} e^{-j\frac{2\pi}{\lambda} d_{y_k} h_v} \Big|^2 - M^2 \Big) \Bigg) - s_u s_v (1 - s_u - s_v)\, U_{\mathrm{SNR}}^2 \sigma_n^2 \sigma_s^2 \Bigg( \sum_{k=1}^{M} e^{j\frac{2\pi}{\lambda} d_{y_k} h_v} \sum_{k=1}^{M} e^{-j\frac{2\pi}{\lambda} d_{x_k} h_u} \sum_{k=1}^{M} e^{j\frac{2\pi}{\lambda}(d_{x_k} h_u - d_{y_k} h_v)} + \sum_{k=1}^{M} e^{-j\frac{2\pi}{\lambda} d_{y_k} h_v} \sum_{k=1}^{M} e^{j\frac{2\pi}{\lambda} d_{x_k} h_u} \sum_{k=1}^{M} e^{-j\frac{2\pi}{\lambda}(d_{x_k} h_u - d_{y_k} h_v)} - M \Big| \sum_{k=1}^{M} e^{-j\frac{2\pi}{\lambda} d_{y_k} h_v} \Big|^2 - M \Big| \sum_{k=1}^{M} e^{-j\frac{2\pi}{\lambda} d_{x_k} h_u} \Big|^2 - M \Big| \sum_{k=1}^{M} e^{-j\frac{2\pi}{\lambda}(d_{x_k} h_u - d_{y_k} h_v)} \Big|^2 + M^3 \Bigg) \Bigg] T, \quad (88)$$

$$\acute\eta_\theta(1 - s_u, 1 - s_v, -h_u, -h_v) = \Bigg[ - U_{\mathrm{SNR}} \Bigg( (1 - s_u)(1 - s_v) \Big( \Big| \sum_{k=1}^{M} e^{j\frac{2\pi}{\lambda}(d_{x_k} h_u - d_{y_k} h_v)} \Big|^2 - M^2 \Big) + (1 - s_u)(s_u + s_v - 1) \Big( \Big| \sum_{k=1}^{M} e^{j\frac{2\pi}{\lambda} d_{x_k} h_u} \Big|^2 - M^2 \Big) + (1 - s_v)(s_u + s_v - 1) \Big( \Big| \sum_{k=1}^{M} e^{j\frac{2\pi}{\lambda} d_{y_k} h_v} \Big|^2 - M^2 \Big) \Bigg) - (1 - s_u)(1 - s_v)(s_u + s_v - 1)\, U_{\mathrm{SNR}}^2 \sigma_n^2 \sigma_s^2 \Bigg( \sum_{k=1}^{M} e^{j\frac{2\pi}{\lambda} d_{y_k} h_v} \sum_{k=1}^{M} e^{-j\frac{2\pi}{\lambda} d_{x_k} h_u} \sum_{k=1}^{M} e^{j\frac{2\pi}{\lambda}(d_{x_k} h_u - d_{y_k} h_v)} + \sum_{k=1}^{M} e^{-j\frac{2\pi}{\lambda} d_{y_k} h_v} \sum_{k=1}^{M} e^{j\frac{2\pi}{\lambda} d_{x_k} h_u} \sum_{k=1}^{M} e^{-j\frac{2\pi}{\lambda}(d_{x_k} h_u - d_{y_k} h_v)} - M \Big| \sum_{k=1}^{M} e^{-j\frac{2\pi}{\lambda} d_{y_k} h_v} \Big|^2 - M \Big| \sum_{k=1}^{M} e^{-j\frac{2\pi}{\lambda} d_{x_k} h_u} \Big|^2 - M \Big| \sum_{k=1}^{M} e^{-j\frac{2\pi}{\lambda}(d_{x_k} h_u - d_{y_k} h_v)} \Big|^2 + M^3 \Bigg) \Bigg] T, \quad (89)$$

$$\acute\eta_\theta(s_u, 1 - s_v, h_u, -h_v) = \Bigg[ - U_{\mathrm{SNR}} \Bigg( s_u (1 - s_v) \Big( \Big| \sum_{k=1}^{M} e^{-j\frac{2\pi}{\lambda}(d_{x_k} h_u + d_{y_k} h_v)} \Big|^2 - M^2 \Big) + s_u (s_v - s_u) \Big( \Big| \sum_{k=1}^{M} e^{-j\frac{2\pi}{\lambda} d_{x_k} h_u} \Big|^2 - M^2 \Big) + (1 - s_v)(s_v - s_u) \Big( \Big| \sum_{k=1}^{M} e^{j\frac{2\pi}{\lambda} d_{y_k} h_v} \Big|^2 - M^2 \Big) \Bigg) - s_u (1 - s_v)(s_v - s_u)\, U_{\mathrm{SNR}}^2 \sigma_n^2 \sigma_s^2 \Bigg( \sum_{k=1}^{M} e^{j\frac{2\pi}{\lambda} d_{y_k} h_v} \sum_{k=1}^{M} e^{j\frac{2\pi}{\lambda} d_{x_k} h_u} \sum_{k=1}^{M} e^{-j\frac{2\pi}{\lambda}(d_{x_k} h_u + d_{y_k} h_v)} + \sum_{k=1}^{M} e^{-j\frac{2\pi}{\lambda} d_{y_k} h_v} \sum_{k=1}^{M} e^{-j\frac{2\pi}{\lambda} d_{x_k} h_u} \sum_{k=1}^{M} e^{j\frac{2\pi}{\lambda}(d_{x_k} h_u + d_{y_k} h_v)} - M \Big| \sum_{k=1}^{M} e^{j\frac{2\pi}{\lambda} d_{y_k} h_v} \Big|^2 - M \Big| \sum_{k=1}^{M} e^{-j\frac{2\pi}{\lambda} d_{x_k} h_u} \Big|^2 - M \Big| \sum_{k=1}^{M} e^{-j\frac{2\pi}{\lambda}(d_{x_k} h_u + d_{y_k} h_v)} \Big|^2 + M^3 \Bigg) \Bigg] T, \quad (90)$$

$$\acute\eta_\theta(1 - s_u, s_v, -h_u, h_v) = \Bigg[ - U_{\mathrm{SNR}} \Bigg( s_v (1 - s_u) \Big( \Big| \sum_{k=1}^{M} e^{-j\frac{2\pi}{\lambda}(d_{x_k} h_u + d_{y_k} h_v)} \Big|^2 - M^2 \Big) + s_v (s_u - s_v) \Big( \Big| \sum_{k=1}^{M} e^{-j\frac{2\pi}{\lambda} d_{x_k} h_u} \Big|^2 - M^2 \Big) + (1 - s_u)(s_u - s_v) \Big( \Big| \sum_{k=1}^{M} e^{-j\frac{2\pi}{\lambda} d_{y_k} h_v} \Big|^2 - M^2 \Big) \Bigg) - s_v (1 - s_u)(s_u - s_v)\, U_{\mathrm{SNR}}^2 \sigma_n^2 \sigma_s^2 \Bigg( \sum_{k=1}^{M} e^{j\frac{2\pi}{\lambda} d_{y_k} h_v} \sum_{k=1}^{M} e^{j\frac{2\pi}{\lambda} d_{x_k} h_u} \sum_{k=1}^{M} e^{-j\frac{2\pi}{\lambda}(d_{x_k} h_u + d_{y_k} h_v)} + \sum_{k=1}^{M} e^{-j\frac{2\pi}{\lambda} d_{y_k} h_v} \sum_{k=1}^{M} e^{-j\frac{2\pi}{\lambda} d_{x_k} h_u} \sum_{k=1}^{M} e^{j\frac{2\pi}{\lambda}(d_{x_k} h_u + d_{y_k} h_v)} - M \Big| \sum_{k=1}^{M} e^{-j\frac{2\pi}{\lambda} d_{y_k} h_v} \Big|^2 - M \Big| \sum_{k=1}^{M} e^{-j\frac{2\pi}{\lambda} d_{x_k} h_u} \Big|^2 - M \Big| \sum_{k=1}^{M} e^{-j\frac{2\pi}{\lambda}(d_{x_k} h_u + d_{y_k} h_v)} \Big|^2 + M^3 \Bigg) \Bigg] T, \quad (91)$$

$$\acute\eta_\theta(s_u, 0, h_u, 0) = s_u (1 - s_u)\, U_{\mathrm{SNR}} \Bigg( M^2 - \Big| \sum_{k=1}^{M} \exp\Big( -j\tfrac{2\pi}{\lambda}\, d_{x_k} h_u \Big) \Big|^2 \Bigg) T, \quad (92)$$

and

$$\acute\eta_\theta(0, s_v, 0, h_v) = s_v (1 - s_v)\, U_{\mathrm{SNR}} \Bigg( M^2 - \Big| \sum_{k=1}^{M} \exp\Big( -j\tfrac{2\pi}{\lambda}\, d_{y_k} h_v \Big) \Big|^2 \Bigg) T. \quad (93)$$

One notices that the set of functions $\acute\eta_\theta(\alpha, \beta, u, v)$ does not depend on $\theta$. Consequently, it is also easy to obtain the Weiss-Weinstein bound (through the set of functions $\eta(\alpha, \beta, u, v)$) by using the results of Section IV-B whatever the considered prior on $\theta$: only the integral $\int_\Theta p^{\alpha+\beta}(\theta + u)\, p^{\alpha+\beta-1}(\theta)\, d\theta$ has to be calculated or computed numerically. In our case of a uniform prior, the results are straightforward and lead to Eqn. (41), (42) and (43).

F. Proof of Eqn. (48), (49) and (50)
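The conditional-case proof that follows repeatedly uses the fact that, for $R_n = \sigma_n^2 I$, squared norms of steering-vector differences collapse to cosine sums, i.e. $\|a(\theta + h) - a(\theta)\|^2 = 2\big(M - \sum_{k=1}^{M} \cos(\frac{2\pi}{\lambda} d_k h)\big)$. A small numerical check of this identity; the sensor coordinates $d_k$, wavelength, and test point are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(3)
M, lam, h = 12, 0.5, 0.07              # sensors, wavelength, test point (arbitrary)
d = rng.random(M)                      # sensor coordinates d_k along one axis
theta = 0.4                            # nominal parameter value

# Assumed steering model: [a(t)]_k = exp(-j 2*pi/lam * d_k * t)
a = lambda t: np.exp(-1j * 2 * np.pi / lam * d * t)

lhs = np.linalg.norm(a(theta + h) - a(theta)) ** 2
rhs = 2 * (M - np.sum(np.cos(2 * np.pi / lam * d * h)))
assert np.isclose(lhs, rhs)            # per-sensor: |e^{j phi} - 1|^2 = 2 - 2 cos(phi)
```

Replacing $h$ by a difference or sum of test points gives the mixed cosine arguments appearing in the $\zeta_\theta$ table below.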
The set of functions $\acute\eta_\theta(\alpha, \beta, u, v)$ is given by Eqn. (17). So, it only remains to calculate the functions $\zeta_\theta(\mu, \rho)$ from Eqn. (18). Since $R_n = \sigma_n^2 I$, one obtains

$$\zeta_\theta(h_u, 0) = \zeta_\theta(-h_u, 0) = 2 C_{\mathrm{SNR}} \Big( M - \sum_{k=1}^{M} \cos\big( \tfrac{2\pi}{\lambda}\, d_{x_k} h_u \big) \Big),$$
$$\zeta_\theta(h_v, 0) = \zeta_\theta(-h_v, 0) = 2 C_{\mathrm{SNR}} \Big( M - \sum_{k=1}^{M} \cos\big( \tfrac{2\pi}{\lambda}\, d_{y_k} h_v \big) \Big),$$
$$\zeta_\theta(h_u, -h_u) = \zeta_\theta(-h_u, h_u) = 2 C_{\mathrm{SNR}} \Big( M - \sum_{k=1}^{M} \cos\big( \tfrac{4\pi}{\lambda}\, d_{x_k} h_u \big) \Big),$$
$$\zeta_\theta(h_v, -h_v) = \zeta_\theta(-h_v, h_v) = 2 C_{\mathrm{SNR}} \Big( M - \sum_{k=1}^{M} \cos\big( \tfrac{4\pi}{\lambda}\, d_{y_k} h_v \big) \Big),$$
$$\zeta_\theta(h_u, h_v) = \zeta_\theta(h_v, h_u) = \zeta_\theta(-h_u, -h_v) = \zeta_\theta(-h_v, -h_u) = 2 C_{\mathrm{SNR}} \Big( M - \sum_{k=1}^{M} \cos\big( \tfrac{2\pi}{\lambda} (d_{x_k} h_u - d_{y_k} h_v) \big) \Big),$$
$$\zeta_\theta(-h_u, h_v) = \zeta_\theta(h_u, -h_v) = \zeta_\theta(h_v, -h_u) = \zeta_\theta(-h_v, h_u) = 2 C_{\mathrm{SNR}} \Big( M - \sum_{k=1}^{M} \cos\big( \tfrac{2\pi}{\lambda} (d_{x_k} h_u + d_{y_k} h_v) \big) \Big),$$
$$\zeta_\theta(h_u, h_u) = \zeta_\theta(h_v, h_v) = \zeta_\theta(-h_u, -h_u) = \zeta_\theta(-h_v, -h_v) = 0. \quad (94)$$

Again, since the set of functions $\zeta_\theta(\mu, \rho)$ does not depend on $\theta$, the set of functions $\acute\eta_\theta(\alpha, \beta, u, v)$, given by plugging the above equations into Eqn. (17), does not depend on $\theta$ either. Consequently, as in the unconditional case, the set of functions $\eta(\alpha, \beta, u, v)$ is obtained by using the results of Section IV-B whatever the considered prior on $\theta$. In our case of a uniform prior, the results are straightforward and lead to Eqn. (48), (49) and (50).

REFERENCES

[1] D. T. Vu, A. Renaux, R. Boyer, and S. Marcos, “Closed-form expression of the Weiss-Weinstein bound for 3D source localization: the conditional case,” in
Proc. of IEEE Workshop on Sensor Array and Multi-channel Processing (SAM), vol. 1, Kibutz Ma'ale Hahamisha, Israel, Oct. 2010, pp. 125–128.
[2] H. Cramér, Mathematical Methods of Statistics, ser. Princeton Mathematics. New York: Princeton University Press, Sep. 1946, vol. 9.
[3] E. W. Barankin, “Locally best unbiased estimates,” The Annals of Mathematical Statistics, vol. 20, no. 4, pp. 477–501, Dec. 1949.
[4] R. J. McAulay and L. P. Seidman, “A useful form of the Barankin lower bound and its application to PPM threshold analysis,” IEEE Transactions on Information Theory, vol. 15, no. 2, pp. 273–279, Mar. 1969.
[5] R. J. McAulay and E. M. Hofstetter, “Barankin bounds on parameter estimation,” IEEE Transactions on Information Theory, vol. 17, no. 6, pp. 669–676, Nov. 1971.
[6] J. S. Abel, “A bound on mean square estimate error,” IEEE Transactions on Information Theory, vol. 39, no. 5, pp. 1675–1680, Sep. 1993.
[7] E. Chaumette, J. Galy, A. Quinlan, and P. Larzabal, “A new Barankin bound approximation for the prediction of the threshold region performance of maximum likelihood estimators,” IEEE Transactions on Signal Processing, vol. 56, no. 11, pp. 5319–5333, Nov. 2008.
[8] K. Todros and J. Tabrikian, “General classes of performance lower bounds for parameter estimation - Part I: Non-Bayesian bounds for unbiased estimators,” IEEE Transactions on Information Theory, vol. 56, no. 10, pp. 5045–5063, Oct. 2010.
[9] H. L. Van Trees and K. L. Bell, Eds., Bayesian Bounds for Parameter Estimation and Nonlinear Filtering/Tracking. New York, NY, USA: Wiley/IEEE Press, Sep. 2007.
[10] J. Ziv and M. Zakai, “Some lower bounds on signal parameter estimation,” IEEE Transactions on Information Theory, vol. 15, no. 3, pp. 386–391, May 1969.
[11] S. Bellini and G. Tartara, “Bounds on error in signal parameter estimation,” IEEE Transactions on Communications, vol. 22, no. 3, pp. 340–342, Mar. 1974.
[12] K. L. Bell, Y. Steinberg, Y. Ephraim, and H. L. Van Trees, “Extended Ziv-Zakaï lower bound for vector parameter estimation,” IEEE Transactions on Information Theory, vol. 43, no. 2, pp. 624–637, Mar. 1997.
[13] A. J. Weiss and E. Weinstein, “A lower bound on the mean square error in random parameter estimation,” IEEE Transactions on Information Theory, vol. 31, no. 5, pp. 680–682, Sep. 1985.
[14] I. Rapoport and Y. Oshman, “Weiss-Weinstein lower bounds for Markovian systems. Part I: Theory,” IEEE Transactions on Signal Processing, vol. 55, no. 5, pp. 2016–2030, May 2007.
[15] A. Renaux, P. Forster, P. Larzabal, C. D. Richmond, and A. Nehorai, “A fresh look at the Bayesian bounds of the Weiss-Weinstein family,” IEEE Transactions on Signal Processing, vol. 56, no. 11, pp. 5334–5352, Nov. 2008.
[16] K. Todros and J. Tabrikian, “General classes of performance lower bounds for parameter estimation - Part II: Bayesian bounds,” IEEE Transactions on Information Theory, vol. 56, no. 10, pp. 5064–5082, Oct. 2010.
[17] Y. Rockah and P. Schultheiss, “Array shape calibration using sources in unknown locations - Part I: Far-field sources,” IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 35, no. 3, pp. 286–299, Mar. 1987.
[18] I. Reuven and H. Messer, “A Barankin-type lower bound on the estimation error of a hybrid parameter vector,” IEEE Transactions on Information Theory, vol. 43, no. 3, pp. 1084–1093, May 1997.
[19] S. Bay, B. Geller, A. Renaux, J.-P. Barbot, and J.-M. Brossier, “On the hybrid Cramér-Rao bound and its application to dynamical phase estimation,” IEEE Signal Processing Letters, vol. 15, pp. 453–456, 2008.
[20] Y. Noam and H. Messer, “Notes on the tightness of the hybrid Cramér-Rao lower bound,” IEEE Transactions on Signal Processing, vol. 57, no. 6, pp. 2074–2084, 2009.
[21] H. L. Van Trees, Detection, Estimation and Modulation Theory. New York, NY, USA: John Wiley & Sons, 1968, vol. 1.
[22] B. Ottersten, M. Viberg, P. Stoica, and A. Nehorai, “Exact and large sample maximum likelihood techniques for parameter estimation and detection in array processing,” in Radar Array Processing, S. S. Haykin, J. Litva, and T. J. Shepherd, Eds. Berlin: Springer-Verlag, 1993, ch. 4, pp. 99–151.
[23] K. L. Bell, Y. Ephraim, and H. L. Van Trees, “Explicit Ziv-Zakaï lower bound for bearing estimation,” IEEE Transactions on Signal Processing, vol. 44, no. 11, pp. 2810–2824, Nov. 1996.
[24] T. J. Nohara and S. Haykin, “Application of the Weiss-Weinstein bound to a two dimensional antenna array,” IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 36, no. 9, pp. 1533–1534, Sep. 1988.
[25] H. Nguyen and H. L. Van Trees, “Comparison of performance bounds for DOA estimation,” in Proc. of IEEE Workshop on Statistical Signal and Array Processing (SSAP), vol. 1, Jun. 1994, pp. 313–316.
[26] F. Athley, “Optimization of element positions for direction finding with sparse arrays,” in Proc. of IEEE Workshop on Statistical Signal Processing (SSP), vol. 1, 2001, pp. 516–519.
[27] W. Xu, A. B. Baggeroer, and C. D. Richmond, “Bayesian bounds for matched-field parameter estimation,” IEEE Transactions on Signal Processing, vol. 52, no. 12, pp. 3293–3305, Dec. 2004.
[28] A. Renaux, “Weiss-Weinstein bound for data aided carrier estimation,” IEEE Signal Processing Letters, vol. 14, no. 4, pp. 283–286, Apr. 2007.
[29] S. M. Kay, Fundamentals of Statistical Signal Processing: Estimation Theory. Upper Saddle River, NJ, USA: Prentice-Hall, Inc., Mar. 1993, vol. 1.
[30] H. L. Van Trees, Detection, Estimation and Modulation Theory: Optimum Array Processing. New York, NY, USA: John Wiley & Sons, Mar. 2002, vol. 4.
[31] Z. Ben Haim and Y. Eldar, “A comment on the Weiss-Weinstein bound for constrained parameter sets,” IEEE Transactions on Information Theory, vol. 54, no. 10, pp. 4682–4684, Oct. 2008.
[32] P. Stoica and A. Nehorai, “Performance study of conditional and unconditional direction-of-arrival estimation,” IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 38, no. 10, pp. 1783–1795, Oct. 1990.
[33] K. L. Bell, Y. Ephraim, and H. L. Van Trees, “Explicit Ziv-Zakaï lower bounds for bearing estimation using planar arrays,” in Proc. of Workshop on Adaptive Sensor Array Processing (ASAP). Lexington, MA, USA: MIT Lincoln Laboratory, Mar. 1996.
[34] I. Reuven and H. Messer, “The use of the Barankin bound for determining the threshold SNR in estimating the bearing of a source in the presence of another,” in Proc. of IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), vol. 3, Detroit, MI, USA, May 1995, pp. 1645–1648.
[35] J. Li and R. T. Compton, “Maximum likelihood angle estimation for signals with known waveforms,” IEEE Transactions on Signal Processing, vol. 41, no. 9, pp. 2850–2862, Sep. 1993.
[36] M. Cedervall and R. L. Moses, “Efficient maximum likelihood DOA estimation for signals with known waveforms in presence of multipath,” IEEE Transactions on Signal Processing, vol. 45, no. 3, pp. 808–811, Mar. 1997.
[37] J. Li, B. Halder, P. Stoica, and M. Viberg, “Computationally efficient angle estimation for signals with known waveforms,” IEEE Transactions on Signal Processing, vol. 43, no. 9, pp. 2154–2163, Sep. 1995.
[38] A. Leshem and A.-J. Van der Veen, “Direction-of-arrival estimation for constant modulus signals,” IEEE Transactions on Signal Processing, vol. 47, no. 11, pp. 3125–3129, Nov. 1999.
[39] Y. H. Choi, “Unified approach to Cramér-Rao bounds in direction estimation with known signal structures,” ELSEVIER Signal Processing, vol. 84, no. 10, pp. 1875–1882, Oct. 2004.
[40] E. Weinstein and A. J. Weiss, “A general class of lower bounds in parameter estimation,” IEEE Transactions on Information Theory, vol. 34, no. 2, pp. 338–342, Mar. 1988.
[41] P. S. La Rosa, A. Renaux, A. Nehorai, and C. H. Muravchik, “Barankin-type lower bound on multiple change-point estimation,” IEEE Transactions on Signal Processing, vol. 58, no. 11, pp. 5534–5549, Nov. 2010.
[42] H. L. Van Trees, Detection, Estimation and Modulation Theory: Radar-Sonar Signal Processing and Gaussian Signals in Noise. New York, NY, USA: John Wiley & Sons, Sep. 2001, vol. 3.
[43] K. L. Bell, “Performance bounds in parameter estimation with application to bearing estimation,” Ph.D. dissertation, George Mason University, Fairfax, VA, USA, 1995.
[44] W. Xu, A. B. Baggeroer, and K. L. Bell, “A bound on mean-square estimation error with background parameter mismatch,” IEEE Transactions on Information Theory, vol. 50, no. 4, pp. 621–632, Apr. 2004.
[45] J. Tabrikian and J. L. Krolik, “Barankin bounds for source localization in an uncertain ocean environment,” IEEE Transactions on Signal Processing, vol. 47, no. 11, pp. 2917–2927, Nov. 1999.
[46] A. Renaux, L. N. Atallah, P. Forster, and P. Larzabal, “A useful form of the Abel bound and its application to estimator threshold prediction,” IEEE Transactions on Signal Processing, vol. 55, no. 5, Part 2, pp. 2365–2369, May 2007.
[47] H. Gazzah and S. Marcos, “Cramér-Rao bounds for antenna array design,” IEEE Transactions on Signal Processing, vol. 54, no. 1, pp. 336–345, Jan. 2006.