Prabhu Babu
Indian Institute of Technology Delhi
Publications
Featured research published by Prabhu Babu.
IEEE Transactions on Signal Processing | 2011
Petre Stoica; Prabhu Babu; Jian Li
This paper presents a novel SParse Iterative Covariance-based Estimation approach, abbreviated as SPICE, to array processing. The proposed approach is obtained by the minimization of a covariance matrix fitting criterion and is particularly useful in many-snapshot cases but can be used even in single-snapshot situations. SPICE has several unique features not shared by other sparse estimation methods: it has a simple and sound statistical foundation, it takes account of the noise in the data in a natural manner, it does not require the user to make any difficult selection of hyperparameters, and yet it has global convergence properties.
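At its core, SPICE fits a sparsely parametrized covariance model to the sample covariance. The sketch below (not the paper's algorithm, just the model it minimizes over) builds the parametrization R(p) = A diag(p) Aᴴ + σ²I and evaluates one common form of the weighted covariance-fitting criterion; the uniform-linear-array geometry, grid, and function names are illustrative assumptions.

```python
import numpy as np

def steering_matrix(m, thetas):
    """Steering vectors for an m-element ULA with half-wavelength
    spacing (illustrative array-geometry assumption)."""
    k = np.arange(m)[:, None]
    return np.exp(1j * np.pi * k * np.sin(thetas)[None, :])

def covariance_model(A, p, sigma2):
    """Sparse covariance parametrization R(p) = A diag(p) A^H + sigma^2 I."""
    return (A * p) @ A.conj().T + sigma2 * np.eye(A.shape[0])

def inv_sqrt(M):
    """Inverse matrix square root of a Hermitian positive definite matrix."""
    w, V = np.linalg.eigh(M)
    return V @ np.diag(1.0 / np.sqrt(w)) @ V.conj().T

def spice_fit(R_hat, R):
    """One common form of the covariance-fitting criterion:
    || R^{-1/2} (R_hat - R) R_hat^{-1/2} ||_F^2."""
    E = inv_sqrt(R) @ (R_hat - R) @ inv_sqrt(R_hat)
    return float(np.linalg.norm(E, "fro") ** 2)
```

The criterion is zero exactly when the model covariance matches the (here idealized, infinite-snapshot) sample covariance, and SPICE's iterations drive the grid powers p toward a sparse minimizer of this fit.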
IEEE Transactions on Signal Processing | 2011
Petre Stoica; Prabhu Babu; Jian Li
Separable models occur frequently in spectral analysis, array processing, radar imaging and astronomy applications. Statistical inference methods for these models can be categorized in three large classes: parametric, nonparametric (also called “dense”) and semiparametric (also called “sparse”). We begin by discussing the advantages and disadvantages of each class. Then we go on to introduce a new semiparametric/sparse method called SPICE (a semiparametric/sparse iterative covariance-based estimation method). SPICE is computationally quite efficient, enjoys global convergence properties, can be readily used in the case of replicated measurements and, unlike most other sparse estimation methods, does not require any subtle choices of user parameters. We illustrate the statistical performance of SPICE by means of a line-spectrum estimation study for irregularly sampled data.
Digital Signal Processing | 2010
Prabhu Babu; Petre Stoica
In this paper, we present a comprehensive review of methods for spectral analysis of nonuniformly sampled data. For a given finite set of nonuniformly sampled data, reasonable ways to choose the Nyquist frequency and the resampling time are discussed. The various existing methods for spectral analysis of nonuniform data are grouped and described under four broad categories: methods based on least squares; methods based on interpolation techniques; methods based on slotted resampling; and methods based on continuous-time models. The performance of the methods under each category is evaluated on simulated data sets. The methods are then classified according to their ability to handle different types of spectra, signal models and sampling patterns. Finally, the performance of the different methods is evaluated on two real-life nonuniform data sets. Apart from the spectral analysis methods, methods for exact signal reconstruction from nonuniform data are also reviewed.
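The least-squares category mentioned above can be illustrated with a minimal sketch: at each trial frequency, fit a sinusoid to the irregularly spaced samples by linear least squares and report the fitted power (this is the classic least-squares periodogram; the function name and grid are illustrative, not from the paper).

```python
import numpy as np

def ls_periodogram(t, y, freqs):
    """Least-squares periodogram for nonuniformly sampled data:
    for each trial frequency f, fit y ~ a*cos(2*pi*f*t) + b*sin(2*pi*f*t)
    by linear least squares and return a^2 + b^2 as the power."""
    power = np.empty(len(freqs))
    for i, f in enumerate(freqs):
        H = np.column_stack([np.cos(2 * np.pi * f * t),
                             np.sin(2 * np.pi * f * t)])
        coef, *_ = np.linalg.lstsq(H, y, rcond=None)
        power[i] = float(coef @ coef)  # squared amplitude of the fitted sinusoid
    return power
```

Because the fit is done directly at the irregular sample times, no resampling or interpolation is needed, which is the main appeal of this class of methods.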
Signal Processing | 2012
Petre Stoica; Prabhu Babu
SPICE (SParse Iterative Covariance-based Estimation) is a recently introduced method for sparse-parameter estimation in linear models using a robust covariance fitting criterion that does not depend on any hyperparameters. In this paper we revisit the derivation of SPICE to streamline it and to provide further insights into this method. LIKES (LIKelihood-based Estimation of Sparse parameters) is a new method obtained in a hyperparameter-free manner from the maximum-likelihood principle applied to the same estimation problem as considered by SPICE. Both SPICE and LIKES are shown to provide accurate parameter estimates even from scarce data samples, with LIKES being more accurate than SPICE at the cost of an increased computational burden.
IEEE Transactions on Signal Processing | 2015
Junxiao Song; Prabhu Babu; Daniel Pérez Palomar
Unimodular sequences with low autocorrelation are desired in many applications, especially in radar systems and code-division multiple access (CDMA) communication systems. In this paper, we propose a new algorithm to design unimodular sequences with low autocorrelation by directly minimizing the integrated sidelobe level (ISL) of the autocorrelation. The algorithm is derived within the general framework of majorization-minimization (MM) algorithms and thus inherits the monotonic property of such methods; two acceleration schemes are considered to speed up the overall convergence. In addition, the proposed algorithm can be implemented via fast Fourier transform (FFT) operations and thus is computationally efficient. Furthermore, after some modifications the algorithm can be adapted to incorporate spectral constraints, which makes the design more flexible. Numerical experiments show that the proposed algorithms outperform existing ones in terms of both the merit factors of the designed sequences and the computational complexity.
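The ISL objective itself, and the FFT trick that makes it cheap to evaluate, can be sketched as follows (this is only the criterion being minimized, not the paper's MM algorithm; function names are illustrative):

```python
import numpy as np

def autocorr(x):
    """Aperiodic autocorrelation r_k = sum_n x[n] conj(x[n-k]) for
    k = 0..N-1, computed via a zero-padded FFT of length 2N."""
    n = len(x)
    X = np.fft.fft(x, 2 * n)
    return np.fft.ifft(np.abs(X) ** 2)[:n]  # r[0] = in-phase term, r[1:] = sidelobes

def isl(x):
    """Integrated sidelobe level: sum of |r_k|^2 over nonzero lags k > 0."""
    r = autocorr(x)
    return float(np.sum(np.abs(r[1:]) ** 2))
```

For the length-13 Barker code, whose sidelobes all have magnitude 0 or 1, the one-sided ISL evaluates to 6, matching its well-known merit factor of 169/12 ≈ 14.08.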
IEEE Transactions on Signal Processing | 2017
Ying Sun; Prabhu Babu; Daniel Pérez Palomar
This paper gives an overview of the majorization-minimization (MM) algorithmic framework, which can provide guidance in deriving problem-driven algorithms with low computational cost. A general introduction of MM is presented, including a description of the basic principle and its convergence results. The extensions, acceleration schemes, and connections to other algorithmic frameworks are also covered. To bridge the gap between theory and practice, upper bounds for a large number of basic functions, derived from Taylor expansions, convexity, and special inequalities, are provided as ingredients for constructing surrogate functions. With these prerequisites established, the application of MM to specific problems is illustrated through a wide range of examples in signal processing, communications, and machine learning.
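The MM principle can be demonstrated on a toy problem (not from the paper): minimizing f(x) = Σᵢ |x − aᵢ|, whose minimizer is the median. Majorizing each |x − aᵢ| at the current iterate by a quadratic turns every MM step into a closed-form weighted least-squares update, and the objective decreases monotonically, which is the hallmark of the framework.

```python
import numpy as np

def mm_median(a, iters=50):
    """Toy MM iteration for min_x sum_i |x - a_i|: majorize |x - a_i|
    at x_t by (x - a_i)^2 / (2|x_t - a_i|) + |x_t - a_i| / 2, so each
    step minimizes a quadratic surrogate in closed form."""
    a = np.asarray(a, dtype=float)
    x = float(a.mean())                            # any starting point works
    obj = [float(np.abs(x - a).sum())]             # track monotone descent
    for _ in range(iters):
        w = 1.0 / np.maximum(np.abs(x - a), 1e-12)  # surrogate weights (clamped)
        x = float((w * a).sum() / w.sum())          # minimizer of the surrogate
        obj.append(float(np.abs(x - a).sum()))
    return x, obj
```

Each surrogate touches f at the current iterate and lies above it everywhere, so f(x_{t+1}) ≤ f(x_t) is guaranteed by construction, exactly the monotonic property the overview describes.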
IEEE Transactions on Signal Processing | 2016
Junxiao Song; Prabhu Babu; Daniel Pérez Palomar
Sequences with low aperiodic autocorrelation sidelobes are well known to have extensive applications in active sensing and communication systems. In this paper, we first consider the problem of minimizing the weighted integrated sidelobe level (WISL), which can be used to design sequences with impulse-like autocorrelation and a zero (or low) correlation zone. Two algorithms based on the general majorization-minimization method are developed to tackle the WISL minimization problem with guaranteed convergence to a stationary point. The proposed methods are then extended to optimize the lp-norm of the autocorrelation sidelobes, which leads to a way of minimizing the peak sidelobe level (PSL) criterion. All the proposed algorithms can be implemented via the fast Fourier transform (FFT) and thus are computationally efficient. An acceleration scheme is considered to further speed up the algorithms. Numerical experiments show that the proposed algorithms can efficiently generate sequences with virtually zero autocorrelation sidelobes in a specified lag interval and can also produce very long sequences with much smaller PSL than some well-known analytical sequences.
IEEE Transactions on Signal Processing | 2014
Ying Sun; Prabhu Babu; Daniel Pérez Palomar
This paper considers the regularized Tyler's scatter estimator for elliptical distributions, which has received considerable attention recently. Various types of shrinkage Tyler's estimators have been proposed in the literature and proven to work effectively in the “large p, small n” scenario. Nevertheless, the existence and uniqueness properties of the estimators have not been thoroughly studied, and in certain cases the algorithms may fail to converge. In this work, we provide a general result giving a sufficient condition for the existence of a family of shrinkage Tyler's estimators, which quantitatively shows that regularization indeed reduces the number of samples required for estimation, and we establish the convergence of the algorithms for these estimators. For two specific shrinkage Tyler's estimators, we also prove that the condition is necessary and that the estimator is unique. Finally, we show that the two estimators are actually equivalent. Numerical algorithms are derived based on the majorization-minimization framework, under which their convergence is analyzed systematically.
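One common shrinkage-Tyler fixed-point iteration (an illustrative variant, not necessarily the exact estimators analyzed in the paper) alternates a data-weighted scatter update with a diagonal-loading term and a trace normalization:

```python
import numpy as np

def shrinkage_tyler(X, alpha, iters=100, tol=1e-8):
    """Illustrative regularized Tyler fixed point for n samples (rows of X)
    in dimension p:
        Sigma <- (1 - alpha) * (p/n) * sum_i x_i x_i^T / (x_i^T Sigma^{-1} x_i)
                 + alpha * I,
    followed by the trace normalization tr(Sigma) = p."""
    n, p = X.shape
    S = np.eye(p)
    for _ in range(iters):
        Sinv = np.linalg.inv(S)
        q = np.einsum("ij,jk,ik->i", X, Sinv, X)  # x_i^T Sigma^{-1} x_i
        M = (X.T * (1.0 / q)) @ X                 # sum_i x_i x_i^T / q_i
        S_new = (1.0 - alpha) * (p / n) * M + alpha * np.eye(p)
        S_new *= p / np.trace(S_new)              # fix the scale ambiguity
        if np.linalg.norm(S_new - S, "fro") < tol:
            return S_new
        S = S_new
    return S
```

Because each sample is normalized by its own Mahalanobis distance, the iteration is insensitive to heavy tails, and the α·I term keeps the update well defined even when n is not much larger than p.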
IEEE Transactions on Signal Processing | 2016
Junxiao Song; Prabhu Babu; Daniel Pérez Palomar
Sets of sequences with good correlation properties are desired in many active sensing and communication systems, e.g., multiple-input-multiple-output (MIMO) radar systems and code-division multiple-access (CDMA) cellular systems. In this paper, we consider the problems of designing complementary sets of sequences (CSS) and also sequence sets with both good auto- and cross-correlation properties. Algorithms based on the general majorization-minimization method are developed to tackle the optimization problems arising from the sequence set design problems. All the proposed algorithms can be implemented by means of the fast Fourier transform (FFT) and thus are computationally efficient and capable of designing sets of very long sequences. A number of numerical examples are provided to demonstrate the performance of the proposed algorithms.
IEEE Transactions on Signal Processing | 2012
Petre Stoica; Prabhu Babu
The Bayesian Information Criterion (BIC) is often presented in a form that is only valid in large samples and under a certain condition on the rate at which the Fisher Information Matrix (FIM) increases with the sample length. This form has been improperly used previously in situations in which the conditions mentioned above do not hold. In this correspondence, we describe the proper forms of BIC in several practically relevant cases that do not satisfy the above assumptions. In particular, we present a new form of BIC for high signal-to-noise ratio (SNR) cases. The conclusion of this study is that BIC remains one of the most successful existing rules for model order selection, if properly used.
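The standard large-sample form the correspondence refers to is BIC = −2 ln L + k ln n; for a model with Gaussian residuals this reduces to n ln(RSS/n) + k ln n. A minimal sketch of order selection with this form, on a polynomial-fitting example of my own construction (the paper's point being precisely that this form needs modification in, e.g., high-SNR regimes):

```python
import numpy as np

def bic_poly(t, y, order):
    """Large-sample BIC for a degree-`order` polynomial with Gaussian
    noise: BIC = n*ln(RSS/n) + k*ln(n), with k = order + 1 parameters.
    (The standard form only; see the correspondence for when it fails.)"""
    n = len(y)
    coef = np.polyfit(t, y, order)
    rss = float(np.sum((y - np.polyval(coef, t)) ** 2))
    k = order + 1
    return n * np.log(rss / n) + k * np.log(n)
```

Sweeping the candidate orders and taking the minimizer implements the usual BIC model-order selection rule.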