Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Bhaskar D. Rao is active.

Publication


Featured research published by Bhaskar D. Rao.


IEEE Transactions on Signal Processing | 1997

Sparse signal reconstruction from limited data using FOCUSS: a re-weighted minimum norm algorithm

Irina Gorodnitsky; Bhaskar D. Rao

We present a nonparametric algorithm for finding localized energy solutions from limited data. The problem we address is underdetermined, and no prior knowledge of the shape of the region on which the solution is nonzero is assumed. Termed the FOcal Underdetermined System Solver (FOCUSS), the algorithm has two integral parts: a low-resolution initial estimate of the real signal and the iteration process that refines the initial estimate to the final localized energy solution. The iterations are based on weighted norm minimization of the dependent variable with the weights being a function of the preceding iterative solutions. The algorithm is presented as a general estimation tool usable across different applications. A detailed analysis laying the theoretical foundation for the algorithm is given and includes proofs of global and local convergence and a derivation of the rate of convergence. A view of the algorithm as a novel optimization method which combines desirable characteristics of both classical optimization and learning-based algorithms is provided. Mathematical results on conditions for uniqueness of sparse solutions are also given. Applications of the algorithm are illustrated on problems in direction-of-arrival (DOA) estimation and neuromagnetic imaging.
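
To make the re-weighted minimum-norm iteration concrete, here is a minimal NumPy sketch of a FOCUSS-style loop, assuming a real-valued dictionary A and measurement vector y. It is an illustrative simplification, not the authors' reference implementation, and the iteration count and tolerance are arbitrary choices.

```python
import numpy as np

def focuss(A, y, n_iter=30, tol=1e-8):
    """Minimal sketch of a FOCUSS-style re-weighted minimum-norm iteration.

    Starts from the minimum 2-norm solution and repeatedly re-solves a
    weighted minimum-norm problem, with weights taken from the previous
    iterate, so that small entries are progressively driven toward zero.
    """
    x = np.linalg.pinv(A) @ y          # low-resolution initial estimate
    for _ in range(n_iter):
        W = np.diag(np.abs(x))         # weights from the preceding solution
        q = np.linalg.pinv(A @ W) @ y  # weighted minimum-norm step
        x_new = W @ q
        if np.linalg.norm(x_new - x) < tol * (np.linalg.norm(x) + 1e-12):
            x = x_new
            break
        x = x_new
    return x
```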


IEEE Journal on Selected Areas in Communications | 2008

An overview of limited feedback in wireless communication systems

David J. Love; Robert W. Heath; Vincent Kin Nang Lau; David Gesbert; Bhaskar D. Rao; Matthew Andrews

It is now well known that employing channel adaptive signaling in wireless communication systems can yield large improvements in almost any performance metric. Unfortunately, many kinds of channel adaptive techniques have been deemed impractical in the past because of the problem of obtaining channel knowledge at the transmitter. The transmitter in many systems (such as those using frequency division duplexing) cannot leverage techniques such as training to obtain channel state information. Over the last few years, research has repeatedly shown that allowing the receiver to send a small number of information bits about the channel conditions to the transmitter can allow near optimal channel adaptation. These practical systems, which are commonly referred to as limited or finite-rate feedback systems, supply benefits nearly identical to unrealizable perfect transmitter channel knowledge systems when they are judiciously designed. In this tutorial, we provide a broad look at the field of limited feedback wireless communications. We review work in systems using various combinations of single antenna, multiple antenna, narrowband, broadband, single-user, and multiuser technology. We also provide a synopsis of the role of limited feedback in the standardization of next generation wireless systems.
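
As a concrete illustration of the feedback mechanism, the sketch below quantizes the receiver's channel against a shared codebook and returns only the codeword index to the transmitter. The antenna count, number of feedback bits, and the randomly generated (RVQ-style) codebook are illustrative assumptions, not parameters taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 4 transmit antennas, B = 6 feedback bits.
n_tx, B = 4, 6

# Shared codebook of unit-norm beamforming vectors (random vector
# quantization, one simple codebook choice in this literature).
codebook = rng.standard_normal((2**B, n_tx)) + 1j * rng.standard_normal((2**B, n_tx))
codebook /= np.linalg.norm(codebook, axis=1, keepdims=True)

# Channel observed at the receiver (illustrative i.i.d. fading).
h = rng.standard_normal(n_tx) + 1j * rng.standard_normal(n_tx)

# Receiver: pick the codeword best aligned with the channel and feed back
# only its B-bit index.
idx = int(np.argmax(np.abs(codebook.conj() @ h)))

# Transmitter: beamform along the fed-back codeword.
w = codebook[idx]
effective_gain = np.abs(w.conj() @ h) ** 2 / np.linalg.norm(h) ** 2
```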


IEEE Transactions on Signal Processing | 2005

Sparse solutions to linear inverse problems with multiple measurement vectors

Shane F. Cotter; Bhaskar D. Rao; Kjersti Engan; Kenneth Kreutz-Delgado

We address the problem of finding sparse solutions to an underdetermined system of equations when there are multiple measurement vectors having the same, but unknown, sparsity structure. The single measurement sparse solution problem has been extensively studied in the past. Although known to be NP-hard, many single-measurement suboptimal algorithms have been formulated that have found utility in many different applications. Here, we consider in depth the extension of two classes of algorithms, Matching Pursuit (MP) and FOCal Underdetermined System Solver (FOCUSS), to the multiple measurement case so that they may be used in applications such as neuromagnetic imaging, where multiple measurement vectors are available, and solutions with a common sparsity structure must be computed. Cost functions appropriate to the multiple measurement problem are developed, and algorithms are derived based on their minimization. A simulation study is conducted on a test-case dictionary to show how the utilization of more than one measurement vector improves the performance of the MP and FOCUSS classes of algorithms, and their performance is compared.
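
The sketch below shows a greedy multiple-measurement-vector solver in the spirit of the matching-pursuit extension described above; it uses an orthogonalized selection step and is meant only to illustrate how a common support is enforced across measurement vectors, not to reproduce the paper's exact algorithms.

```python
import numpy as np

def simultaneous_omp(A, Y, k):
    """Minimal sketch of a greedy MMV solver with a shared support.

    At each step, pick the dictionary column whose correlations with the
    residuals, summed over all measurement vectors (columns of Y), are
    largest, so every measurement vector ends up using the same k atoms.
    """
    m, n = A.shape
    residual = Y.copy()
    support = []
    for _ in range(k):
        scores = np.sum(np.abs(A.T @ residual), axis=1)  # aggregate over signals
        scores[support] = -np.inf                        # do not reselect atoms
        support.append(int(np.argmax(scores)))
        X_s, *_ = np.linalg.lstsq(A[:, support], Y, rcond=None)
        residual = Y - A[:, support] @ X_s
    X = np.zeros((n, Y.shape[1]))
    X[support] = X_s
    return X, support
```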


IEEE Transactions on Signal Processing | 2004

Sparse Bayesian learning for basis selection

David P. Wipf; Bhaskar D. Rao

Sparse Bayesian learning (SBL) and specifically relevance vector machines have received much attention in the machine learning literature as a means of achieving parsimonious representations in the context of regression and classification. The methodology relies on a parameterized prior that encourages models with few nonzero weights. In this paper, we adapt SBL to the signal processing problem of basis selection from overcomplete dictionaries, proving several results about the SBL cost function that elucidate its general behavior and provide solid theoretical justification for this application. Specifically, we have shown that SBL retains a desirable property of the ℓ0-norm diversity measure (i.e., the global minimum is achieved at the maximally sparse solution) while often possessing a more limited constellation of local minima. We have also demonstrated that the local minima that do exist are achieved at sparse solutions. Later, we provide a novel interpretation of SBL that gives us valuable insight into why it is successful in producing sparse representations. Finally, we include simulation studies comparing sparse Bayesian learning with basis pursuit and the more recent FOCal Underdetermined System Solver (FOCUSS) class of basis selection algorithms. These results indicate that our theoretical insights translate directly into improved performance.
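
The following is a minimal sketch of an EM-style SBL loop for basis selection, assuming a known, fixed noise variance (sigma2) purely for illustration. Sparsity arises because the hyperparameter updates drive most gamma_i toward zero, pruning the corresponding dictionary columns.

```python
import numpy as np

def sbl_em(Phi, y, sigma2=1e-3, n_iter=100):
    """Minimal sketch of sparse Bayesian learning for basis selection.

    Each dictionary column gets a hyperparameter gamma_i controlling its
    prior variance; the EM updates shrink most gamma_i toward zero, which
    prunes those basis vectors and leaves a sparse representation.
    """
    m, n = Phi.shape
    gamma = np.ones(n)
    for _ in range(n_iter):
        G = np.diag(gamma)
        C = sigma2 * np.eye(m) + Phi @ G @ Phi.T   # marginal covariance of y
        K = G @ Phi.T @ np.linalg.inv(C)
        mu = K @ y                                 # posterior mean of the weights
        Sigma = G - K @ Phi @ G                    # posterior covariance
        gamma = mu**2 + np.diag(Sigma)             # EM hyperparameter update
    return mu, gamma
```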


Neural Computation | 2003

Dictionary learning algorithms for sparse representation

Kenneth Kreutz-Delgado; Joseph F. Murray; Bhaskar D. Rao; Kjersti Engan; Te-Won Lee; Terrence J. Sejnowski

Algorithms for data-driven learning of domain-specific overcomplete dictionaries are developed to obtain maximum likelihood and maximum a posteriori dictionary estimates based on the use of Bayesian models with concave/Schur-concave (CSC) negative log priors. Such priors are appropriate for obtaining sparse representations of environmental signals within an appropriately chosen (environmentally matched) dictionary. The elements of the dictionary can be interpreted as concepts, features, or words capable of succinct expression of events encountered in the environment (the source of the measured signals). This is a generalization of vector quantization in that one is interested in a description involving a few dictionary entries (the proverbial 25 words or less), but not necessarily as succinct as one entry. To learn an environmentally adapted dictionary capable of concise expression of signals generated by the environment, we develop algorithms that iterate between a representative set of sparse representations found by variants of FOCUSS and an update of the dictionary using these sparse representations. Experiments were performed using synthetic data and natural images. For complete dictionaries, we demonstrate that our algorithms have improved performance over other independent component analysis (ICA) methods, measured in terms of signal-to-noise ratios of separated sources. In the overcomplete case, we show that the true underlying dictionary and sparse sources can be accurately recovered. In tests with natural images, learned overcomplete dictionaries are shown to have higher coding efficiency than complete dictionaries; that is, images encoded with an overcomplete dictionary have both higher compression (fewer bits per pixel) and higher accuracy (lower mean square error).
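
The sketch below illustrates the alternation the paper builds on, but with deliberately simplified ingredients: a greedy coder stands in for the FOCUSS-based sparse-coding step, and a plain least-squares (MOD-style) refit stands in for the ML/MAP dictionary updates. The sparsity level k and atom count are illustrative parameters.

```python
import numpy as np

def omp(D, y, k):
    """Greedy sparse coder used here as a stand-in for the FOCUSS-based
    coding step described in the paper (illustrative simplification)."""
    support, residual = [], y.copy()
    for _ in range(k):
        scores = np.abs(D.T @ residual)
        scores[support] = -np.inf
        support.append(int(np.argmax(scores)))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

def learn_dictionary(Y, n_atoms, k=3, n_iter=20, seed=0):
    """Minimal sketch of alternating dictionary learning: sparse-code every
    training signal against the current dictionary, then refit the
    dictionary by least squares and renormalize its columns."""
    rng = np.random.default_rng(seed)
    m, n_signals = Y.shape
    D = rng.standard_normal((m, n_atoms))
    D /= np.linalg.norm(D, axis=0)
    for _ in range(n_iter):
        X = np.column_stack([omp(D, Y[:, i], k) for i in range(n_signals)])
        D = Y @ np.linalg.pinv(X)                  # least-squares dictionary refit
        D /= np.linalg.norm(D, axis=0) + 1e-12     # keep atoms unit norm
    return D
```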


IEEE Transactions on Communications | 2002

Sparse channel estimation via matching pursuit with application to equalization

Shane F. Cotter; Bhaskar D. Rao

Channels with a sparse impulse response arise in a number of communication applications. Exploiting the sparsity of the channel, we show how an estimate of the channel may be obtained using a matching pursuit (MP) algorithm. This estimate is compared to thresholded variants of the least squares (LS) channel estimate. Among these sparse channel estimates, the MP estimate is computationally much simpler to implement and a shorter training sequence is required to form an accurate channel estimate leading to greater information throughput.
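
A minimal matching-pursuit sketch for sparse channel estimation is shown below; the construction of the training convolution matrix A from the transmitted training sequence is assumed, and n_taps is an illustrative bound on the number of nonzero taps.

```python
import numpy as np

def mp_channel_estimate(A, y, n_taps):
    """Minimal sketch of sparse channel estimation via matching pursuit.

    A is the training convolution matrix (column j corresponds to channel
    tap j) and y is the received training signal. At each step the tap with
    the largest normalized correlation to the residual is refined.
    """
    h = np.zeros(A.shape[1])
    residual = y.astype(float).copy()
    col_energy = np.sum(A**2, axis=0)
    for _ in range(n_taps):
        corr = A.T @ residual
        j = int(np.argmax(np.abs(corr) / np.sqrt(col_energy)))
        step = corr[j] / col_energy[j]   # best coefficient for column j
        h[j] += step                     # plain MP may revisit the same tap
        residual -= step * A[:, j]
    return h
```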


Electroencephalography and Clinical Neurophysiology | 1995

Neuromagnetic source imaging with FOCUSS: a recursive weighted minimum norm algorithm

Irina Gorodnitsky; John S. George; Bhaskar D. Rao

The paper describes a new algorithm for tomographic source reconstruction in neural electromagnetic inverse problems. Termed FOCUSS (FOCal Underdetermined System Solution), this algorithm combines the desired features of the two major approaches to electromagnetic inverse procedures. Like multiple current dipole modeling methods, FOCUSS produces high resolution solutions appropriate for the highly localized sources often encountered in electromagnetic imaging. Like linear estimation methods, FOCUSS allows current sources to assume arbitrary shapes and it preserves the generality and ease of application characteristic of this group of methods. It stands apart from standard signal processing techniques because, as an initialization-dependent algorithm, it accommodates the non-unique set of feasible solutions that arise from the neuroelectric source constraints. FOCUSS is based on recursive, weighted norm minimization. The consequence of the repeated weighting procedure is, in effect, to concentrate the solution in the minimal active regions that are essential for accurately reproducing the measurements. The FOCUSS algorithm is introduced and its properties are illustrated in the context of a number of simulations, first using exact measurements in 2- and 3-D problems, and then in the presence of noise and modeling errors. The results suggest that FOCUSS is a powerful algorithm with considerable utility for tomographic current estimation.


IEEE Transactions on Signal Processing | 1999

An affine scaling methodology for best basis selection

Bhaskar D. Rao; Kenneth Kreutz-Delgado

A methodology is developed to derive algorithms for optimal basis selection by minimizing diversity measures proposed by Wickerhauser (1994) and Donoho (1994). These measures include the p-norm-like ℓ(p≤1) diversity measures and the Gaussian and Shannon entropies. The algorithm development methodology uses a factored representation for the gradient and involves successive relaxation of the Lagrangian necessary condition. This yields algorithms that are intimately related to the affine scaling transformation (AST) based methods commonly employed by the interior point approach to nonlinear optimization. The algorithms minimizing the ℓ(p≤1) diversity measures are equivalent to a previously developed class of algorithms called focal underdetermined system solver (FOCUSS). The general nature of the methodology provides a systematic approach for deriving this class of algorithms and a natural mechanism for extending them. It also facilitates a better understanding of the convergence behavior and a strengthening of the convergence results. The Gaussian entropy minimization algorithm is shown to be equivalent to a well-behaved p=0 norm-like optimization algorithm. Computer experiments demonstrate that the p-norm-like and the Gaussian entropy algorithms perform well, converging to sparse solutions. The Shannon entropy algorithm produces solutions that are concentrated but are shown to not converge to a fully sparse solution.
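
For reference, the re-weighted iteration underlying this family of algorithms can be summarized compactly; the block below is a condensed sketch in FOCUSS-style notation, not a formula quoted verbatim from the paper.

```latex
% Sketch of the re-weighted (affine-scaling-related) iteration for the
% \ell_{p \le 1} diversity measures; notation is illustrative.
\[
  W_k = \operatorname{diag}\!\left( \bigl| x_i^{(k)} \bigr|^{\,1 - p/2} \right),
  \qquad
  x^{(k+1)} = W_k \, (A W_k)^{+} \, y .
\]
% Setting p = 2 recovers the unweighted minimum 2-norm solution, while
% p -> 0 corresponds to the Gaussian-entropy (p = 0 norm-like) case that
% most strongly favors sparse solutions.
```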


International Conference on Pattern Recognition | 1994

Towards robust automatic traffic scene analysis in real-time

Daphne Koller; Joseph Weber; Timothy Huang; Jitendra Malik; Gary H. Ogasawara; Bhaskar D. Rao; Stuart J. Russell

Automatic symbolic traffic scene analysis is essential to many areas of IVHS (Intelligent Vehicle Highway Systems). Traffic scene information can be used to optimize traffic flow during busy periods, identify stalled vehicles and accidents, and aid the decision-making of an autonomous vehicle controller. Improvements in technologies for machine vision-based surveillance and high-level symbolic reasoning have enabled the authors to develop a system for detailed, reliable traffic scene analysis. The machine vision component of the system employs a contour tracker and an affine motion model based on Kalman filters to extract vehicle trajectories over a sequence of traffic scene images. The symbolic reasoning component uses a dynamic belief network to make inferences about traffic events such as vehicle lane changes and stalls. In this paper, the authors discuss the key tasks of the vision and reasoning components as well as their integration into a working prototype. Preliminary results of an implementation on special purpose hardware using C-40 Digital Signal Processors show that near real-time performance can be achieved without further improvements.
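
The tracking step at the heart of the vision component can be illustrated with a bare-bones Kalman filter; the constant-velocity, single-coordinate model below is a deliberate simplification of the paper's affine contour model, and the frame interval and noise levels are assumed values.

```python
import numpy as np

# Illustrative constant-velocity model for one tracked coordinate.
dt = 1.0 / 30.0                              # assumed frame interval (s)
F = np.array([[1.0, dt], [0.0, 1.0]])        # state transition: position, velocity
H = np.array([[1.0, 0.0]])                   # only position is measured
Q = 1e-2 * np.eye(2)                         # process noise (assumed)
R = np.array([[1e-1]])                       # measurement noise (assumed)

def kalman_step(x, P, z):
    """One predict/update cycle given the previous state x (length 2),
    covariance P (2x2), and a new measurement z (length-1 array holding
    the observed position, e.g., a tracked contour point)."""
    # Predict.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update.
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new
```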


IEEE Transactions on Signal Processing | 2007

An Empirical Bayesian Strategy for Solving the Simultaneous Sparse Approximation Problem

David P. Wipf; Bhaskar D. Rao

Given a large overcomplete dictionary of basis vectors, the goal is to simultaneously represent L>1 signal vectors using coefficient expansions marked by a common sparsity profile. This generalizes the standard sparse representation problem to the case where multiple responses exist that were putatively generated by the same small subset of features. Ideally, the associated sparse generating weights should be recovered, which can have physical significance in many applications (e.g., source localization). The generic solution to this problem is intractable and, therefore, approximate procedures are sought. Based on the concept of automatic relevance determination, this paper uses an empirical Bayesian prior to estimate a convenient posterior distribution over candidate basis vectors. This particular approximation enforces a common sparsity profile and consistently places its prominent posterior mass on the appropriate region of weight-space necessary for simultaneous sparse recovery. The resultant algorithm is then compared with multiple response extensions of matching pursuit, basis pursuit, FOCUSS, and Jeffreys prior-based Bayesian methods, finding that it often outperforms the others. Additional motivation for this particular choice of cost function is also provided, including the analysis of global and local minima and a variational derivation that highlights the similarities and differences between the proposed algorithm and previous approaches.
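
The sketch below mirrors the single-measurement SBL sketch shown earlier but ties all L measurement vectors to one shared set of hyperparameters, which is what enforces the common sparsity profile; the fixed noise variance is again an illustrative assumption rather than a value from the paper.

```python
import numpy as np

def msbl_em(Phi, Y, sigma2=1e-3, n_iter=100):
    """Minimal sketch of an empirical Bayesian (ARD-style) solver for the
    simultaneous sparse approximation problem: the L columns of Y share one
    vector of hyperparameters gamma, so the recovered coefficient matrix
    has a common row-sparsity profile."""
    m, n = Phi.shape
    L = Y.shape[1]
    gamma = np.ones(n)
    for _ in range(n_iter):
        G = np.diag(gamma)
        C = sigma2 * np.eye(m) + Phi @ G @ Phi.T
        K = G @ Phi.T @ np.linalg.inv(C)
        M = K @ Y                                   # posterior means, one column per signal
        Sigma = G - K @ Phi @ G                     # shared posterior covariance
        gamma = np.sum(M**2, axis=1) / L + np.diag(Sigma)
    return M, gamma
```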

Collaboration


Dive into Bhaskar D. Rao's collaborations.

Top Co-Authors

Yichao Huang, University of California
Chandra R. Murthy, Indian Institute of Science
Jun Zheng, University of California
Ritwik Giri, University of California
David P. Wipf, University of California
Ethan R. Duni, University of California