Publication


Featured research published by Saurabh Prasad.


IEEE Transactions on Geoscience and Remote Sensing | 2012

Locality-Preserving Dimensionality Reduction and Classification for Hyperspectral Image Analysis

Wei Li; Saurabh Prasad; James E. Fowler; Lori Mann Bruce

Hyperspectral imagery typically provides a wealth of information captured in a wide range of the electromagnetic spectrum for each pixel in the image; however, when used in statistical pattern-classification tasks, the resulting high-dimensional feature spaces often tend to result in ill-conditioned formulations. Popular dimensionality-reduction techniques such as principal component analysis, linear discriminant analysis, and their variants typically assume a Gaussian distribution. The quadratic maximum-likelihood classifier commonly employed for hyperspectral analysis also assumes single-Gaussian class-conditional distributions. Departing from this single-Gaussian assumption, a classification paradigm designed to exploit the rich statistical structure of the data is proposed. The proposed framework employs local Fisher's discriminant analysis to reduce the dimensionality of the data while preserving its multimodal structure; a subsequent Gaussian mixture model or support vector machine then provides effective classification of the reduced-dimension multimodal data. Experimental results on several different multiple-class hyperspectral-classification tasks demonstrate that the proposed approach significantly outperforms several traditional alternatives.
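As an illustration of the two-stage structure this abstract describes, here is a minimal Python sketch that pairs a supervised dimensionality-reduction step with per-class Gaussian mixture models; scikit-learn's standard LinearDiscriminantAnalysis stands in for local Fisher's discriminant analysis (which scikit-learn does not provide), and the function names and mixture sizes are illustrative only.

```python
# Minimal sketch: supervised dimensionality reduction followed by per-class
# Gaussian mixture models. Standard LDA stands in for local Fisher's
# discriminant analysis (LFDA); parameter values are placeholders.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.mixture import GaussianMixture

def fit_dr_gmm(X, y, n_mix=3):
    """Project spectra to a discriminant subspace, then fit one GMM per class."""
    dr = LinearDiscriminantAnalysis().fit(X, y)      # projects to <= n_classes - 1 dims
    Z = dr.transform(X)
    gmms = {c: GaussianMixture(n_components=n_mix).fit(Z[y == c])
            for c in np.unique(y)}
    return dr, gmms

def predict_dr_gmm(dr, gmms, X):
    """Assign each pixel to the class whose mixture gives the highest likelihood."""
    Z = dr.transform(X)
    classes = np.array(sorted(gmms))
    scores = np.column_stack([gmms[c].score_samples(Z) for c in classes])
    return classes[scores.argmax(axis=1)]
```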


International Geoscience and Remote Sensing Symposium | 2009

Decision Fusion for the Classification of Hyperspectral Data: Outcome of the 2008 GRS-S Data Fusion Contest

Giorgio Licciardi; Fabio Pacifici; Devis Tuia; Saurabh Prasad; Terrance West; Ferdinando Giacco; Christian Thiel; Jordi Inglada; Emmanuel Christophe; Jocelyn Chanussot; Paolo Gamba

The 2008 Data Fusion Contest organized by the IEEE Geoscience and Remote Sensing Society Data Fusion Technical Committee dealt with the classification of high-resolution hyperspectral data from an urban area. Unlike in previous editions of the contest, the goal was not only to identify the best algorithm but also to provide a collaborative effort: the decision fusion of the best individual algorithms aimed at further improving classification performance, and the best algorithms were ranked according to their relative contribution to the decision fusion. This paper presents the five awarded algorithms and the conclusions of the contest, stressing the importance of decision fusion, dimension reduction, and supervised classification methods, such as neural networks and support vector machines.


IEEE Signal Processing Magazine | 2014

Manifold-Learning-Based Feature Extraction for Classification of Hyperspectral Data: A Review of Advances in Manifold Learning

Dalton Lunga; Saurabh Prasad; Melba M. Crawford; Okan K. Ersoy

Advances in hyperspectral sensing provide new capability for characterizing spectral signatures in a wide range of physical and biological systems, while inspiring new methods for extracting information from these data. Hyperspectral image (HSI) data often lie on sparse, nonlinear manifolds whose geometric and topological structures can be exploited via manifold-learning techniques. In this article, we focused on demonstrating the opportunities provided by manifold learning for classification of remotely sensed data. However, limitations and opportunities remain both for research and applications. Although these methods have been demonstrated to mitigate the impact of physical effects that affect electromagnetic energy traversing the atmosphere and reflecting from a target, nonlinearities are not always exhibited in the data, particularly at lower spatial resolutions, so users should always evaluate the inherent nonlinearity in the data. Manifold learning is data driven, and as such, results are strongly dependent on the characteristics of the data, and one method will not consistently provide the best results. Nonlinear manifold-learning methods require parameter tuning, although experimental results are typically stable over a range of values, and have higher computational overhead than linear methods, which is particularly relevant for large-scale remote sensing data sets. Opportunities for advancing manifold learning also exist for analysis of hyperspectral and multisource remotely sensed data. Manifolds are assumed to be inherently smooth, an assumption that some data sets may violate, and data often contain classes whose spectra are distinctly different, resulting in multiple manifolds or submanifolds that cannot be readily integrated with a single manifold representation. Developing appropriate characterizations that exploit the unique characteristics of these submanifolds for a particular data set is an open research problem for which hierarchical manifold structures appear to have merit. To date, most work in manifold learning has focused on feature extraction from single images, assuming stationarity across the scene. Research is also needed in joint exploitation of global and local embedding methods in dynamic, multitemporal environments and integration with semisupervised and active learning.
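The review above covers a family of methods rather than a single algorithm, but a small sketch can show where manifold learning sits in a pixel-classification pipeline; scikit-learn's Isomap is used here as one representative nonlinear embedding, and the variable names and parameter values are hypothetical.

```python
# Sketch: nonlinear manifold embedding of the spectral space, then a simple
# classifier in the embedded coordinates. Isomap is one representative method;
# variable names and parameter values are hypothetical.
from sklearn.manifold import Isomap
from sklearn.neighbors import KNeighborsClassifier

def manifold_classify(X_train, y_train, X_test, n_dims=20, n_neighbors=10):
    # Learn the embedding on training spectra; Isomap supports out-of-sample
    # transformation of new pixels via its transform() method.
    embed = Isomap(n_neighbors=n_neighbors, n_components=n_dims).fit(X_train)
    clf = KNeighborsClassifier(n_neighbors=5).fit(embed.transform(X_train), y_train)
    return clf.predict(embed.transform(X_test))
```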


IEEE Transactions on Geoscience and Remote Sensing | 2014

Nearest Regularized Subspace for Hyperspectral Classification

Wei Li; Eric W. Tramel; Saurabh Prasad; James E. Fowler

A classifier that couples nearest-subspace classification with a distance-weighted Tikhonov regularization is proposed for hyperspectral imagery. The resulting nearest-regularized-subspace classifier seeks an approximation of each testing sample via a linear combination of training samples within each class. The class label is then derived according to the class which best approximates the test sample. The distance-weighted Tikhonov regularization is then modified by measuring distance within a locality-preserving lower-dimensional subspace. Furthermore, a competitive process among the classes is proposed to simplify parameter tuning. Classification results for several hyperspectral image data sets demonstrate superior performance of the proposed approach when compared to other, more traditional classification techniques.
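A hedged NumPy sketch of the nearest-regularized-subspace idea follows: each test pixel is approximated by a Tikhonov-regularized combination of the training samples of each class, with a diagonal distance-weighted biasing matrix, and the class with the smallest residual is chosen. The regularization parameter and function name are placeholders, and the locality-preserving subspace refinement mentioned in the abstract is omitted.

```python
# Sketch of nearest-regularized-subspace classification: approximate each test
# pixel by a Tikhonov-regularized combination of every class's training samples
# and pick the class with the smallest residual. lambda_reg is a placeholder.
import numpy as np

def nrs_classify(X_train, y_train, X_test, lambda_reg=0.1):
    classes = np.unique(y_train)
    labels = np.empty(len(X_test), dtype=classes.dtype)
    for i, x in enumerate(X_test):
        residuals = []
        for c in classes:
            Xc = X_train[y_train == c].T                 # (n_bands, n_c)
            dists = np.linalg.norm(Xc - x[:, None], axis=0)
            Gamma = np.diag(dists)                       # distance-weighted biasing matrix
            A = Xc.T @ Xc + lambda_reg * (Gamma.T @ Gamma)
            alpha = np.linalg.solve(A, Xc.T @ x)         # regularized approximation weights
            residuals.append(np.linalg.norm(x - Xc @ alpha))
        labels[i] = classes[int(np.argmin(residuals))]
    return labels
```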


IEEE Transactions on Geoscience and Remote Sensing | 2008

Decision Fusion With Confidence-Based Weight Assignment for Hyperspectral Target Recognition

Saurabh Prasad; Lori Mann Bruce

Conventional hyperspectral image-based automatic target recognition (ATR) systems project high-dimensional reflectance signatures onto a lower dimensional subspace using techniques such as principal components analysis (PCA), Fisher's linear discriminant analysis (LDA), and stepwise LDA. Typically, these feature space projections are suboptimal. In a typical hyperspectral ATR setup, the number of training signatures (ground truth) is often less than the dimensionality of the signatures. Standard dimensionality reduction tools such as LDA and PCA cannot be applied in such situations. In this paper, we present a divide-and-conquer approach that addresses this problem for robust ATR. We partition the hyperspectral space into contiguous subspaces based on the optimization of a performance metric. We then make local classification decisions in every subspace using a multiclassifier system and employ a decision fusion system for making the final decision on the class label. In this work, we propose a metric that incorporates higher order statistical information for accurate partitioning of the hyperspectral space. We also propose an adaptive weight assignment method in the decision fusion process based on the strengths (as measured by the training accuracies) of individual classifiers that made the local decisions. The proposed methods are tested using hyperspectral data with known ground truth, such that the efficacy can be quantitatively measured in terms of target recognition accuracies. The proposed system was found to significantly outperform conventional approaches. For example, under moderate pixel mixing, the proposed approach resulted in classification accuracies around 90%, where traditional feature fusion resulted in accuracies around 65%.
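The following sketch illustrates the divide-and-conquer and decision-fusion structure described above, assuming equal-width contiguous band groups in place of the metric-driven partitioning and using training accuracy as the confidence weight; the classifier choice and group count are arbitrary.

```python
# Sketch of band-group partitioning plus confidence-weighted decision fusion.
# Equal-width contiguous band groups stand in for the metric-driven partition;
# each local classifier's training accuracy serves as its fusion weight.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def fuse_band_subspaces(X_train, y_train, X_test, n_groups=5):
    classes = np.unique(y_train)
    band_groups = np.array_split(np.arange(X_train.shape[1]), n_groups)
    votes = np.zeros((len(X_test), len(classes)))
    for bands in band_groups:
        clf = LinearDiscriminantAnalysis().fit(X_train[:, bands], y_train)
        weight = clf.score(X_train[:, bands], y_train)   # training accuracy as confidence
        pred = clf.predict(X_test[:, bands])
        for j, c in enumerate(classes):
            votes[:, j] += weight * (pred == c)
    return classes[votes.argmax(axis=1)]
```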


IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing | 2014

Hyperspectral and LiDAR Data Fusion: Outcome of the 2013 GRSS Data Fusion Contest

Christian Debes; Andreas Merentitis; Roel Heremans; Jürgen T. Hahn; Nikolaos Frangiadakis; Tim Van Kasteren; Wenzhi Liao; Rik Bellens; Aleksandra Pizurica; Sidharta Gautama; Wilfried Philips; Saurabh Prasad; Qian Du; Fabio Pacifici

The 2013 Data Fusion Contest organized by the Data Fusion Technical Committee (DFTC) of the IEEE Geoscience and Remote Sensing Society aimed at investigating the synergistic use of hyperspectral and Light Detection And Ranging (LiDAR) data. The data sets distributed to the participants during the Contest, a hyperspectral image and the corresponding LiDAR-derived digital surface model (DSM), were acquired by the NSF-funded Center for Airborne Laser Mapping over the University of Houston campus and its neighboring area in the summer of 2012. This paper highlights the two awarded research contributions, which investigated different approaches for the fusion of hyperspectral and LiDAR data, including a combined unsupervised and supervised classification scheme, and a graph-based method for the fusion of spectral, spatial, and elevation information.
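For orientation only, here is a toy feature-level fusion baseline that stacks the spectral bands with the LiDAR-derived elevation per pixel before supervised classification; it is not one of the awarded contest methods, and the array shapes and the choice of classifier are assumptions.

```python
# Toy baseline: pixel-wise stacking of hyperspectral bands and DSM elevation,
# followed by a supervised classifier. Not one of the awarded contest methods.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def fuse_hsi_lidar(hsi, dsm, train_mask, labels):
    """hsi: (H, W, B) cube; dsm: (H, W) elevation; train_mask, labels: (H, W)."""
    features = np.concatenate([hsi, dsm[..., None]], axis=-1)
    features = features.reshape(-1, hsi.shape[-1] + 1)
    train = train_mask.ravel()
    clf = RandomForestClassifier(n_estimators=200).fit(features[train],
                                                       labels.ravel()[train])
    return clf.predict(features).reshape(dsm.shape)   # full classification map
```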


IEEE Geoscience and Remote Sensing Letters | 2008

Limitations of Principal Components Analysis for Hyperspectral Target Recognition

Saurabh Prasad; Lori Mann Bruce

Dimensionality reduction is a necessity in most hyperspectral imaging applications. Tradeoffs exist between unsupervised statistical methods, which are typically based on principal components analysis (PCA), and supervised ones, which are often based on Fisher's linear discriminant analysis (LDA), and proponents for each approach exist in the remote sensing community. Recently, a combined approach known as subspace LDA has been proposed, where PCA is employed to recondition ill-posed LDA formulations. The key idea behind this approach is to use a PCA transformation as a preprocessor to discard the null space of rank-deficient scatter matrices, so that LDA can be applied on this reconditioned space. Thus, in theory, the subspace LDA technique benefits from the advantages of both methods. In this letter, we present a theoretical analysis of the effects (often ill effects) of PCA on the discrimination power of the projected subspace. The theoretical analysis is presented from a general pattern classification perspective for two possible scenarios: (1) when PCA is used as a simple dimensionality reduction tool and (2) when it is used to recondition an ill-posed LDA formulation. We also provide experimental evidence of the ineffectiveness of both scenarios for hyperspectral target recognition applications.
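A minimal sketch of the subspace-LDA construction analyzed in this letter, assuming scikit-learn components: PCA first discards the (near-)null space of the scatter matrices, and LDA then operates in the reconditioned subspace. The number of retained principal components is a placeholder.

```python
# Subspace LDA as a two-step pipeline: PCA reconditioning, then LDA.
from sklearn.pipeline import Pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def make_subspace_lda(n_pca_components=30):
    return Pipeline([
        ("pca", PCA(n_components=n_pca_components)),   # discard rank-deficient directions
        ("lda", LinearDiscriminantAnalysis()),          # discriminant projection + classification
    ])

# usage: model = make_subspace_lda().fit(X_train, y_train); y_pred = model.predict(X_test)
```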


IEEE Geoscience and Remote Sensing Letters | 2011

Locality-Preserving Discriminant Analysis in Kernel-Induced Feature Spaces for Hyperspectral Image Classification

Wei Li; Saurabh Prasad; James E. Fowler; Lori Mann Bruce

Linear discriminant analysis (LDA) has been widely applied for hyperspectral image (HSI) analysis as a popular method for feature extraction and dimensionality reduction. Linear methods such as LDA work well for unimodal Gaussian class-conditional distributions. However, when data samples between classes are nonlinearly separated in the input space, linear methods such as LDA are expected to fail. Kernel discriminant analysis (KDA) attempts to address this issue by mapping data in the input space onto a subspace such that Fisher's ratio in an intermediate (higher-dimensional) kernel-induced space is maximized. In recent studies with HSI data, KDA has been shown to outperform LDA, particularly when the data distributions are non-Gaussian and multimodal, such as when pixels represent target classes severely mixed with background classes. In this letter, a modified KDA algorithm, i.e., kernel local Fisher discriminant analysis (KLFDA), is studied for HSI analysis. Unlike KDA, KLFDA imposes an additional constraint on the mapping: it ensures that neighboring points in the input space stay close-by in the projected subspace and vice versa. Classification experiments with a challenging HSI task demonstrate that this approach outperforms current state-of-the-art HSI-classification methods.
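Kernel local Fisher discriminant analysis is not available in common Python libraries, so the sketch below only conveys the kernel-discriminant idea: pixels are mapped into an approximate RBF kernel-induced feature space (via Nystroem) and a Fisher-type discriminant is applied there. The kernel parameters are placeholders, and this is not the KLFDA algorithm itself.

```python
# Approximate kernel-discriminant pipeline: explicit RBF feature map (Nystroem)
# followed by Fisher's discriminant in the kernel-induced space. This is a
# stand-in for KDA/KLFDA; gamma and n_components are placeholder values.
from sklearn.pipeline import Pipeline
from sklearn.kernel_approximation import Nystroem
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

kernel_fda_like = Pipeline([
    ("kmap", Nystroem(kernel="rbf", gamma=0.1, n_components=300)),  # kernel-induced features
    ("fda", LinearDiscriminantAnalysis()),                          # Fisher criterion in that space
])
# usage: kernel_fda_like.fit(X_train, y_train); kernel_fda_like.predict(X_test)
```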


International Conference of the IEEE Engineering in Medicine and Biology Society | 2013

High accuracy decoding of user intentions using EEG to control a lower-body exoskeleton

Atilla Kilicarslan; Saurabh Prasad; Robert G. Grossman; Jose L. Contreras-Vidal

Brain-machine interface (BMI) systems allow users to control external mechanical systems with their thoughts. Invasive techniques for acquiring brain signals and decoding users' attempted motions to drive such systems (e.g., a robotic manipulator) are common in the literature. In this work, we use a lower-body exoskeleton and measure the user's brain activity with non-invasive electroencephalography (EEG). The main focus of this study is to decode a paraplegic subject's motion intentions and thereby provide him with the ability to walk with a lower-body exoskeleton. We present our novel decoding method with high offline evaluation accuracies (around 98%), our closed-loop implementation structure with a considerably short on-site training time (around 38 s), and preliminary results from the real-time closed-loop implementation (NeuroRex) with a paraplegic test subject.
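As a rough illustration of intent decoding from EEG, the sketch below band-pass filters the signals and trains a discriminant classifier on time-lagged features; it is a generic pipeline, not the closed-loop NeuroRex decoder, and the sampling rate, frequency band, and lag count are assumptions.

```python
# Generic EEG intent-decoding sketch: low-frequency band-pass filtering, a
# simple time-lag embedding, and a discriminant classifier. All numeric
# parameters are placeholders, not values from the study.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def train_intent_decoder(eeg, intent, fs=1000, band=(0.1, 2.0), lags=10):
    """eeg: (n_samples, n_channels); intent: per-sample label (e.g. walk / stand)."""
    b, a = butter(2, band, btype="band", fs=fs)       # low-frequency band-pass filter
    filtered = filtfilt(b, a, eeg, axis=0)
    # time-lag embedding: stack the most recent `lags` samples of every channel
    feats = np.hstack([np.roll(filtered, k, axis=0) for k in range(lags)])[lags:]
    return LinearDiscriminantAnalysis().fit(feats, intent[lags:])
```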


IEEE Geoscience and Remote Sensing Letters | 2014

Hyperspectral Image Classification Using Gaussian Mixture Models and Markov Random Fields

Wei Li; Saurabh Prasad; James E. Fowler

The Gaussian mixture model is a well-known classification tool that captures non-Gaussian statistics of multivariate data. However, the impractically large size of the resulting parameter space has hindered widespread adoption of Gaussian mixture models for hyperspectral imagery. To counter this parameter-space issue, dimensionality reduction targeting the preservation of multimodal structures is proposed. Specifically, locality-preserving nonnegative matrix factorization, as well as local Fisher's discriminant analysis, is deployed as preprocessing to reduce the dimensionality of data for the Gaussian-mixture-model classifier, while preserving multimodal structures within the data. In addition, the pixel-wise classification results from the Gaussian mixture model are combined with spatial-context information resulting from a Markov random field. Experimental results demonstrate that the proposed classification system significantly outperforms other approaches even under limited training data.
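A sketch of the two-stage scheme follows: per-class GMM log-likelihoods provide the pixel-wise (unary) terms, and a simple Potts-style Markov-random-field smoothing is approximated with a few iterated-conditional-modes sweeps. The smoothing weight, mixture sizes, and sweep count are placeholders, and the dimensionality-reduction preprocessing is omitted.

```python
# Sketch: per-class GMM log-likelihoods as unary terms, then a Potts-style MRF
# regularization approximated with a few ICM sweeps. Parameters are placeholders.
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_mrf_classify(cube, X_train, y_train, beta=1.5, n_mix=3, sweeps=5):
    """cube: (H, W, B) image; X_train, y_train: labeled training spectra."""
    H, W, B = cube.shape
    classes = np.unique(y_train)
    pixels = cube.reshape(-1, B)
    # per-class GMM log-likelihoods (unary terms)
    loglik = np.column_stack([
        GaussianMixture(n_components=n_mix).fit(X_train[y_train == c]).score_samples(pixels)
        for c in classes
    ]).reshape(H, W, len(classes))
    label_idx = loglik.argmax(axis=2)                    # pixel-wise GMM decision
    for _ in range(sweeps):                              # ICM: encourage smooth labels
        for i in range(H):
            for j in range(W):
                nbrs = [label_idx[p, q] for p, q in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                        if 0 <= p < H and 0 <= q < W]
                agree = np.array([sum(n == k for n in nbrs) for k in range(len(classes))])
                label_idx[i, j] = (loglik[i, j] + beta * agree).argmax()
    return classes[label_idx]
```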

Collaboration


Dive into Saurabh Prasad's collaborations.

Top Co-Authors

Lori Mann Bruce, Mississippi State University
James E. Fowler, Mississippi State University
Wei Li, Beijing University of Chemical Technology
Hao Wu, University of Houston
James V. Aanstoos, Mississippi State University
Hemanth Kalluri, Mississippi State University
Majid Mahrooghy, Mississippi State University