Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Saikat Basu is active.

Publication


Featured research published by Saikat Basu.


Advances in Geographic Information Systems | 2015

DeepSat: a learning framework for satellite imagery

Saikat Basu; Sangram Ganguly; Supratik Mukhopadhyay; Robert DiBiano; Manohar Karki; Ramakrishna R. Nemani

Satellite image classification is a challenging problem that lies at the crossroads of remote sensing, computer vision, and machine learning. Due to the high variability inherent in satellite data, most current object classification approaches are not suitable for handling satellite datasets. The progress of satellite image analytics has also been inhibited by the lack of a single labeled high-resolution dataset with multiple class labels. The contributions of this paper are twofold: (1) we present two new satellite datasets, SAT-4 and SAT-6, and (2) we propose a classification framework that extracts features from an input image, normalizes them, and feeds the normalized feature vectors to a Deep Belief Network for classification. On the SAT-4 dataset, our best network produces a classification accuracy of 97.95% and outperforms three state-of-the-art object recognition algorithms, namely Deep Belief Networks, Convolutional Neural Networks, and Stacked Denoising Autoencoders, by ~11%. On SAT-6, it produces a classification accuracy of 93.9% and outperforms the other algorithms by ~15%. Comparative studies with a Random Forest classifier show the advantage of an unsupervised learning approach over traditional supervised learning techniques. A statistical analysis based on Distribution Separability Criterion and Intrinsic Dimensionality Estimation substantiates the effectiveness of our approach in learning better representations for satellite imagery.
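
A minimal sketch of the pipeline this abstract describes, under stated assumptions: scikit-learn's BernoulliRBM stands in for the Deep Belief Network, and the hand-crafted features (per-band statistics plus an NDVI summary) are illustrative, not the authors' exact design.

    import numpy as np
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import MinMaxScaler
    from sklearn.neural_network import BernoulliRBM
    from sklearn.linear_model import LogisticRegression

    def extract_features(patches):
        """Assumed features: per-band mean/std plus an NDVI summary.
        patches: (n, 28, 28, 4) array, bands ordered R, G, B, NIR."""
        means = patches.mean(axis=(1, 2))                    # (n, 4)
        stds = patches.std(axis=(1, 2))                      # (n, 4)
        ndvi = (patches[..., 3] - patches[..., 0]) / (
            patches[..., 3] + patches[..., 0] + 1e-8)
        return np.hstack([means, stds, ndvi.mean(axis=(1, 2)).reshape(-1, 1)])

    # BernoulliRBM expects inputs in [0, 1], hence the MinMaxScaler.
    clf = Pipeline([
        ("scale", MinMaxScaler()),
        ("rbm", BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=20)),
        ("logreg", LogisticRegression(max_iter=1000)),
    ])
    # clf.fit(extract_features(X_train), y_train)   # hypothetical data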


IEEE Transactions on Geoscience and Remote Sensing | 2015

A Semiautomated Probabilistic Framework for Tree-Cover Delineation From 1-m NAIP Imagery Using a High-Performance Computing Architecture

Saikat Basu; Sangram Ganguly; Ramakrishna R. Nemani; Supratik Mukhopadhyay; Gong Zhang; Cristina Milesi; A. R. Michaelis; Petr Votava; Ralph Dubayah; Laura Duncanson; Bruce D. Cook; Yifan Yu; Sassan Saatchi; Robert DiBiano; Manohar Karki; Edward Boyda; Uttam Kumar; Shuang Li

Accurate tree-cover estimates are useful in deriving above-ground biomass density estimates from very high resolution (VHR) satellite imagery data. Numerous algorithms have been designed to perform tree-cover delineation in high-to-coarse-resolution satellite imagery, but most of them do not scale to terabytes of data, typical in these VHR data sets. In this paper, we present an automated probabilistic framework for the segmentation and classification of 1-m VHR data as obtained from the National Agriculture Imagery Program (NAIP) for deriving tree-cover estimates for the whole of the continental United States, using a high-performance computing architecture. The results from the classification and segmentation algorithms are then consolidated into a structured prediction framework using a discriminative undirected probabilistic graphical model based on conditional random fields, which helps in capturing the higher-order contextual dependence relations between neighboring pixels. Once the final probability maps are generated, the framework is updated and retrained by incorporating expert knowledge through the relabeling of misclassified image patches, leading to a significant improvement in true positive rates and a reduction in false positive rates (FPRs). The tree-cover maps were generated for the state of California, which comprises a total of 11,095 NAIP tiles spanning a geographical area of 163,696 square miles. Our framework produced correct detection rates of around 88% for fragmented forests and 74% for urban tree-cover areas, with FPRs lower than 2% for both regions. Comparative studies with the National Land-Cover Data algorithm and the LiDAR high-resolution canopy height model showed the effectiveness of our algorithm for generating accurate high-resolution tree-cover maps.
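
The structured-prediction step refers to a conditional random field over the pixel grid. A standard pairwise form of such a model (the paper's exact potentials are not given here, so this is the generic formulation) is:

    P(\mathbf{y} \mid \mathbf{x}) = \frac{1}{Z(\mathbf{x})}
        \exp\!\Big( -\sum_{i} \psi_u(y_i, \mathbf{x})
        \;-\; \sum_{(i,j) \in \mathcal{N}} \psi_p(y_i, y_j, \mathbf{x}) \Big)

where \psi_u are unary potentials supplied by the pixel classifier, \psi_p are pairwise potentials that encourage neighboring pixels to agree, \mathcal{N} is the set of neighboring pixel pairs, and Z(\mathbf{x}) is the partition function.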


Neural Processing Letters | 2017

Learning Sparse Feature Representations Using Probabilistic Quadtrees and Deep Belief Nets

Saikat Basu; Manohar Karki; Sangram Ganguly; Robert DiBiano; Supratik Mukhopadhyay; Shreekant Gayaka; Rajgopal Kannan; Ramakrishna R. Nemani

Learning sparse feature representations is a useful tool for solving unsupervised learning problems. In this paper, we present three labeled handwritten digit datasets, collectively called n-MNIST, created by adding noise to the MNIST dataset, and three labeled datasets formed by adding noise to the offline Bangla numeral database. We then propose a novel framework for the classification of handwritten digits that learns sparse representations using probabilistic quadtrees and Deep Belief Nets. On the MNIST, n-MNIST, and noisy Bangla datasets, our framework shows promising results and outperforms traditional Deep Belief Networks.
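
A minimal sketch of a quadtree decomposition in the spirit of the abstract; the paper's probabilistic splitting criterion is not reproduced here, so a simple foreground-density test stands in for it, and the size parameters are assumptions.

    import numpy as np

    def quadtree_leaves(img, x=0, y=0, size=None, min_size=4, lo=0.05, hi=0.95):
        """Leaf cells (x, y, size, density) of a recursive decomposition.
        Assumes a square binary image with power-of-two side, e.g. digits
        padded from 28x28 to 32x32."""
        if size is None:
            size = img.shape[0]
        cell = img[y:y + size, x:x + size]
        density = cell.mean()
        # Stop when the cell is nearly empty or full, or minimally small --
        # a simple stand-in for the paper's probabilistic criterion.
        if size <= min_size or density <= lo or density >= hi:
            return [(x, y, size, density)]
        half = size // 2
        leaves = []
        for dx in (0, half):
            for dy in (0, half):
                leaves += quadtree_leaves(img, x + dx, y + dy, half,
                                          min_size, lo, hi)
        return leaves

    # The leaf densities give a short, sparse feature vector to feed a Deep
    # Belief Net in place of raw pixels.
    # feats = [d for (_, _, _, d) in quadtree_leaves(padded_digit_32x32)]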


International Joint Conference on Neural Networks | 2016

A theoretical analysis of Deep Neural Networks for texture classification

Saikat Basu; Manohar Karki; Supratik Mukhopadhyay; Sangram Ganguly; Ramakrishna R. Nemani; Robert DiBiano; Shreekant Gayaka

We investigate the use of Deep Neural Networks for the classification of image datasets where texture features are important for generating class-conditional discriminative representations. To this end, we first derive the size of the feature space for some standard textural features extracted from the input dataset and then use the theory of Vapnik-Chervonenkis dimension to show that hand-crafted feature extraction creates low-dimensional representations which help in reducing the overall excess error rate. As a corollary to this analysis, we derive for the first time upper bounds on the VC dimension of Convolutional Neural Networks as well as Dropout and DropConnect networks, and the relation between the excess error rates of Dropout and DropConnect networks. The concept of intrinsic dimension is used to validate the intuition that texture-based datasets are inherently higher dimensional than handwritten digits or other object recognition datasets, and hence harder for neural networks to shatter. We then derive the mean distance from the centroid to the nearest and farthest sampling points in an n-dimensional manifold and show that the Relative Contrast of the sample data vanishes as the dimensionality of the underlying vector space tends to infinity.
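
The closing claim is the familiar concentration-of-distances result: with D_min and D_max denoting the distances from a query point to its nearest and farthest samples, the relative contrast satisfies

    \lim_{n \to \infty} \mathbb{E}\!\left[ \frac{D_{\max} - D_{\min}}{D_{\min}} \right] = 0

as the dimensionality n of the underlying space grows, which is why nearest-neighbor-style discrimination degrades on intrinsically high-dimensional texture data.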


International Journal of Network Security | 2012

A New Parallel Window-Based Implementation of the Elliptic Curve Point Multiplication in Multi-Core Architectures

Saikat Basu

Point multiplication is an important computation in elliptic curve cryptography. Various methods, such as the binary method and the window method, have been implemented in the past for performing efficient elliptic curve point multiplications. However, all these implementations rely on serial computations performed on single-core architectures. This paper proposes a new multi-core approach: a new parallel algorithm has been designed and implemented on machines with up to 8 cores. Experimental studies were then performed with different window sizes and degrees of parallelism.
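
A toy sketch of window-based point multiplication and where the parallelism enters; the curve parameters, the affine formulas, and the naive per-digit loop are all simplifications for exposition, not the paper's algorithm.

    # Toy fixed-window scalar multiplication on y^2 = x^3 + ax + b over GF(p).
    # The parameters below are illustrative and insecure.
    p, a, b = 97, 2, 3

    def ec_add(P, Q):
        """Affine point addition; None represents the point at infinity."""
        if P is None: return Q
        if Q is None: return P
        (x1, y1), (x2, y2) = P, Q
        if x1 == x2 and (y1 + y2) % p == 0:
            return None                          # P + (-P) = infinity
        if P == Q:
            m = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
        else:
            m = (y2 - y1) * pow(x2 - x1, -1, p) % p
        x3 = (m * m - x1 - x2) % p
        return (x3, (m * (x1 - x3) - y1) % p)

    def window_mul(k, P, w=4):
        """Compute k*P by splitting k into w-bit digits. Each digit's term
        d_i * (2^(w*i) P) is independent of the others, so the terms can be
        computed on separate cores and merged at the end -- the source of
        the parallelism studied in the paper."""
        terms, base = [], P
        while k:
            d = k & ((1 << w) - 1)
            acc = None
            for _ in range(d):                   # naive digit multiple (demo only)
                acc = ec_add(acc, base)
            terms.append(acc)
            for _ in range(w):                   # base <- 2^w * base
                base = ec_add(base, base)
            k >>= w
        result = None
        for t in terms:
            result = ec_add(result, t)
        return result

    # (3, 6) lies on this toy curve: 6^2 = 36 = 3^3 + 2*3 + 3 (mod 97).
    print(window_mul(45, (3, 6)))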


Neural Networks | 2018

Deep neural networks for texture classification—A theoretical analysis

Saikat Basu; Supratik Mukhopadhyay; Manohar Karki; Robert DiBiano; Sangram Ganguly; Ramakrishna R. Nemani; Shreekant Gayaka

We investigate the use of Deep Neural Networks for the classification of image datasets where texture features are important for generating class-conditional discriminative representations. To this end, we first derive the size of the feature space for some standard textural features extracted from the input dataset and then use the theory of Vapnik-Chervonenkis dimension to show that hand-crafted feature extraction creates low-dimensional representations which help in reducing the overall excess error rate. As a corollary to this analysis, we derive for the first time upper bounds on the VC dimension of Convolutional Neural Networks as well as Dropout and DropConnect networks, and the relation between the excess error rates of Dropout and DropConnect networks. The concept of intrinsic dimension is used to validate the intuition that texture-based datasets are inherently higher dimensional than handwritten digits or other object recognition datasets, and hence harder for neural networks to shatter. We then derive the mean distance from the centroid to the nearest and farthest sampling points in an n-dimensional manifold and show that the Relative Contrast of the sample data vanishes as the dimensionality of the underlying vector space tends to infinity.


IEEE Transactions on Big Data | 2017

Adaptable SLA-Aware Consistency Tuning for Quorum-Replicated Datastores

Subhajit Sidhanta; Wojciech M. Golab; Supratik Mukhopadhyay; Saikat Basu

Users of distributed datastores that employ quorum-based replication are burdened with the choice of a suitable client-centric consistency setting for each storage operation. This choice is difficult to reason about, as it requires deliberating about the tradeoff between latency and staleness, i.e., how stale (old) the result is. The latency and staleness for a given operation depend on the client-centric consistency setting applied, as well as on dynamic parameters such as the current workload and network condition. We present OptCon, a machine learning-based predictive framework that can automate the choice of client-centric consistency setting under user-specified latency and staleness thresholds given in the service level agreement (SLA). Under a given SLA, OptCon predicts a client-centric consistency setting that is matching, i.e., weak enough to satisfy the latency threshold while being strong enough to satisfy the staleness threshold. While manually tuned consistency settings remain fixed unless explicitly reconfigured, OptCon tunes consistency settings on a per-operation basis with respect to changing workload and network state. Using decision tree learning, OptCon yields a 0.14 cross-validation error in predicting matching consistency settings under the latency and staleness thresholds given in the SLA. We demonstrate experimentally that OptCon is at least as effective as any manually chosen consistency setting in adapting to the SLA thresholds for different use cases. We also demonstrate that OptCon adapts to variations in workload, whereas a given manually chosen fixed consistency setting satisfies the SLA only for a characteristic workload.
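
A minimal sketch of the learning step, assuming hypothetical features and Cassandra-style consistency levels; the actual feature set, labels, and training data used by OptCon are not reproduced here.

    # Hypothetical rows: [throughput_ops_s, network_rtt_ms, sla_latency_ms,
    # sla_staleness_ms]; labels are Cassandra-style consistency levels.
    from sklearn.tree import DecisionTreeClassifier

    X = [[500, 2.0, 10, 50], [500, 9.0, 10, 50],
         [1500, 2.0, 5, 200], [1500, 9.0, 5, 200]]
    y = ["QUORUM", "ONE", "ONE", "ONE"]

    model = DecisionTreeClassifier(max_depth=4).fit(X, y)
    # Per-operation tuning: re-query the model as workload and network
    # conditions drift, instead of fixing one consistency level.
    print(model.predict([[800, 3.5, 10, 100]]))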


Cluster Computing and the Grid | 2016

OptCon: An Adaptable SLA-Aware Consistency Tuning Framework for Quorum-Based Stores

Subhajit Sidhanta; Wojciech M. Golab; Supratik Mukhopadhyay; Saikat Basu

Users of distributed datastores that employ quorum-based replication are burdened with the choice of a suitable client-centric consistency setting for each storage operation. This choice is difficult to reason about, as it requires deliberating about the tradeoff between latency and staleness, i.e., how stale (old) the result is. The latency and staleness for a given operation depend on the client-centric consistency setting applied, as well as on dynamic parameters such as the current workload and network condition. We present OptCon, a novel machine learning-based predictive framework that can automate the choice of client-centric consistency setting under user-specified latency and staleness thresholds given in the service level agreement (SLA). Under a given SLA, OptCon predicts a client-centric consistency setting that is matching, i.e., weak enough to satisfy the latency threshold while being strong enough to satisfy the staleness threshold. While manually tuned consistency settings remain fixed unless explicitly reconfigured, OptCon tunes consistency settings on a per-operation basis with respect to changing workload and network state. Using decision tree learning, OptCon yields a 0.14 cross-validation error in predicting matching consistency settings under the latency and staleness thresholds given in the SLA. We demonstrate experimentally that OptCon is at least as effective as any manually chosen consistency setting in adapting to the SLA thresholds for different use cases. We also demonstrate that OptCon adapts to variations in workload, whereas a given manually chosen fixed consistency setting satisfies the SLA only for a characteristic workload.


international conference on artificial neural networks | 2017

Core Sampling Framework for Pixel Classification

Manohar Karki; Robert DiBiano; Saikat Basu; Supratik Mukhopadhyay

The intermediate map responses of a Convolutional Neural Network (CNN) contain contextual knowledge about its input. In this paper, we present a framework that uses these activation maps from several layers of a CNN as features for a Deep Belief Network (DBN) via transfer learning to provide an understanding of an input image. We create a representation of these features and the training data and use them to extract more information from an image at the pixel level, thereby gaining an understanding of the whole image. We experimentally demonstrate the usefulness of our framework using a pretrained model, employing a DBN to perform segmentation on the BAERI dataset of Synthetic Aperture Radar (SAR) imagery and on the CamVid dataset with a relatively small training dataset.
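
A minimal sketch of the feature-construction step, assuming a pretrained VGG-16 from torchvision in place of the paper's CNN and an arbitrary choice of tapped layers; the downstream DBN classifier is left out.

    # Sketch: tap intermediate CNN activation maps, upsample each to the
    # input resolution, and stack them into per-pixel feature vectors.
    import torch
    import torch.nn.functional as F
    from torchvision.models import vgg16

    cnn = vgg16(weights="DEFAULT").features.eval()
    layers_to_tap = {3, 8, 15}                   # assumed early/mid conv layers

    def pixel_features(image):
        """image: (1, 3, H, W) tensor -> (1, C_total, H, W) stacked features."""
        feats, x = [], image
        with torch.no_grad():
            for i, layer in enumerate(cnn):
                x = layer(x)
                if i in layers_to_tap:
                    # Bilinear upsampling aligns every map with the pixel grid.
                    feats.append(F.interpolate(
                        x, size=image.shape[-2:], mode="bilinear",
                        align_corners=False))
        return torch.cat(feats, dim=1)

    # Each spatial location now carries a feature vector that a pixel-level
    # classifier (a DBN in the paper) can label to produce a segmentation.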


PLOS ONE | 2017

Deploying a quantum annealing processor to detect tree cover in aerial imagery of California

Edward Boyda; Saikat Basu; Sangram Ganguly; A. R. Michaelis; Supratik Mukhopadhyay; Ramakrishna R. Nemani; Shijo Joseph

Quantum annealing is an experimental and potentially breakthrough computational technology for handling hard optimization problems, including problems of computer vision. We present a case study in training a production-scale classifier of tree cover in remote sensing imagery, using early-generation quantum annealing hardware built by D-Wave Systems, Inc. Beginning within a known boosting framework, we train decision stumps on texture features and vegetation indices extracted from four-band, one-meter-resolution aerial imagery from the state of California. We then impose a regulated quadratic training objective to select an optimal voting subset from among these stumps. The votes of the subset define the classifier. For optimization, the logical variables in the objective function map to quantum bits in the hardware device, while quadratic couplings encode as the strength of physical interactions between the quantum bits. Hardware design limits the number of couplings between these basic physical entities to five or six. To account for this limitation in mapping large problems to the hardware architecture, we propose a truncation and rescaling of the training objective through a trainable metaparameter. The boosting process on our basic 108- and 508-variable problems, thus constituted, returns classifiers that incorporate a diverse range of color- and texture-based metrics and discriminate tree cover with accuracies as high as 92% in validation and 90% on a test scene encompassing the open space preserves and dense suburban build of Mill Valley, CA.
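
A small sketch of the kind of quadratic objective handed to the annealer: selecting a 0/1 voting vector over decision stumps by minimizing squared training error plus a sparsity penalty, written as a QUBO. The coefficients and the brute-force solver are stand-ins (no annealer here), and the data are synthetic.

    import itertools
    import numpy as np

    rng = np.random.default_rng(0)
    H = np.sign(rng.standard_normal((200, 10)))  # stump votes h_i(x) in {-1,+1}
    y = np.sign(rng.standard_normal(200))        # synthetic labels
    lam, N = 0.1, len(y)

    # Expand ||(1/N) H w - y||^2 + lam * sum(w) into a QUBO matrix Q,
    # using w_i^2 = w_i to fold the linear terms into the diagonal.
    Q = (H.T @ H) / N**2
    np.fill_diagonal(Q, np.diag(Q) - 2 * (H.T @ y) / N + lam)

    def qubo_energy(w):
        return w @ Q @ w

    # Brute force stands in for the quantum annealer on this tiny instance.
    best = min((np.array(bits) for bits in itertools.product((0, 1), repeat=10)),
               key=qubo_energy)
    print("selected stumps:", np.flatnonzero(best))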

Collaboration


Dive into Saikat Basu's collaborations.

Top Co-Authors

Manohar Karki (Louisiana State University)

Robert DiBiano (Louisiana State University)

Jerry Weltman (Louisiana State University)

Jaydeep Howlader (National Institute of Technology)