Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Manohar Karki is active.

Publication


Featured research published by Manohar Karki.


Advances in Geographic Information Systems | 2015

DeepSat: a learning framework for satellite imagery

Saikat Basu; Sangram Ganguly; Supratik Mukhopadhyay; Robert DiBiano; Manohar Karki; Ramakrishna R. Nemani

Satellite image classification is a challenging problem that lies at the crossroads of remote sensing, computer vision, and machine learning. Due to the high variability inherent in satellite data, most of the current object classification approaches are not suitable for handling satellite datasets. The progress of satellite image analytics has also been inhibited by the lack of a single labeled high-resolution dataset with multiple class labels. The contributions of this paper are twofold: (1) we present two new satellite datasets called SAT-4 and SAT-6, and (2) we propose a classification framework that extracts features from an input image, normalizes them, and feeds the normalized feature vectors to a Deep Belief Network for classification. On the SAT-4 dataset, our best network produces a classification accuracy of 97.95% and outperforms three state-of-the-art object recognition algorithms, namely Deep Belief Networks, Convolutional Neural Networks, and Stacked Denoising Autoencoders, by ~11%. On SAT-6, it produces a classification accuracy of 93.9% and outperforms the other algorithms by ~15%. Comparative studies with a Random Forest classifier show the advantage of an unsupervised learning approach over traditional supervised learning techniques. A statistical analysis based on Distribution Separability Criterion and Intrinsic Dimensionality Estimation substantiates the effectiveness of our approach in learning better representations for satellite imagery.
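
The pipeline described above (extract features, normalize, feed to a Deep Belief Network) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the per-band statistics are an invented stand-in for the paper's feature set, and scikit-learn's BernoulliRBM plus logistic regression stands in for a full Deep Belief Network.

    # Sketch of a DeepSat-style pipeline: hand-crafted features per patch,
    # normalization, then a (stand-in) deep generative classifier.
    import numpy as np
    from sklearn.neural_network import BernoulliRBM
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import MinMaxScaler

    def extract_features(patches):
        """Illustrative per-band statistics; patches: (n, h, w, bands)."""
        feats = [patches.mean(axis=(1, 2)), patches.std(axis=(1, 2)),
                 patches.min(axis=(1, 2)), patches.max(axis=(1, 2))]
        return np.concatenate(feats, axis=1)

    model = Pipeline([
        ("norm", MinMaxScaler()),      # normalize feature vectors to [0, 1]
        ("rbm", BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=20)),
        ("clf", LogisticRegression(max_iter=1000)),
    ])

    # X_train: (n, 28, 28, 4) SAT-4/SAT-6-style patches, y_train: class labels
    # model.fit(extract_features(X_train), y_train)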


IEEE Transactions on Geoscience and Remote Sensing | 2015

A Semiautomated Probabilistic Framework for Tree-Cover Delineation From 1-m NAIP Imagery Using a High-Performance Computing Architecture

Saikat Basu; Sangram Ganguly; Ramakrishna R. Nemani; Supratik Mukhopadhyay; Gong Zhang; Cristina Milesi; A. R. Michaelis; Petr Votava; Ralph Dubayah; Laura Duncanson; Bruce D. Cook; Yifan Yu; Sassan Saatchi; Robert DiBiano; Manohar Karki; Edward Boyda; Uttam Kumar; Shuang Li

Accurate tree-cover estimates are useful in deriving above-ground biomass density estimates from very high resolution (VHR) satellite imagery data. Numerous algorithms have been designed to perform tree-cover delineation in high-to-coarse-resolution satellite imagery, but most of them do not scale to the terabytes of data typical of these VHR datasets. In this paper, we present an automated probabilistic framework for the segmentation and classification of 1-m VHR data, as obtained from the National Agriculture Imagery Program (NAIP), for deriving tree-cover estimates for the whole of the continental United States, using a high-performance computing architecture. The results from the classification and segmentation algorithms are then consolidated into a structured prediction framework using a discriminative undirected probabilistic graphical model based on conditional random fields, which helps in capturing the higher-order contextual dependence relations between neighboring pixels. Once the final probability maps are generated, the framework is updated and retrained by incorporating expert knowledge through the relabeling of misclassified image patches. This leads to a significant improvement in true positive rates and a reduction in false positive rates (FPRs). The tree-cover maps were generated for the state of California, which covers a total of 11,095 NAIP tiles and spans a total geographical area of 163,696 square miles. Our framework produced correct detection rates of around 88% for fragmented forests and 74% for urban tree-cover areas, with FPRs lower than 2% for both regions. Comparative studies with the National Land-Cover Data algorithm and the LiDAR high-resolution canopy height model showed the effectiveness of our algorithm for generating accurate high-resolution tree-cover maps.
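
As a rough illustration of the consolidation step, the sketch below combines per-pixel class log-probabilities with a neighborhood-agreement term via iterated conditional modes over a simple Potts-style penalty. The paper's discriminative CRF is considerably more elaborate; the function and parameters here are invented for the example.

    # Simplified neighborhood-consistent relabeling in the spirit of a CRF step.
    import numpy as np

    def icm_smooth(log_probs, beta=1.0, n_iters=5):
        # log_probs: (H, W, K) log P(class | pixel) from the base classifiers
        H, W, K = log_probs.shape
        labels = log_probs.argmax(axis=2)
        for _ in range(n_iters):
            for i in range(H):
                for j in range(W):
                    scores = log_probs[i, j].copy()
                    # bonus for agreeing with each 4-connected neighbor
                    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < H and 0 <= nj < W:
                            scores[labels[ni, nj]] += beta
                    labels[i, j] = scores.argmax()
        return labels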


Neural Processing Letters | 2017

Learning Sparse Feature Representations Using Probabilistic Quadtrees and Deep Belief Nets

Saikat Basu; Manohar Karki; Sangram Ganguly; Robert DiBiano; Supratik Mukhopadhyay; Shreekant Gayaka; Rajgopal Kannan; Ramakrishna R. Nemani

Learning sparse feature representations is a useful instrument for solving unsupervised learning problems. In this paper, we present three labeled handwritten digit datasets, collectively called n-MNIST, created by adding noise to the MNIST dataset, and three labeled datasets formed by adding noise to the offline Bangla numeral database. We then propose a novel framework for the classification of handwritten digits that learns sparse representations using probabilistic quadtrees and Deep Belief Nets. On the MNIST, n-MNIST, and noisy Bangla datasets, our framework shows promising results and outperforms traditional Deep Belief Networks.
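
A minimal sketch of the quadtree idea: an image is split recursively until regions are nearly uniform, so smooth areas collapse into single leaves and the representation concentrates detail where it is needed. The variance-based split criterion below is an illustrative simplification; the paper's quadtrees use a probabilistic formulation.

    # Recursive quadtree decomposition; leaves summarize nearly-uniform blocks.
    import numpy as np

    def quadtree(img, x=0, y=0, size=None, var_thresh=0.01, min_size=2):
        if size is None:
            size = img.shape[0]      # assume a square image, power-of-two side
        block = img[y:y + size, x:x + size]
        if size <= min_size or block.var() <= var_thresh:
            return [(x, y, size, float(block.mean()))]   # leaf: one summary value
        half = size // 2
        leaves = []
        for dx, dy in ((0, 0), (half, 0), (0, half), (half, half)):
            leaves += quadtree(img, x + dx, y + dy, half, var_thresh, min_size)
        return leaves

    # A noisy image, e.g. np.random.rand(32, 32), yields many small leaves;
    # a flat image collapses to a single leaf.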


International Joint Conference on Neural Networks | 2016

A theoretical analysis of Deep Neural Networks for texture classification

Saikat Basu; Manohar Karki; Supratik Mukhopadhyay; Sangram Ganguly; Ramakrishna R. Nemani; Robert DiBiano; Shreekant Gayaka

We investigate the use of Deep Neural Networks for the classification of image datasets where texture features are important for generating class-conditional discriminative representations. To this end, we first derive the size of the feature space for some standard textural features extracted from the input dataset and then use the theory of Vapnik-Chervonenkis dimension to show that hand-crafted feature extraction creates low-dimensional representations which help in reducing the overall excess error rate. As a corollary to this analysis, we derive for the first time upper bounds on the VC dimension of Convolutional Neural Networks as well as Dropout and DropConnect networks, and the relation between the excess error rates of Dropout and DropConnect networks. The concept of intrinsic dimension is used to validate the intuition that texture-based datasets are inherently higher-dimensional than handwritten digit or other object recognition datasets and are hence more difficult for neural networks to shatter. We then derive the mean distance from the centroid to the nearest and farthest sampling points in an n-dimensional manifold and show that the Relative Contrast of the sample data vanishes as the dimensionality of the underlying vector space tends to infinity.
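
The vanishing Relative Contrast claim is easy to check empirically. The sketch below (a numerical illustration under uniform sampling, not the paper's derivation) shows the ratio (D_max - D_min)/D_min of distances from a query point collapsing as the dimension grows.

    # Relative contrast of distances shrinks as dimensionality increases.
    import numpy as np

    rng = np.random.default_rng(0)
    for n in (2, 10, 100, 1000, 10000):
        pts = rng.random((500, n))            # 500 sample points in [0, 1]^n
        query = rng.random(n)
        d = np.linalg.norm(pts - query, axis=1)
        print(n, (d.max() - d.min()) / d.min())  # decays toward 0 with n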


Neural Networks | 2018

Deep neural networks for texture classification—A theoretical analysis

Saikat Basu; Supratik Mukhopadhyay; Manohar Karki; Robert DiBiano; Sangram Ganguly; Ramakrishna R. Nemani; Shreekant Gayaka

We investigate the use of Deep Neural Networks for the classification of image datasets where texture features are important for generating class-conditional discriminative representations. To this end, we first derive the size of the feature space for some standard textural features extracted from the input dataset and then use the theory of Vapnik-Chervonenkis dimension to show that hand-crafted feature extraction creates low-dimensional representations which help in reducing the overall excess error rate. As a corollary to this analysis, we derive for the first time upper bounds on the VC dimension of Convolutional Neural Networks as well as Dropout and DropConnect networks, and the relation between the excess error rates of Dropout and DropConnect networks. The concept of intrinsic dimension is used to validate the intuition that texture-based datasets are inherently higher-dimensional than handwritten digit or other object recognition datasets and are hence more difficult for neural networks to shatter. We then derive the mean distance from the centroid to the nearest and farthest sampling points in an n-dimensional manifold and show that the Relative Contrast of the sample data vanishes as the dimensionality of the underlying vector space tends to infinity.


International Conference on Artificial Neural Networks | 2017

Core Sampling Framework for Pixel Classification

Manohar Karki; Robert DiBiano; Saikat Basu; Supratik Mukhopadhyay

The intermediate map responses of a Convolutional Neural Network (CNN) contain contextual knowledge about its input. In this paper, we present a framework that uses these activation maps from several layers of a CNN as features for a Deep Belief Network (DBN), using transfer learning to provide an understanding of an input image. We create a representation of these features and the training data and use them to extract more information from an image at the pixel level, hence gaining an understanding of the whole image. We experimentally demonstrate the usefulness of our framework using a pretrained model, applying a DBN to perform segmentation on the BAERI dataset of Synthetic Aperture Radar (SAR) imagery and on the CamVid dataset with a relatively small training set.
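
A compact sketch of the core idea, assuming a PyTorch/torchvision setup: activation maps from several layers of a pretrained CNN (VGG16 here, as an illustrative choice; the paper does not prescribe it) are upsampled back to the input resolution and stacked into one feature vector per pixel, ready for a downstream classifier such as the DBN used in the paper.

    # Per-pixel features from intermediate CNN activation maps.
    import torch
    import torch.nn.functional as F
    from torchvision.models import vgg16, VGG16_Weights

    def per_pixel_features(image, layer_ids=(4, 9, 16)):
        # image: (1, 3, H, W) float tensor, normalized as VGG expects
        net = vgg16(weights=VGG16_Weights.DEFAULT).features.eval()
        feats, x = [], image
        with torch.no_grad():
            for i, layer in enumerate(net):
                x = layer(x)
                if i in layer_ids:
                    # upsample this activation map back to input resolution
                    feats.append(F.interpolate(x, size=image.shape[2:],
                                               mode="bilinear",
                                               align_corners=False))
        stacked = torch.cat(feats, dim=1)[0]       # (C_total, H, W)
        # one stacked feature vector per pixel: (H*W, C_total)
        return stacked.permute(1, 2, 0).reshape(-1, stacked.shape[0])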


Computer Software and Applications Conference | 2015

An Agile Framework for Real-Time Motion Tracking

Saikat Basu; Robert DiBiano; Manohar Karki; Malcolm Stagg; Jerry Weltman; Supratik Mukhopadhyay; Sangram Ganguly

We present an agile framework for automated tracking of moving objects in full motion video (FMV). The framework is robust: it can track multiple foreground objects of different types (e.g., person, vehicle) with disparate motion characteristics (such as speed and uniformity) simultaneously, in real time, under changing lighting conditions, changing backgrounds, and disparate camera dynamics. It is able to start tracks automatically based on a confidence-based spatio-temporal filtering algorithm and can follow objects through occlusions. Unlike existing tracking algorithms, with high likelihood it does not lose or switch tracks while following multiple similar, closely spaced objects. The framework is based on an ensemble of tracking algorithms that are switched automatically for optimal performance based on a performance measure, without losing state. Only the algorithm with the best performance in a particular state is active at any time, providing computational advantages over existing ensemble frameworks like boosting. A C++ implementation of the framework has outperformed existing visual tracking algorithms on most videos in the Video Image Retrieval and Analysis Tool (VIRAT: www.viratdata.org) and the Tracking-Learning-Detection datasets.
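
The switching behavior can be sketched as follows; the Tracker interface, the confidence scale, and the 0.5 threshold are all hypothetical, invented for the illustration. The key point is that the shared bounding-box state survives a switch, so changing the active algorithm loses nothing.

    # Performance-based switching among trackers that share state.
    class SwitchingTracker:
        def __init__(self, trackers):
            # trackers: objects with update(frame, box) -> (box, score);
            # this interface is hypothetical
            self.trackers = trackers
            self.active = 0
            self.box = None

        def step(self, frame):
            # run only the active tracker; cheaper than boosting-style ensembles
            box, score = self.trackers[self.active].update(frame, self.box)
            if score < 0.5:          # hypothetical switching threshold
                # probe the alternatives from the shared state, keep the best
                results = [t.update(frame, self.box) for t in self.trackers]
                self.active = max(range(len(results)),
                                  key=lambda i: results[i][1])
                box, score = results[self.active]
            self.box = box           # shared state persists across switches
            return box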


IEEE Symposium Series on Computational Intelligence | 2016

A symbolic framework for recognizing activities in full motion surveillance videos

Manohar Karki; Saikat Basu; Robert DiBiano; Supratik Mukhopadhyay; Jerry Weltman; Malcolm Stagg

We present a symbolic framework for recognizing activities of interest in real time from video streams automatically. This framework uses regular expressions to symbolically represent (possibly infinite) sets of motion characteristics obtained from a video. It uniformly handles both trajectory-based and periodic articulated activities and provides polynomial-time graph algorithms for fast recognition. The regular expressions representing motion characteristics can either be provided manually or learnt automatically from positive and negative examples of strings (that describe dynamic behavior) using offline automata learning frameworks. Confidence measures are associated with recognition using the Levenshtein distance between a string representing a motion signature and the regular expression describing an activity. We have used our framework to recognize trajectory-based activities like vehicle turns (U-turns, left and right turns, and K-turns), vehicle starts and stops, a person running and walking, and periodic articulated activities like hand waving, boxing, hand clapping, and digging in videos from the VIRAT public dataset, the KTH dataset, and a set of videos obtained from YouTube. Our framework is fast (it runs at nearly 3 times real time) and, on the KTH dataset, is shown to outperform three of the latest existing approaches.
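
A toy version of the scheme, with an invented symbol alphabet: per-frame motion is quantized into heading symbols, an activity is a regular expression over those symbols, and an edit-distance score supplies a confidence when the match is inexact. The pattern, canonical string, and normalization are illustrative, not the paper's.

    # Regular-expression activity matching with a Levenshtein-based confidence.
    import re

    # N/E/S/W = heading per frame; a U-turn: eastbound, brief turning, westbound
    U_TURN = re.compile(r"E+[NS]{1,4}W+")

    def levenshtein(a, b):
        """Standard dynamic-programming edit distance."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                               prev[j - 1] + (ca != cb)))
            prev = cur
        return prev[-1]

    signature = "EEEENSSWWW"         # motion signature extracted from a track
    if U_TURN.fullmatch(signature):
        print("U-turn recognized exactly")
    else:
        # confidence from edit distance to a canonical example of the activity
        d = levenshtein(signature, "EEENNWWW")
        print("confidence:", 1 - d / max(len(signature), 8))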


International Conference on Computer Vision Theory and Applications | 2015

MAPTrack - A Probabilistic Real Time Tracking Framework by Integrating Motion, Appearance and Position Models

Saikat Basu; Manohar Karki; Malcolm Stagg; Robert DiBiano; Sangram Ganguly; Supratik Mukhopadhyay

In this paper, we present MAPTrack, a robust tracking framework that uses a probabilistic scheme to combine a motion model of an object with models of its appearance and an estimation of its position. The motion of the object is modelled using the Gaussian Mixture Background Subtraction algorithm, the appearance of the tracked object is captured using a color histogram, and the projected location of the tracked object in the image space/frame sequence is computed by applying a Gaussian to the region of interest. Our tracking framework is robust to abrupt changes in lighting conditions, can follow an object through occlusions, and can simultaneously track multiple moving foreground objects of different types (e.g., vehicles, humans, etc.) even when they are closely spaced. It is able to start tracks automatically based on a spatio-temporal filtering algorithm. A “dynamic” integration of the framework with optical flow allows us to track videos with significant camera motion. A C++ implementation of the framework has outperformed existing visual tracking algorithms on most videos in the Video Image Retrieval and Analysis Tool (VIRAT), TUD, and Tracking-Learning-Detection (TLD) datasets.
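
A simplified sketch of the fusion, assuming OpenCV: each candidate box is scored by a motion term (fraction of foreground pixels under a GMM background model), an appearance term (color-histogram correlation with a reference), and a Gaussian position prior, multiplied together. The equal weighting and other details are illustrative simplifications of the paper's scheme.

    # Fusing motion, appearance, and position scores for a candidate box.
    import cv2
    import numpy as np

    bg = cv2.createBackgroundSubtractorMOG2()    # GMM background model

    def appearance_score(patch, ref_hist):
        hist = cv2.calcHist([patch], [0, 1, 2], None, [8, 8, 8],
                            [0, 256, 0, 256, 0, 256])
        cv2.normalize(hist, hist)
        return max(cv2.compareHist(ref_hist, hist, cv2.HISTCMP_CORREL), 0.0)

    def candidate_score(frame, fg_mask, box, ref_hist, predicted_xy, sigma=20.0):
        x, y, w, h = box
        motion = fg_mask[y:y + h, x:x + w].mean() / 255.0     # motion model
        appear = appearance_score(frame[y:y + h, x:x + w], ref_hist)
        cx, cy = x + w / 2.0, y + h / 2.0
        d2 = (cx - predicted_xy[0]) ** 2 + (cy - predicted_xy[1]) ** 2
        position = np.exp(-d2 / (2.0 * sigma ** 2))           # position prior
        return motion * appear * position                     # fused score

    # per frame: fg_mask = bg.apply(frame), then score every candidate box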


Archive | 2012

An Agile Framework for Real-Time Visual Tracking in Videos

Saikat Basu; Malcolm Stagg; Robert DiBiano; Manohar Karki; Supratik Mukhopadhyay; Jerry Weltman

Collaboration


Dive into Manohar Karki's collaborations.

Top Co-Authors

Robert DiBiano, Louisiana State University
Saikat Basu, Louisiana State University
Jerry Weltman, Louisiana State University
Bruce D. Cook, Goddard Space Flight Center