Network


Latest external collaboration at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Jayanta K. Dutta is active.

Publication


Featured research published by Jayanta K. Dutta.


Neurocomputing | 2014

SELP: A general-purpose framework for learning the norms from saliencies in spatiotemporal data

Bonny Banerjee; Jayanta K. Dutta

Sensors that monitor around the clock are everywhere. Due to the sheer amount of data these sensors can generate, the resources required to store them, protect personal information, and analyze them are enormous. Since noteworthy events happen only occasionally, it is not necessary to store or analyze the data generated at every instant of time. Rather, a smart memory should learn the norms in such data so that only the abnormal (or salient) events need be stored. We present a general-purpose, biologically plausible computational framework, called SELP, for learning the norms (or invariances) as a hierarchy of features from space- and time-varying data in an unsupervised and online manner from saliencies or surprises in the data. Given streaming data, this framework runs a relentless cycle involving the real external world and its internal model: detect an unexpected or Salient event, Explain the salient event, Learn from its explanation, and Predict future events; hence the name. Experimental results from different functions of this framework are presented, with particular emphasis on the role of lateral connections in each layer.
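
The SELP cycle described above can be made concrete with a small sketch. The following is a minimal, illustrative loop, assuming prediction error as the saliency signal, a unit-norm linear dictionary as the internal model, and least-squares coding as the "explanation"; none of these choices are taken from the paper itself.

```python
import numpy as np

# Minimal sketch of a SELP-style loop (Salient -> Explain -> Learn -> Predict).
# The linear dictionary model, the least-squares "explanation" and the saliency
# threshold are illustrative assumptions, not the authors' implementation.

rng = np.random.default_rng(0)
n_features, n_atoms = 16, 8
D = rng.standard_normal((n_features, n_atoms))        # internal model (dictionary)
D /= np.linalg.norm(D, axis=0)                        # unit-norm atoms
threshold, lr = 0.5, 0.05                             # saliency threshold, learning rate
prediction = np.zeros(n_features)                     # prediction of the next input

def stream(n_steps):
    """Stand-in for a sensor stream: a slowly drifting signal plus noise."""
    x = rng.standard_normal(n_features)
    for _ in range(n_steps):
        x = 0.95 * x + 0.05 * rng.standard_normal(n_features)
        yield x

for x in stream(200):
    error = x - prediction                            # S: saliency = prediction error
    if np.linalg.norm(error) > threshold:             # only salient events are processed
        code, *_ = np.linalg.lstsq(D, x, rcond=None)  # E: explain x with the dictionary
        D += lr * np.outer(error, code)               # L: learn from the explanation
        D /= np.linalg.norm(D, axis=0)
    prediction = D @ np.linalg.lstsq(D, x, rcond=None)[0]  # P: predict the next input
```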


International Conference on Big Data | 2013

Hierarchical feature learning from sensorial data by spherical clustering

Bonny Banerjee; Jayanta K. Dutta

Surveillance sensors are a major source of unstructured Big Data. Discovering and recognizing spatiotemporal objects (e.g., events) in such data is of paramount importance to the security and safety of facilities and individuals. What kind of computational model is necessary for discovering spatiotemporal objects at the level of abstraction at which they occur? Hierarchical invariant feature learning is at the crux of the problems of discovery and recognition in Big Data. We present a multilayered convergent neural architecture for storing repeating spatially and temporally coincident patterns in data at multiple levels of abstraction. A node is the canonical computational unit and consists of neurons. Neurons are connected within and across nodes via bottom-up, top-down and lateral connections. The bottom-up weights are learned to encode a hierarchy of overcomplete and sparse feature dictionaries from space- and time-varying sensorial data by recursive layer-by-layer spherical clustering. The model scales to full-sized high-dimensional input data and also to an arbitrary number of layers, thereby having the capability to capture features at any level of abstraction. The model is fully learnable with only two manually tunable parameters. It is general-purpose (i.e., it makes no modality-specific assumptions about the spatiotemporal data), unsupervised and online. We use the learning algorithm, without any alteration, to learn meaningful feature hierarchies from images and videos, which can then be used for recognition. Besides being online, operations in each layer of the model can be implemented in parallelized hardware, making it very efficient for real-world Big Data applications.
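
Recursive layer-by-layer spherical clustering is the core learning rule here. Below is a minimal single-layer sketch, assuming winner-take-all cosine-similarity matching and a Hebbian-style centroid update on the unit sphere; the patch size, number of centroids and learning rate are illustrative, not the paper's settings.

```python
import numpy as np

# Minimal sketch of online spherical clustering for one layer:
# inputs and centroids live on the unit sphere, the winner is the centroid
# with the highest cosine similarity, and only the winner is nudged toward the input.
# Dimensions, number of centroids and learning rate are illustrative choices.

rng = np.random.default_rng(0)
dim, n_centroids, lr = 64, 32, 0.1
W = rng.standard_normal((n_centroids, dim))
W /= np.linalg.norm(W, axis=1, keepdims=True)          # unit-norm bottom-up weights

def spherical_cluster_step(x, W, lr):
    x = x / (np.linalg.norm(x) + 1e-12)                # project the input onto the sphere
    sims = W @ x                                       # cosine similarities
    k = int(np.argmax(sims))                           # winning neuron
    W[k] += lr * x                                     # Hebbian-style pull toward the input
    W[k] /= np.linalg.norm(W[k])                       # renormalize the winner
    return k

for _ in range(10_000):                                # stream of random 8x8 "patches"
    patch = rng.standard_normal(dim)
    spherical_cluster_step(patch, W, lr)
```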


International Conference on Big Data | 2013

Efficient learning from explanation of prediction errors in streaming data

Bonny Banerjee; Jayanta K. Dutta

Streaming data from different kinds of sensors contributes to Big Data in a significant way. Recognizing the norms and abnormalities in such spatiotemporal data is a challenging problem. We present a general-purpose, biologically plausible computational model, called SELP, for learning the norms or invariances as features in an unsupervised and online manner from explanations of saliencies or surprises in the data. Given streaming data, this model runs a relentless cycle of Surprise → Explain → Learn → Predict involving the real external world and its internal model, and hence the name. The key characteristic of the model is its efficiency, crucial for streaming Big Data applications, which stems from two functionalities exploited at each sampling instant: it operates on the change in the state of the data between consecutive sampling instants rather than on the entire state, and it learns only from surprise or prediction error to update its internal state rather than from the entire input. The former allows the model to concentrate its computational resources on spatial regions of the data that change most frequently and ignore the rest, while the latter allows it to concentrate on those instants of time when its prediction is erroneous and ignore the rest. The model is implemented in a neural network architecture. We show the performance of the network in learning and retaining sequences of handwritten numerals. When exposed to natural videos acquired by a camera mounted on a cat's head, the neurons learn receptive fields resembling simple cells in the primary visual cortex. The model leads to an agent-dependent framework for mining streaming data, where the agent interprets and learns from the data in order to update its internal model.
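
The two efficiency mechanisms, working on changes between consecutive sampling instants and learning only from prediction error, can be sketched as follows; the change and error thresholds and the simple per-pixel predictor are assumptions made for illustration, not the paper's network.

```python
import numpy as np

# Sketch of the two efficiency mechanisms: (1) process only the change between
# consecutive frames, (2) update the internal state only where the prediction
# error (surprise) is large. Thresholds and the per-pixel predictor are illustrative.

rng = np.random.default_rng(0)
shape = (32, 32)
prev = np.zeros(shape)
predicted_change = np.zeros(shape)       # internal model: predicted per-pixel change
change_thresh, error_thresh, lr = 0.05, 0.1, 0.2

def frames(n):
    f = rng.random(shape)
    for _ in range(n):
        f = f + 0.02 * rng.standard_normal(shape)      # slowly drifting "video"
        yield f

for frame in frames(500):
    change = frame - prev                               # (1) state change, not the full state
    active = np.abs(change) > change_thresh             # regions changing right now
    surprise = change - predicted_change                # prediction error on the change
    learn = active & (np.abs(surprise) > error_thresh)  # (2) learn only from surprise
    predicted_change[learn] += lr * surprise[learn]
    prev = frame
```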


International Conference on Data Mining | 2013

An Online Clustering Algorithm That Ignores Outliers: Application to Hierarchical Feature Learning from Sensory Data

Bonny Banerjee; Jayanta K. Dutta

Surveillance sensors are a major source of unstructured Big Data. Discovering and recognizing spatiotemporal objects (e.g., events) in such data is of paramount importance to the security and safety of facilities and individuals. Hierarchical feature learning is at the crux of the problems of discovery and recognition. We present a multilayered convergent neural architecture for storing repeating spatially and temporally coincident patterns in data at multiple levels of abstraction. The bottom-up weights in each layer are learned to encode a hierarchy of overcomplete and sparse feature dictionaries from space- and time-varying sensory data by recursive layer-by-layer spherical clustering. This density-based clustering algorithm ignores outliers through a unique adaptive threshold in each neuron's transfer function. The model scales to full-sized high-dimensional input data and also to an arbitrary number of layers, thereby possessing the capability to capture features at any level of abstraction. It is fully learnable with only two manually tunable parameters. The model was deployed to learn meaningful feature hierarchies from audio, images and videos, which can then be used for recognition and reconstruction. Besides being online, operations in each layer of the model can be implemented in parallelized hardware, making it very efficient for real-world Big Data applications.
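
One way to realize an adaptive threshold that ignores outliers is sketched below: each neuron tracks a running average of the similarities it has won with and refuses to learn from inputs that fall below it. The exact threshold update used in the paper may differ; this is an illustrative assumption.

```python
import numpy as np

# Sketch of spherical clustering that ignores outliers: each neuron keeps an
# adaptive threshold (here, a running average of the similarities it has won with),
# and an input whose best match falls below that threshold is treated as an outlier
# and updates nothing. The threshold update rule is an assumption for illustration.

rng = np.random.default_rng(0)
dim, n_neurons, lr, tau = 32, 16, 0.1, 0.01
W = rng.standard_normal((n_neurons, dim))
W /= np.linalg.norm(W, axis=1, keepdims=True)
thresh = np.zeros(n_neurons)                            # per-neuron adaptive thresholds

def step(x):
    x = x / (np.linalg.norm(x) + 1e-12)
    sims = W @ x
    k = int(np.argmax(sims))
    if sims[k] < thresh[k]:                             # too far from every centroid:
        return None                                     # treat as an outlier, learn nothing
    W[k] += lr * x
    W[k] /= np.linalg.norm(W[k])
    thresh[k] += tau * (sims[k] - thresh[k])            # track the typical winning similarity
    return k

for _ in range(10_000):
    step(rng.standard_normal(dim))
```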


International Conference on Data Mining | 2013

A Predictive Coding Framework for Learning to Predict Changes in Streaming Data

Bonny Banerjee; Jayanta K. Dutta

Streaming sensorial data poses major computational challenges, such as lack of storage, inapplicability of offline algorithms, and the necessity to capture nonstationary data distributions with concept drifts. Our goal is to build a learner framework that uses the current data and the knowledge from historical data to predict the next data in an efficient, unsupervised and online manner. Labeled streaming data is scarce; hence, predicting data instead of labels is a more realistic problem. We present a learner model, called SELP, for learning invariances as features from explanations of surprises due to prediction errors in streaming spatiotemporal data. This model runs a relentless cycle of Surprise → Explain → Learn → Predict involving the real external world and its internal model. The learner is continuously updated, independent of any trigger, in proportion to its surprise. It implements a more efficient version of predictive coding, a biologically plausible information-coding paradigm, by predicting changes in the data instead of the data itself. Experimental results obtained from deploying our implementation on synthesized and real-world data are qualitatively comparable to those of traditional predictive coding on similar data sets. The results also offer insights into the learner design. This research lays the foundations for an agent-based framework with an internal model grounded in the data stream.
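
Predicting changes rather than the data itself is the efficiency claim; below is a minimal sketch with a first-order linear predictor of the change signal, updated in proportion to surprise at every step. The AR(1) predictor and the LMS step size are stand-ins for the SELP network, not its implementation.

```python
import numpy as np

# Sketch of predictive coding over changes: the learner predicts the *change*
# in the signal from the previous change (a first-order linear predictor) and
# updates itself in proportion to its surprise (prediction error) at every step.

rng = np.random.default_rng(0)
a, lr = 0.0, 0.01                      # predictor coefficient, learning rate
x_prev, d_prev = 0.0, 0.0              # previous sample and previous change

t = np.arange(2000)
signal = np.sin(0.05 * t) + 0.05 * rng.standard_normal(t.size)

for x in signal:
    d = x - x_prev                     # current change in the data
    d_hat = a * d_prev                 # predicted change
    surprise = d - d_hat               # prediction error
    a += lr * surprise * d_prev        # update proportional to surprise (LMS rule)
    x_prev, d_prev = x, d
```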


IEEE Transactions on Knowledge and Data Engineering | 2016

RODS: Rarity based Outlier Detection in a Sparse Coding Framework

Jayanta K. Dutta; Bonny Banerjee; Chandan K. Reddy

Outlier detection has been an active area of research for a few decades. We propose a new definition of outlier that is useful for high-dimensional data. According to this definition, given a dictionary of atoms learned using the sparse coding objective, the outlierness of a data point depends jointly on two factors: the frequency of each atom in reconstructing all data points (or its negative log activity ratio, NLAR) and the strength by which it is used in reconstructing the current point. A Rarity-based Outlier Detection algorithm in a Sparse coding framework (RODS), consisting of two components, NLAR learning and outlier scoring, is developed. This algorithm is unsupervised; both offline and online variants are presented. It is governed by very few manually tunable parameters and operates in linear time. We demonstrate the superior performance of RODS in comparison with various state-of-the-art outlier detection algorithms on several benchmark datasets. We also demonstrate its effectiveness using three real-world case studies: saliency detection in images, abnormal event detection in videos, and change detection in data streams. Our evaluations show that RODS outperforms competing algorithms reported in the outlier detection, saliency detection, video event detection, and change detection literature.
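
The outlier score combines, per atom, how rarely it is used across the data (its NLAR) with how strongly it is used for the current point. A toy offline sketch of that scoring follows; the naive "keep the k largest coefficients" coder below stands in for the paper's sparse coding step and the dictionary is random rather than learned.

```python
import numpy as np

# Sketch of an NLAR-style outlier score: given codes Z (n_points x n_atoms) for data
# reconstructed from a dictionary, each atom gets a weight equal to the negative log
# of its share of total activity, and a point's outlierness is its activity-weighted
# sum of rare-atom usage. The toy coder and random dictionary are illustrative only.

rng = np.random.default_rng(0)
n_points, dim, n_atoms, k = 500, 32, 64, 5
D = rng.standard_normal((dim, n_atoms))
D /= np.linalg.norm(D, axis=0)
X = rng.standard_normal((n_points, dim))

def toy_sparse_code(x, D, k):
    z = D.T @ x                                    # correlations with all atoms
    keep = np.argsort(np.abs(z))[-k:]              # keep the k strongest atoms
    out = np.zeros_like(z)
    out[keep] = z[keep]
    return out

Z = np.array([toy_sparse_code(x, D, k) for x in X])
activity = np.abs(Z).sum(axis=0)                   # total usage of each atom
nlar = -np.log(activity / activity.sum() + 1e-12)  # rare atoms get large weights
scores = np.abs(Z) @ nlar                          # per-point outlier scores
outliers = np.argsort(scores)[-10:]                # the ten most outlying points
```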


Conference of the International Speech Communication Association | 2016

Identifying Hearing Loss from Learned Speech Kernels.

Shamima Najnin; Bonny Banerjee; Lisa Lucks Mendel; Masoumeh Heidari Kapourchali; Jayanta K. Dutta; Sungmin Lee; Chhayakanta Patro; Monique Pousson

Does a hearing-impaired individual's speech reflect his hearing loss? To investigate this question, we recorded at least four hours of speech data from each of 29 adult individuals, both male and female, belonging to four classes: 3 with normal hearing, and 26 severely-to-profoundly hearing impaired with high, medium or low speech intelligibility. Acoustic kernels were learned for each individual by capturing the distribution of his speech data points represented as 20 ms duration windows. These kernels were evaluated using a set of neurophysiological metrics, namely, distribution of characteristic frequencies, equal-loudness contour, and bandwidth and Q10 value of the tuning curve. It turns out that, for our cohort, a feature vector can be constructed out of four properties of these metrics that accurately separates hearing-impaired individuals with low speech intelligibility from normal-hearing ones using a linear classifier. However, the overlap in the feature space between normal and hearing-impaired individuals increases as the speech becomes more intelligible. We conclude that a hearing-impaired individual's speech does reflect his hearing loss, provided the loss of hearing has considerably affected the intelligibility of his speech.
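
The final classification step reduces to standard linear classification once the four features are computed from the learned kernels. The sketch below mocks the feature extraction with random vectors and shows only the classifier setup, using scikit-learn's LogisticRegression as an assumed stand-in for whatever linear classifier the authors used.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Sketch of the final classification step only: a 4-dimensional feature vector per
# speaker (derived in the paper from learned-kernel metrics such as characteristic
# frequencies and tuning-curve properties) fed to a linear classifier. The random
# features below are placeholders; only the classifier setup is illustrated.

rng = np.random.default_rng(0)
features = rng.standard_normal((29, 4))           # one 4-d vector per speaker (mocked)
labels = np.array([0] * 3 + [1] * 26)             # 0 = normal hearing, 1 = hearing impaired

clf = LogisticRegression().fit(features, labels)  # linear decision boundary
print(clf.score(features, labels))                # training accuracy on the toy features
```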


International Symposium on Neural Networks | 2017

Variation in classification accuracy with number of glimpses

Jayanta K. Dutta; Bonny Banerjee

We consider an attention-based model that recognizes objects via a sequence of glimpses, and analyze the variation in classification accuracy with the number of glimpses. The problem of object recognition is formulated as a partially observable Markov decision process where the environment is partially observable and glimpses are actions. We show that voting from random attentional policies provides good classification accuracy if the objects in the images are aligned and of similar size. We also show that accuracy does not improve after a certain number of glimpses, and sometimes decreases with more glimpses if multiple categories have similar structure. Finally, there are in general several sub-optimal policies under which an object is classified correctly; hence, computing the optimal policy, which requires solving an intractable problem, can be avoided.
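
Voting from a random attentional policy can be sketched independently of the recognition network: classify each glimpse, accumulate class votes, and track accuracy as a function of the number of glimpses. The noisy-oracle per-glimpse classifier below is a placeholder for illustration, not the paper's model.

```python
import numpy as np

# Sketch of voting from a random attentional policy: each glimpse is classified
# independently and class votes are accumulated; accuracy is then a function of
# how many glimpses are taken and typically saturates after a few glimpses.

rng = np.random.default_rng(0)
n_images, n_classes, max_glimpses = 200, 10, 12
true_labels = rng.integers(0, n_classes, size=n_images)

def glimpse_prediction(label):
    """Placeholder per-glimpse classifier: right 40% of the time, random otherwise."""
    return label if rng.random() < 0.4 else rng.integers(0, n_classes)

votes = np.zeros((n_images, n_classes), dtype=int)
for g in range(1, max_glimpses + 1):
    for i, y in enumerate(true_labels):
        votes[i, glimpse_prediction(y)] += 1           # accumulate votes across glimpses
    acc = np.mean(votes.argmax(axis=1) == true_labels)
    print(f"glimpses={g:2d}  accuracy={acc:.3f}")
```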


2014 IEEE Symposium on Computational Intelligence in Dynamic and Uncertain Environments (CIDUE) | 2014

Learning features and their transformations from natural videos

Jayanta K. Dutta; Bonny Banerjee

Learning features invariant to arbitrary transformations in the data is a requirement for any recognition system, biological or artificial. It is now widely accepted that simple cells in the primary visual cortex respond to features while the complex cells respond to features invariant to different transformations. We present a novel two-layered feedforward neural model that learns features in the first layer by spatial spherical clustering and invariance to transformations in the second layer by temporal spherical clustering. Learning occurs in an online and unsupervised manner following the Hebbian rule. When exposed to natural videos acquired by a camera mounted on a cat's head, the first- and second-layer neurons in our model develop simple and complex cell-like receptive field properties. The model can predict by learning lateral connections among the first-layer neurons. A topographic map of the spatial features emerges when the flow of activation between first-layer neurons that fire in close temporal proximity decays exponentially with the distance between them, thereby minimizing the pooling length in an online manner simultaneously with feature learning.
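
A compact sketch of the two-layer idea follows, with layer 1 clustering individual frames (spatial) and layer 2 clustering short windows of layer-1 activations (temporal), both by winner-take-all spherical clustering. The random input stream, layer sizes and window length are placeholders, not the paper's configuration.

```python
import numpy as np

# Sketch of the two-layer model: layer 1 performs spatial spherical clustering of
# frames (simple-cell-like features); layer 2 performs temporal spherical clustering
# of short windows of layer-1 activations (complex-cell-like invariance). Updates
# follow a winner-take-all Hebbian rule, as in the text; all sizes are illustrative.

rng = np.random.default_rng(0)
dim, n1, n2, win, lr = 64, 32, 16, 4, 0.1

def init(n, d):
    w = rng.standard_normal((n, d))
    return w / np.linalg.norm(w, axis=1, keepdims=True)

def wta_update(W, x, lr):
    x = x / (np.linalg.norm(x) + 1e-12)
    k = int(np.argmax(W @ x))                           # winner-take-all
    W[k] = W[k] + lr * x                                # Hebbian pull toward the input
    W[k] /= np.linalg.norm(W[k])
    return W @ x                                        # layer activation vector

W1, W2 = init(n1, dim), init(n2, n1 * win)
buffer = []
for _ in range(5000):                                   # stream of frames (random stand-in)
    a1 = wta_update(W1, rng.standard_normal(dim), lr)   # layer 1: spatial clustering
    buffer.append(a1)
    if len(buffer) == win:
        wta_update(W2, np.concatenate(buffer), lr)      # layer 2: temporal clustering
        buffer = []
```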


2014 IEEE Symposium on Computational Intelligence in Brain Computer Interfaces (CIBCI) | 2014

Abnormal event detection in EEG imaging - Comparing predictive and model-based approaches

Jayanta K. Dutta; Bonny Banerjee; Roman Ilin; Robert Kozma

The detection of abnormal/unusual events based on dynamically varying spatial data has been of great interest in many real-world applications. Detecting abnormal events is a challenging task because they occur rarely and are very difficult to predict or reconstruct. Here we address the detection of a propagating phase gradient in the sequence of brain images obtained from EEG arrays. We compare two alternative methods of abnormal event detection: one based on prediction using a linear dynamical system, the other a model-based algorithm using an expectation-maximization approach. The comparison identifies the pros and cons of the two methods; moreover, it helps to develop an integrated and robust algorithm for monitoring cognitive behaviors, with potential applications including brain-computer interfaces (BCI).
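
The prediction-based detector can be sketched as a linear dynamical system fit to the frame sequence, with frames whose one-step prediction residual is unusually large flagged as abnormal. Fitting the transition matrix by least squares and the 3-sigma flag are illustrative simplifications, not the paper's exact method.

```python
import numpy as np

# Sketch of the prediction-based detector only: a linear dynamical system
# x_{t+1} = A x_t is fit to the frame sequence, and frames whose prediction
# residual is unusually large are flagged as abnormal events.

rng = np.random.default_rng(0)
T, d = 400, 16
X = np.cumsum(0.1 * rng.standard_normal((T, d)), axis=0)   # stand-in for EEG frames
X[250] += 5.0                                               # injected "abnormal" frame

A, *_ = np.linalg.lstsq(X[:-1], X[1:], rcond=None)          # one-step linear predictor
residuals = np.linalg.norm(X[1:] - X[:-1] @ A, axis=1)      # prediction errors
flagged = np.where(residuals > residuals.mean() + 3 * residuals.std())[0] + 1
print("abnormal frames:", flagged)
```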

Collaboration


Dive into Jayanta K. Dutta's collaboration.

Top Co-Authors


Roman Ilin

Air Force Research Laboratory
