
Publication


Featured research published by Rakesh Chalasani.


International Symposium on Neural Networks | 2013

A fast proximal method for convolutional sparse coding

Rakesh Chalasani; Jose C. Principe; Naveen Ramakrishnan

Sparse coding, an unsupervised feature learning technique, is often used as a basic building block to construct deep networks. Convolutional sparse coding has been proposed in the literature to overcome the scalability issues of sparse coding techniques on large images. In this paper we propose an efficient algorithm, based on the fast iterative shrinkage-thresholding algorithm (FISTA), for learning sparse convolutional features. Through numerical experiments, we show that the proposed convolutional extension of FISTA not only leads to faster convergence than existing methods but also generalizes easily to other cost functions.
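The following is a minimal sketch, under assumed names and default parameters, of how a convolutional FISTA iteration could look: a gradient step on the reconstruction error computed with convolutions, followed by soft-thresholding and the usual momentum update. It illustrates the technique, not the authors' implementation.

```python
# Illustrative sketch of FISTA for convolutional sparse coding:
# infer feature maps z_k so that image ~ sum_k d_k * z_k, with an
# L1 penalty on the maps. All names and defaults are assumptions.
import numpy as np
from scipy.signal import fftconvolve

def soft_threshold(x, t):
    """Proximal operator of the L1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def conv_fista(image, filters, lam=0.1, step=None, n_iter=100):
    """Infer sparse feature maps for one image given fixed filters."""
    maps = [np.zeros_like(image) for _ in filters]   # feature maps z_k
    aux = [m.copy() for m in maps]                   # FISTA momentum point
    t = 1.0
    if step is None:
        # conservative Lipschitz bound: ||conv(d_k)||_2 <= ||d_k||_1
        step = 1.0 / max(sum(np.abs(f).sum() ** 2 for f in filters), 1e-8)
    for _ in range(n_iter):
        recon = sum(fftconvolve(a, f, mode="same") for a, f in zip(aux, filters))
        resid = recon - image
        new_maps = []
        for a, f in zip(aux, filters):
            # gradient w.r.t. z_k: correlation of the residual with d_k
            grad = fftconvolve(resid, f[::-1, ::-1], mode="same")
            new_maps.append(soft_threshold(a - step * grad, step * lam))
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        aux = [z + ((t - 1.0) / t_new) * (z - z_old)
               for z, z_old in zip(new_maps, maps)]
        maps, t = new_maps, t_new
    return maps
```

Note that the only convolutional ingredients are the forward reconstruction and the flipped-kernel correlation in the gradient; the shrinkage and momentum steps are standard FISTA.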


Proceedings of the IEEE | 2014

Cognitive Architectures for Sensory Processing

Jose C. Principe; Rakesh Chalasani

This paper describes our efforts to design a cognitive architecture for object recognition in video. Unlike most efforts in computer vision, our work takes a Bayesian approach, using a hierarchical, distributed architecture of dynamic processing elements that learns in a self-organizing way to cluster objects in the video input. A biologically inspired innovation is to implement a top-down pathway across layers in the form of causes, effectively creating a bidirectional processing architecture with feedback. To simplify discrimination, overcomplete representations are utilized. Both inference and parameter learning are performed using empirical priors, while imposing appropriate sparseness constraints. Preliminary results show that the cognitive architecture has features that resemble the functional organization of the early visual cortex. One example is given in which top-down connections disambiguate a synthetic video from correlated noise.
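As a toy illustration of the bidirectional processing described above (the symbols D, C, lam, and gamma are assumptions, not the paper's notation), one processing element can infer a sparse state from its input while the cause from the layer above enters as an empirical prior pulling the state toward a top-down prediction:

```python
# Toy sketch: bottom-up data term plus a top-down empirical prior,
# solved by proximal gradient. Fixed step size is a simplification;
# a proper choice would use the Lipschitz constant of the gradient.
import numpy as np

def infer_state(y, D, cause, C, lam=0.1, gamma=0.5, step=0.01, n_iter=200):
    """Minimize 0.5||y - D x||^2 + 0.5*gamma*||x - C u||^2 + lam*||x||_1,
    where u is the cause supplied by the layer above."""
    x = np.zeros(D.shape[1])
    prior = C @ cause                      # top-down prediction of the state
    for _ in range(n_iter):
        grad = D.T @ (D @ x - y) + gamma * (x - prior)
        x = x - step * grad
        x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)  # sparsify
    return x
```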


Neurocomputing | 2015

Self-organizing maps with information theoretic learning

Rakesh Chalasani; Jose C. Principe

The self-organizing map (SOM) is one of the most popular clustering and data visualization algorithms and, since it was first introduced by Kohonen, has evolved into a useful tool in pattern recognition and data mining. However, it is observed that the magnification factor for such mappings deviates from the information-theoretically optimal value of 1 (for the SOM it is 2/3). This can be attributed to the use of the mean square error to adapt the system, which distorts the mapping by oversampling the low-probability regions. In this work, we first discuss the kernel SOM in terms of a similarity measure called the correntropy induced metric (CIM) and empirically show that this can enhance the magnification of the mapping without much increase in the computational complexity of the algorithm. We also show that adapting the SOM in the CIM sense is equivalent to reducing the localized cross information potential, an information-theoretic function that quantifies the similarity between two probability distributions. Using this property we propose a kernel bandwidth adaptation algorithm for Gaussian kernels, with both homoscedastic and heteroscedastic components. We show that the proposed model can achieve a mapping with optimal magnification and can automatically adapt the parameters of the kernel function.
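A minimal sketch of the core update, with illustrative names and parameters: with a Gaussian kernel G_sigma, CIM(x, w) = sqrt(G_sigma(0) - G_sigma(x - w)), and descending this metric re-weights the usual MSE-driven SOM update by the kernel value, so samples far from a node barely move it.

```python
# Illustrative SOM update under the correntropy induced metric (CIM).
# sigma, lr and nbr_width are assumed hyperparameters, not the paper's.
import numpy as np

def gaussian_kernel(e, sigma):
    return np.exp(-np.sum(e ** 2, axis=-1) / (2.0 * sigma ** 2))

def som_cim_step(x, weights, grid, sigma=0.5, lr=0.1, nbr_width=1.0):
    """One SOM update for a single input x.

    weights: (n_nodes, dim) codebook; grid: (n_nodes, 2) lattice coordinates.
    """
    diff = x - weights
    k = gaussian_kernel(diff, sigma)      # kernel similarity to every node
    win = np.argmax(k)                    # winner = node with smallest CIM
    # standard lattice neighbourhood around the winner
    h = np.exp(-np.sum((grid - grid[win]) ** 2, axis=1) / (2.0 * nbr_width ** 2))
    # MSE update re-weighted by the kernel value (the CIM gradient)
    weights += lr * (h * k)[:, None] * diff
    return weights
```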


International Symposium on Neural Networks | 2010

Self-organizing maps with the correntropy induced metric

Rakesh Chalasani; Jose C. Principe

The similarity measure popularly used in Kohonen's self-organizing maps and several of their variants is the mean square error (MSE). It is shown that this leads, in an information-theoretic sense, to a suboptimal distribution of the centers of the map. Here we show that using a similarity measure called the correntropy induced metric (CIM) can lead to a solution with better magnification of the input density. This provides insight into how the type of kernel affects the mapping and under what conditions the SOM with CIM (SOM-CIM) can perform better than the SOM with MSE. We also show that its use in clustering and data visualization can provide better results.


IEEE Transactions on Neural Networks | 2015

Context Dependent Encoding Using Convolutional Dynamic Networks

Rakesh Chalasani; Jose C. Principe

Perception of sensory signals is strongly influenced by their context, both in space and time. In this paper, we propose a novel hierarchical model, called convolutional dynamic networks, that effectively utilizes this contextual information while inferring the representations of the visual inputs. We build this model based on a predictive coding framework and use the idea of empirical priors to incorporate recurrent and top-down connections. These connections endow the model with contextual information, from the temporal domain as well as abstract knowledge from higher layers. To perform inference efficiently in this hierarchical model, we rely on a novel scheme based on a smoothing proximal gradient method. When trained on unlabeled video sequences, the model learns a hierarchy of stable attractors, representing low-level to high-level parts of the objects. We demonstrate that the model effectively utilizes contextual information to produce robust and stable representations for object recognition in video sequences, even in the case of highly corrupted inputs.


International Workshop on Machine Learning for Signal Processing | 2012

Temporal context in object recognition

Rakesh Chalasani; Jose C. Principe

Sparse coding has become a popular way to learn feature representation from the data itself. However, temporal context, when present, can provide useful information and alleviate instability in sparse representation. Here we show that when sparse coding is used in conjunction with a dynamical system, the extracted features can provide better descriptors for time-varying observations. We show a marked improvement in classification performance on COIL-100 and animal datasets using our model. We also propose a simple extension to our model to learn invariant representations.


International Conference on Acoustics, Speech, and Signal Processing | 2014

Dynamic sparse coding with smoothing proximal gradient method

Rakesh Chalasani; Jose C. Principe

In this work we focus on the problem of estimating time-varying sparse signals from a sequence of under-sampled observations. We formulate this as estimating hidden states in a dynamic model and exploit the underlying temporal structure to find a more accurate solution, particularly when the information in the observations is scarce. We propose an optimization procedure based on the smoothing proximal gradient method to estimate these hidden states. We show that the proposed model is efficient and more robust to noise in the system.
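A hedged sketch of one time step, in the spirit of the abstract rather than its exact formulation: the temporal L1 penalty is smoothed (Nesterov smoothing), so its gradient is just a clipped residual, and only the prox of the standard L1 term is needed per iteration. All symbols and defaults are assumptions.

```python
# Illustrative smoothing proximal gradient step for dynamic sparse coding.
# Cost: 0.5||y - A x||^2 + lam1*||x||_1 + lam2*||x - F x_prev||_1,
# with the temporal L1 term replaced by its Nesterov-smoothed surrogate.
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def dynamic_sparse_step(y, A, x_prev, F, lam1=0.1, lam2=0.1,
                        mu=0.01, step=None, n_iter=200):
    """Estimate the sparse state at one time step given the previous one."""
    if step is None:
        L = np.linalg.norm(A, 2) ** 2 + lam2 / mu   # Lipschitz bound
        step = 1.0 / L
    x = x_prev.copy()
    pred = F @ x_prev                               # temporal prediction
    for _ in range(n_iter):
        # smooth part: data term plus the smoothed temporal L1, whose
        # gradient is the componentwise clipped residual
        grad = A.T @ (A @ x - y) + lam2 * np.clip((x - pred) / mu, -1.0, 1.0)
        x = soft_threshold(x - step * grad, step * lam1)
    return x
```

The smoothing parameter mu trades accuracy for speed: a smaller mu tracks the exact temporal penalty more closely but forces a smaller step size.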


International Symposium on Neural Networks | 2011

Sparse analog associative memory via L1-regularization and thresholding

Rakesh Chalasani; Jose C. Principe

The CA3 region of the hippocampus acts as an auto-associative memory and is responsible for the consolidation of episodic memory. Two important characteristics of such a network are the sparsity of the stored patterns and the nonsaturating firing-rate dynamics. To construct such a network, here we use a maximum a posteriori based cost function, regularized with the L1-norm, to change the internal state of the neurons. A linear thresholding function is then used to obtain the desired output firing rate. We show how such a model leads to a more biologically reasonable dynamic model that produces sparse output and recalls stored patterns with good accuracy when the network is presented with a corrupted input.
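A toy recall loop under assumed notation (patterns stored as dictionary columns, which is an illustrative choice, not the paper's model): the internal state takes a gradient step on the L1-regularized MAP cost, and a linear rectified threshold yields the nonsaturating firing rates.

```python
# Illustrative sparse associative recall via L1 regularization and
# thresholding (nonnegative ISTA). P, lam and n_iter are assumptions.
import numpy as np

def recall(y, P, lam=0.05, step=None, n_iter=500):
    """Recall a stored pattern from a corrupted cue y.

    P: (dim, n_patterns) matrix whose columns are stored patterns.
    Returns the recalled pattern and the sparse firing rates.
    """
    if step is None:
        step = 1.0 / (np.linalg.norm(P, 2) ** 2)
    r = np.zeros(P.shape[1])                      # firing rates
    for _ in range(n_iter):
        u = r - step * (P.T @ (P @ r - y))        # internal-state update
        r = np.maximum(u - step * lam, 0.0)       # linear rectified threshold
    return P @ r, r
```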


International Symposium on Neural Networks | 2012

Sequential causal estimation and learning from time-varying images

Rakesh Chalasani; Goktug T. Cinar; Jose C. Principe

Dynamic models are used to model perceptual systems with hierarchies, but most of them assume Gaussian statistics on the underlying causes. In this paper we develop a basic building block for such hierarchical models in which the causes are assumed to be non-Gaussian. We describe a sequential dual estimation framework for inferring the hidden states and unknown causes/inputs while learning the parameters of the model. The algorithm is able to extract bases from a time-varying image sequence that resemble the receptive fields of simple cells in V1. In addition, the dynamical model gives us the ability to deconvolve spatial and temporal changes in the image sequence.
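A rough sketch of one frame of the dual estimation idea, under an assumed linear model y_t = C x_t, x_t = A x_{t-1} + B u_t with sparse causes u_t; the names and update rules below are illustrative, not the paper's exact algorithm.

```python
# Illustrative sequential dual estimation: jointly infer the state x and
# the sparse (non-Gaussian) cause u for one frame, then take a learning
# step on the observation matrix C. All symbols are assumptions.
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def dual_estimation_step(y, x_prev, A, B, C, lam=0.1, step=0.01,
                         n_iter=100, lr_C=1e-3):
    """One frame: infer (x, u), then adapt C on the observation error."""
    x = A @ x_prev                         # predicted state
    u = np.zeros(B.shape[1])               # unknown cause/input
    for _ in range(n_iter):
        err_obs = C @ x - y                          # observation error
        err_dyn = x - (A @ x_prev + B @ u)           # transition error
        x = x - step * (C.T @ err_obs + err_dyn)     # state update
        # cause update: gradient step on the transition term, then an
        # L1 prox to keep the causes sparse
        u = soft_threshold(u + step * (B.T @ err_dyn), step * lam)
    # learning: gradient step on 0.5*||y - C x||^2 w.r.t. C; rows of C
    # trained on image patches would play the role of receptive fields
    C = C - lr_C * np.outer(C @ x - y, x)
    return x, u, C
```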


arXiv: Learning | 2013

Deep Predictive Coding Networks

Rakesh Chalasani; Jose C. Principe
