Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Naveen Ramakrishnan is active.

Publication


Featured research published by Naveen Ramakrishnan.


international symposium on neural networks | 2013

A fast proximal method for convolutional sparse coding

Rakesh Chalasani; Jose C. Principe; Naveen Ramakrishnan

Sparse coding, an unsupervised feature learning technique, is often used as a basic building block to construct deep networks. Convolutional sparse coding has been proposed in the literature to overcome the poor scalability of sparse coding techniques to large images. In this paper, we propose an efficient algorithm, based on the fast iterative shrinkage-thresholding algorithm (FISTA), for learning sparse convolutional features. Through numerical experiments, we show that the proposed convolutional extension of FISTA not only converges faster than existing methods but also generalizes easily to other cost functions.
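A minimal sketch of the FISTA iteration the paper builds on, applied to the simpler (non-convolutional) lasso problem in plain Python; `fista_lasso`, the toy 2x2 system, and all parameter choices are illustrative, not taken from the paper:

```python
import math

def soft(v, t):
    """Elementwise soft-thresholding: the proximal operator of t*||.||_1."""
    return [math.copysign(max(abs(vi) - t, 0.0), vi) for vi in v]

def fista_lasso(A, b, lam, L, iters=200):
    """FISTA for min_x 0.5*||A x - b||^2 + lam*||x||_1.
    A is a list of rows; L is a Lipschitz constant of A^T A."""
    n = len(A[0])
    x = [0.0] * n
    y = list(x)          # momentum point
    t = 1.0
    for _ in range(iters):
        # gradient of the smooth term at y: A^T (A y - b)
        r = [sum(Ai[j] * y[j] for j in range(n)) - bi for Ai, bi in zip(A, b)]
        g = [sum(A[i][j] * r[i] for i in range(len(A))) for j in range(n)]
        # proximal gradient step, then Nesterov momentum update
        x_new = soft([y[j] - g[j] / L for j in range(n)], lam / L)
        t_new = (1.0 + math.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = [x_new[j] + ((t - 1.0) / t_new) * (x_new[j] - x[j]) for j in range(n)]
        x, t = x_new, t_new
    return x

# With A = I the minimizer is the soft-thresholded b, i.e. [2.0, 0.0]
x = fista_lasso([[1.0, 0.0], [0.0, 1.0]], [3.0, 0.5], lam=1.0, L=1.0)
```

The convolutional variant in the paper replaces the dense matrix products with convolutions; the shrinkage and momentum structure of the iteration stays the same.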


IEEE Journal of Selected Topics in Signal Processing | 2011

Gossip-Based Algorithm for Joint Signature Estimation and Node Calibration in Sensor Networks

Naveen Ramakrishnan; Emre Ertin; Randolph L. Moses

We consider the problem of joint sensor calibration and target signature estimation using distributed measurements over a large-scale sensor network. Specifically, we develop a new Distributed Signature Learning and Node Calibration algorithm (D-SLANC), which simultaneously estimates the source signal's signature and the calibration parameters local to each sensor node. We model the sensor network as a connected graph and make use of gossip-based distributed consensus to update the estimates at each iteration of the algorithm. The algorithm is robust to link and node failures. We prove convergence of the algorithm to the centralized data-pooling solution. We compare performance with the Cramér-Rao bound, and study the scaling performance of both the CR bound and the D-SLANC algorithm. The algorithm has application to classification, target signature estimation, and blind calibration in large sensor networks.
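The gossip primitive that D-SLANC builds on can be sketched in plain Python. The randomized pairwise-averaging protocol below is the generic gossip consensus step, not the paper's full joint estimation algorithm, and all names are illustrative:

```python
import random

def gossip_average(values, edges, rounds=2000, seed=0):
    """Randomized pairwise gossip: at each step one random edge (i, j)
    is activated and both endpoints replace their values with the pair
    average. The sum is conserved, so all nodes converge to the global
    mean using only local exchanges -- no fusion center required."""
    rng = random.Random(seed)
    x = list(values)
    for _ in range(rounds):
        i, j = rng.choice(edges)
        avg = 0.5 * (x[i] + x[j])
        x[i] = x[j] = avg
    return x

# Path graph on 4 nodes; the global mean of [4, 0, 8, 0] is 3.0
x = gossip_average([4.0, 0.0, 8.0, 0.0], [(0, 1), (1, 2), (2, 3)])
```

Because each update touches only one edge, the protocol degrades gracefully when links or nodes fail, which is the robustness property the abstract refers to.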


international conference on big data | 2015

ADMM based scalable machine learning on Spark

Sauptik Dhar; Congrui Yi; Naveen Ramakrishnan; Mohak Shah

Most machine learning algorithms involve solving a convex optimization problem. Traditional in-memory convex optimization solvers do not scale well as data grows. This paper identifies a generic convex problem underlying most machine learning algorithms and solves it using the Alternating Direction Method of Multipliers (ADMM). The ADMM formulation reduces to an iterative system of linear equations, which can easily be solved at scale in a distributed fashion. We implement this framework in Apache Spark and compare it with the widely used Machine Learning LIBrary (MLlib) in Apache Spark 1.3.
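To illustrate how an ADMM iteration alternates a linear-system solve with a cheap proximal step, here is a plain-Python ADMM for the lasso; the 2-variable toy problem and helper names are illustrative, and the paper's Spark implementation distributes the linear-algebra steps rather than running them on one machine:

```python
import math

def soft(v, t):
    """Elementwise soft-thresholding (prox of t*||.||_1)."""
    return [math.copysign(max(abs(vi) - t, 0.0), vi) for vi in v]

def solve2(M, r):
    """Solve a 2x2 linear system M x = r (kept explicit for the sketch)."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(M[1][1] * r[0] - M[0][1] * r[1]) / det,
            (M[0][0] * r[1] - M[1][0] * r[0]) / det]

def admm_lasso(A, b, lam, rho=1.0, iters=300):
    """ADMM for min 0.5*||A x - b||^2 + lam*||z||_1  s.t.  x = z.
    The x-update is the linear system (A^T A + rho I) x = A^T b + rho (z - u);
    this is the 'iterative system of linear equations' the abstract mentions."""
    n = 2  # 2-variable sketch so the linear solve stays explicit
    AtA = [[sum(A[k][i] * A[k][j] for k in range(len(A)))
            + (rho if i == j else 0.0) for j in range(n)] for i in range(n)]
    Atb = [sum(A[k][i] * b[k] for k in range(len(A))) for i in range(n)]
    x = [0.0] * n; z = [0.0] * n; u = [0.0] * n
    for _ in range(iters):
        x = solve2(AtA, [Atb[i] + rho * (z[i] - u[i]) for i in range(n)])
        z = soft([x[i] + u[i] for i in range(n)], lam / rho)
        u = [u[i] + x[i] - z[i] for i in range(n)]
    return z

# With A = I the solution is the soft-thresholded b: [2.0, 0.0]
z = admm_lasso([[1.0, 0.0], [0.0, 1.0]], [3.0, 0.5], lam=1.0)
```

Swapping the l1 prox for a different proximal operator changes the model (ridge, SVM hinge, etc.) while the linear-system structure, and hence the distributed solver, stays the same.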


international conference on big data | 2015

Distributed dynamic elastic nets: A scalable approach for regularization in dynamic manufacturing environments

Naveen Ramakrishnan; Rumi Ghosh

In this paper, we focus on the task of learning influential parameters under unsteady and dynamic environments. Such environments often occur in the ramp-up phase of manufacturing. We propose a novel regularization-based framework, called Distributed Dynamic Elastic Nets (DDEN), for this problem and formulate it as a convex optimization objective. Our approach solves the optimization problem using a distributed framework; consequently, it is highly scalable and can easily be applied to very large datasets. We implement an L-BFGS-based solver in the Apache Spark framework. To validate our algorithm, we consider the issue of scrap reduction at an assembly line during the ramp-up phase of a manufacturing plant. Using logistic regression as a sample model, we evaluate the performance of our approach, although extending the proposed regularizer to other classification and regression techniques is straightforward. Through experiments on data collected at a functioning manufacturing plant, we show that the proposed method not only reduces model variance but also helps preserve the relative importance of features under dynamic conditions compared to standard approaches. The experiments further show that the classification performance of DDEN is often better than logistic regression with standard elastic nets on datasets from dynamic and unsteady environments. We are collaborating with manufacturing units to use this process for improving production yields during the ramp-up phase. This work serves as a demonstration of how data mining can be used to solve problems in manufacturing.
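The abstract does not give the exact form of the DDEN regularizer. Assuming it augments the standard elastic net (l1 + squared-l2) with a quadratic pull toward the previous model's weights, which would explain how relative feature importance is preserved as conditions drift, the coordinate-wise proximal operator has a closed form; everything below (the penalty form, `prox_dyn_enet`, all parameters) is an assumption for illustration:

```python
def prox_dyn_enet(v, step, lam1, lam2, mu, w_prev):
    """Closed-form prox of  lam1*|w| + lam2*w^2 + mu*(w - w_prev)^2,
    applied coordinate-wise: a standard elastic net plus a quadratic
    pull toward the previous model's weight w_prev (hypothetical
    'dynamic' term). Setting mu = 0 recovers the plain elastic net."""
    out = []
    for vi, pi in zip(v, w_prev):
        shifted = vi + 2.0 * step * mu * pi          # pull toward w_prev
        thr = step * lam1                            # l1 threshold
        mag = max(abs(shifted) - thr, 0.0)           # soft-threshold
        signed = mag if shifted >= 0 else -mag
        out.append(signed / (1.0 + 2.0 * step * (lam2 + mu)))  # l2 shrink
    return out

# mu = 0, lam2 = 0 degenerates to plain soft-thresholding: 3.0 -> 2.0
a = prox_dyn_enet([3.0], 1.0, 1.0, 0.0, 0.0, [0.0])
# lam1 = lam2 = 0 with a strong mu pulls the weight toward w_prev = 1.0
b = prox_dyn_enet([0.0], 1.0, 0.0, 0.0, 10.0, [1.0])
```

A prox like this would slot directly into a proximal-gradient or ADMM loop, while the paper itself reports using an L-BFGS-based solver on Spark.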


international conference on acoustics, speech, and signal processing | 2017

Deep learning on symbolic representations for large-scale heterogeneous time-series event prediction

Shengdong Zhang; Soheil Bahrampour; Naveen Ramakrishnan; Lukas Schott; Mohak Shah

In this paper, we consider the problem of event prediction with multivariate time series data consisting of heterogeneous (continuous and categorical) variables. The complex dependencies between the variables, combined with the asynchronicity and sparsity of the data, make the event prediction problem particularly challenging. Most state-of-the-art approaches address this either by designing hand-engineered features or by breaking up the problem over homogeneous variates. In this work, we formulate the (rare) event prediction task as a classification problem with a novel asymmetric loss function and propose an end-to-end deep learning algorithm over symbolic representations of time series. Symbolic representations are fed into an embedding layer and a Long Short-Term Memory (LSTM) layer, which are trained to learn discriminative features. We also propose a simple sequence chopping technique to speed up LSTM training on long temporal sequences. Experiments on real-world industrial datasets demonstrate the effectiveness of the proposed approach.
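The abstract does not detail the sequence chopping technique; one plausible minimal sketch is to cut a long sequence into fixed-length, possibly overlapping windows so the LSTM trains on many short sequences in parallel instead of one long one (the function name and overlap policy are illustrative assumptions):

```python
def chop_sequence(seq, window, stride):
    """Split one long temporal sequence into fixed-length windows.
    Overlap (stride < window) limits the context lost at cut points;
    a final tail window is added if the stride would skip the end."""
    chunks = [seq[i:i + window] for i in range(0, len(seq) - window + 1, stride)]
    if chunks and (len(seq) - window) % stride != 0:
        chunks.append(seq[len(seq) - window:])
    return chunks

# A length-10 sequence chopped into windows of 4 with stride 3
chunks = chop_sequence(list(range(10)), window=4, stride=3)
```

Truncating backpropagation at the window boundaries is what yields the speed-up: gradient computation no longer spans the full sequence length.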


arXiv: Learning | 2015

Comparative Study of Caffe, Neon, Theano, and Torch for Deep Learning

Soheil Bahrampour; Naveen Ramakrishnan; Lukas Schott; Mohak Shah


arXiv: Learning | 2016

Comparative Study of Deep Learning Software Frameworks

Soheil Bahrampour; Naveen Ramakrishnan; Lukas Schott; Mohak Shah


Archive | 2012

System And Method For Detection Of High-Interest Events In Video Data

Naveen Ramakrishnan; Iftekhar Naim


Archive | 2014

Personal emergency response system by nonintrusive load monitoring

Roland Klinnert; Naveen Ramakrishnan; Michael Dambier; Felix Maus; Diego Benitez


Archive | 2011

Method for Unsupervised Non-Intrusive Load Monitoring

Naveen Ramakrishnan; Diego Benitez

Collaboration


Dive into Naveen Ramakrishnan's collaborations.
