
Publication


Featured research published by Vanika Singhal.


International Conference on Neural Information Processing | 2016

Deep Dictionary Learning vs Deep Belief Network vs Stacked Autoencoder: An Empirical Analysis

Vanika Singhal; Anupriya Gogna; Angshul Majumdar

A recent work introduced the concept of deep dictionary learning. The first level is a dictionary learning stage where the inputs are the training data and the outputs are the dictionary and the learned coefficients. In subsequent levels of deep dictionary learning, the learned coefficients from the previous level act as inputs. This is an unsupervised representation learning technique. In this work we empirically compare and contrast it with similar deep representation learning techniques: the deep belief network and the stacked autoencoder. We examine two aspects: first, the robustness of each learning tool in the presence of noise; second, its robustness with respect to variations in the number of training samples. The experiments have been carried out on several benchmark datasets. We find that deep dictionary learning is the most robust method.
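To make the layering concrete, the following is a minimal Python/NumPy sketch of greedy deep dictionary learning: each level factorizes its input as X ≈ DZ by alternating least squares, and the resulting coefficients feed the next level. The alternating least-squares updates, the tanh used between levels, and all sizes are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

def learn_layer(X, n_atoms, n_iter=50, rng=None):
    """One level of dictionary learning, X ~= D @ Z, via alternating least squares."""
    rng = rng or np.random.default_rng(0)
    D = rng.standard_normal((X.shape[0], n_atoms))
    for _ in range(n_iter):
        Z = np.linalg.lstsq(D, X, rcond=None)[0]         # coefficient update
        D = np.linalg.lstsq(Z.T, X.T, rcond=None)[0].T   # dictionary update
        D /= np.linalg.norm(D, axis=0, keepdims=True)    # keep atoms unit-norm
    Z = np.linalg.lstsq(D, X, rcond=None)[0]             # coefficients for final D
    return D, Z

def deep_dictionary_learning(X, layer_sizes):
    """Greedy deep dictionary learning: each level's coefficients feed the next."""
    dictionaries, Z = [], X
    for n_atoms in layer_sizes:
        D, Z = learn_layer(Z, n_atoms)
        dictionaries.append(D)
        Z = np.tanh(Z)   # assumed inter-level nonlinearity
    return dictionaries, Z   # deepest-level features for downstream tasks

# toy usage: 64-dimensional data, three levels of dictionaries
X = np.random.default_rng(1).standard_normal((64, 500))
dicts, features = deep_dictionary_learning(X, [48, 32, 16])
```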


Neural Processing Letters | 2018

Majorization Minimization Technique for Optimally Solving Deep Dictionary Learning

Vanika Singhal; Angshul Majumdar

The concept of deep dictionary learning (DDL) has been proposed recently. Unlike shallow dictionary learning, which learns a single level of dictionary to represent the data, it uses multiple layers of dictionaries. So far, the problem could only be solved in a greedy fashion: a single layer of dictionary was learned at each stage, with the coefficients from the previous layer acting as inputs to the subsequent layer (only the first layer used the training samples as inputs). This was not optimal; there was feedback from the shallower to the deeper layers but not the other way around. This work proposes an optimal solution to DDL whereby all the layers of dictionaries are solved simultaneously, using the Majorization Minimization approach. Experiments carried out on benchmark datasets show that optimal learning indeed improves over greedy piecemeal learning. Comparison with other unsupervised deep learning tools (stacked denoising autoencoder, deep belief network, contractive autoencoder and K-sparse autoencoder) shows that our method surpasses them in both accuracy and speed.
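As a pointer to how Majorization Minimization operates, the sketch below applies it to the coefficient subproblem min_Z ||X - DZ||_F^2: the quadratic is majorized at the current iterate by a separable surrogate whose curvature alpha is the largest eigenvalue of D^T D, and minimizing the surrogate yields the Landweber-style update shown. This is only the single-subproblem flavour of MM, not the paper's joint all-layer solver.

```python
import numpy as np

def mm_coefficient_update(X, D, Z, n_iter=100):
    """Majorization Minimization for min_Z ||X - D @ Z||_F^2.

    Each iteration minimizes a separable quadratic surrogate that touches
    the objective at the current Z, so the objective decreases monotonically.
    """
    alpha = np.linalg.norm(D, 2) ** 2           # largest eigenvalue of D.T @ D
    for _ in range(n_iter):
        Z = Z + (D.T @ (X - D @ Z)) / alpha     # minimizer of the surrogate
    return Z

# toy usage
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 32))
X = rng.standard_normal((64, 200))
Z = mm_coefficient_update(X, D, np.zeros((32, 200)))
```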


Data Compression Conference | 2017

How to Train Your Neural Network with Dictionary Learning

Vanika Singhal; Shikha Singh; Angshul Majumdar

Currently there are two predominant ways to train deep neural networks. The first uses the restricted Boltzmann machine (RBM) and the second the autoencoder. RBMs are stacked in layers to form a deep belief network (DBN); the final representation layer is attached to the target to complete the deep neural network. Autoencoders are nested one inside the other to form stacked autoencoders; once the stacked autoencoder is learnt, the decoder portion is detached and the target is attached to the deepest layer of the encoder to form the deep neural network. This work proposes a new approach to training deep neural networks that uses dictionary learning as the basic building block; the idea is to use the features from the shallower layer as inputs for training the next deeper layer. One can use any type of dictionary learning (unsupervised, supervised, discriminative, etc.) as the basic unit up to the pre-final layer. In the final layer one needs to use the label-consistent dictionary learning formulation for classification. We compare our proposed framework with existing state-of-the-art deep learning techniques on benchmark problems and are always within the top 10 results. On the practical problems of age and gender classification, we are better than the best known techniques.
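The recipe can be paraphrased in a few lines. The sketch below reuses the deep_dictionary_learning helper from the first sketch for the layer-wise features and, as a stand-in for the label-consistent dictionary learning the paper prescribes for the final layer, fits a plain ridge classifier on the deepest coefficients; the ridge stand-in and the regularizer lam are assumptions.

```python
import numpy as np

def train_with_dictionary_features(X, Y_onehot, layer_sizes, lam=1e-2):
    """Layer-wise dictionary-learning features, then a linear read-out.

    `deep_dictionary_learning` is the greedy helper sketched earlier; the
    ridge read-out below merely stands in for the paper's label-consistent
    dictionary learning in the final layer.
    """
    dicts, Z = deep_dictionary_learning(X, layer_sizes)   # features, layer by layer
    k = Z.shape[0]
    W = np.linalg.solve(Z @ Z.T + lam * np.eye(k), Z @ Y_onehot.T).T
    return dicts, W

def predict(W, Z_deepest):
    """Class = argmax of the linear read-out on deepest-layer coefficients."""
    return np.argmax(W @ Z_deepest, axis=0)
```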


Proceedings of the Fourth ACM IKDD Conferences on Data Sciences | 2017

Noisy Deep Dictionary Learning

Vanika Singhal; Angshul Majumdar

In a recent work, the concept of deep dictionary learning was proposed. Learning a single level of dictionary is a well-researched topic in the image processing and computer vision communities. In deep dictionary learning, the first level proceeds like standard dictionary learning; in subsequent layers the (scaled) output coefficients from the previous layer are used as inputs for dictionary learning. This is an unsupervised deep learning approach. The features from the final / deepest layer are used as representations for subsequent analysis and classification. The seminal paper on stacked denoising autoencoders showed that robust deep models can be learnt when noisy augmented data, rather than clean data alone, is used to train stacked autoencoders. We adopt this idea in the deep dictionary learning framework: instead of using only clean data, we augment the training dataset by adding noise, which improves robustness. Experimental evaluation on various benchmark datasets for classification and clustering shows that our proposal yields significant improvements.
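The augmentation step itself is tiny. A minimal sketch, assuming zero-mean Gaussian noise with a hand-picked sigma and a fixed number of noisy copies (both illustrative knobs, not values taken from the paper):

```python
import numpy as np

def augment_with_noise(X, sigma=0.1, copies=2, rng=None):
    """Denoising-style augmentation: append noisy copies of the clean
    columns before running deep dictionary learning on the result."""
    rng = rng or np.random.default_rng(0)
    noisy = [X + sigma * rng.standard_normal(X.shape) for _ in range(copies)]
    return np.hstack([X] + noisy)

# X_aug = augment_with_noise(X)   # then learn the dictionaries on X_aug
```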


IEEE Transactions on Geoscience and Remote Sensing | 2017

Discriminative Robust Deep Dictionary Learning for Hyperspectral Image Classification

Vanika Singhal; Hemant Kumar Aggarwal; Snigdha Tariyal; Angshul Majumdar

This paper proposes a new framework for deep learning that has been particularly tailored for hyperspectral image classification. We learn multiple levels of dictionaries in a robust fashion. The last layer is discriminative: it learns a linear classifier. The training proceeds greedily; a single level of dictionary is learned at a time, and its coefficients are used to train the next level. The coefficients from the final level are used for classification. Robustness is incorporated by minimizing absolute deviations instead of the more popular Euclidean norm. The inbuilt robustness helps combat the mixed noise (Gaussian and sparse) present in hyperspectral images. Results show that our proposed technique outperforms all the other deep learning methods compared: deep belief network, stacked autoencoder, and convolutional neural network. The experiments have been carried out on both benchmark deep learning data sets (MNIST, CIFAR-10, and Street View House Numbers) and real hyperspectral imaging data sets.
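To illustrate what minimizing absolute deviations buys, here is a sketch of the robust coefficient fit min_Z ||X - DZ||_1 via iteratively reweighted least squares (IRLS), a standard device for l1 data terms; the abstract does not specify the paper's actual solver, so the IRLS choice and the eps safeguard are assumptions.

```python
import numpy as np

def l1_coefficients(X, D, n_iter=20, eps=1e-6):
    """Robust fit min_Z ||X - D @ Z||_1 by IRLS: large residuals
    (sparse noise) receive small weights and stop dominating the fit."""
    Z = np.linalg.lstsq(D, X, rcond=None)[0]           # least-squares start
    for _ in range(n_iter):
        W = 1.0 / np.maximum(np.abs(X - D @ Z), eps)   # IRLS weights
        for j in range(X.shape[1]):                    # weighted LS per column
            Dw = D * W[:, [j]]                         # rows scaled by weights
            Z[:, j] = np.linalg.solve(D.T @ Dw, Dw.T @ X[:, j])
    return Z
```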


International Symposium on Neural Networks | 2017

Class-Wise Deep Dictionary Learning

Vanika Singhal; Prerna Khurana; Angshul Majumdar

In this work we propose a new framework for combined feature extraction and classification. The basic idea stems from sparse representation based classification, wherein the training samples from each class are assumed to form a basis for representing that class. Later studies learned a basis for each class using dictionary learning; these were shallow techniques where only one level of dictionary was learnt. In this work we propose to learn multiple levels of dictionaries for each class. We test our technique on benchmark deep learning datasets. We compare our proposed method with deep techniques (stacked autoencoder, deep belief network) and shallow techniques (support vector machine and label consistent dictionary learning); ours yields the best results overall. We also carry out an empirical analysis with perturbations and find that our method is more robust than other deep learning techniques in the presence of different kinds of noise, missing features and varying amounts of training data.
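The class-wise idea boils down to one dictionary (stack) per class, with classification by reconstruction error. In the sketch below, a truncated SVD basis stands in for each class's learned deep dictionary purely to keep the example short; the paper learns multiple dictionary levels per class instead.

```python
import numpy as np

def fit_classwise(X, y, n_atoms, n_classes):
    """One basis per class; a truncated SVD stands in for the learned
    per-class dictionaries of the paper."""
    dicts = []
    for c in range(n_classes):
        Xc = X[:, y == c]                                  # samples of class c
        U, _, _ = np.linalg.svd(Xc, full_matrices=False)
        dicts.append(U[:, :n_atoms])                       # orthonormal class basis
    return dicts

def classify(x, dicts):
    """Assign x to the class whose basis reconstructs it best."""
    errors = [np.linalg.norm(x - D @ (D.T @ x)) for D in dicts]
    return int(np.argmin(errors))
```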


IEEE Access | 2018

Correction to “Semi-Supervised Deep Blind Compressed Sensing for Analysis and Reconstruction of Biomedical Signals From Compressive Measurements”

Vanika Singhal; Angshul Majumdar; Rabab K. Ward

In this paper, the objective is to classify biomedical signals from their compressive measurements. The problem arises when compressed sensing (CS) is used for energy-efficient acquisition and transmission of such signals in wireless body area networks; after reconstruction, the signal is analyzed via machine learning techniques. This paper proposes to carry out joint reconstruction and analysis in a single framework; the reconstruction ability is obtained inherently from our formulation. We put forth a new technique called semi-supervised deep blind CS that combines the analytic power of deep learning with the reconstruction ability of CS. Experimental results on EEG classification show that the proposed technique outperforms the state-of-the-art paradigm of CS reconstruction followed by deep learning classification.
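A toy rendering of the blind-CS ingredient: with measurements Y = AX and an unknown dictionary D, one can alternate between coefficients for Y ≈ (AD)Z and a minimum-norm dictionary update, after which the reconstruction X_hat = DZ falls out of the factorization. The sizes, sensing matrix, and plain alternation below are assumptions; the paper's semi-supervised formulation additionally couples a label term on the coefficients, which is only indicated here in a comment.

```python
import numpy as np

rng = np.random.default_rng(0)
m, d, k, n = 32, 128, 24, 300
A = rng.standard_normal((m, d)) / np.sqrt(m)   # known sensing matrix
Y = A @ rng.standard_normal((d, n))            # compressive measurements only

# blind CS: factor Y ~= A @ D @ Z with D unknown, by simple alternation
D = rng.standard_normal((d, k))
for _ in range(30):
    Z = np.linalg.lstsq(A @ D, Y, rcond=None)[0]   # coefficients from Y alone
    D = np.linalg.pinv(A) @ Y @ np.linalg.pinv(Z)  # minimum-norm dictionary update

X_hat = D @ Z   # reconstruction is inherent in the factorization
# a classifier trained on the labelled columns of Z would supply the
# semi-supervised analysis part of the paper's formulation
```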


International Symposium on Neural Networks | 2017

Noisy deep dictionary learning: Application to Alzheimer's Disease classification

Angshul Majumdar; Vanika Singhal

A recent work introduced the concept of deep dictionary learning. In deep dictionary learning, the first level proceeds like standard dictionary learning; in subsequent layers the (scaled) output coefficients from the previous layer are used as inputs for dictionary learning. This is an unsupervised deep learning approach. The features from the final / deepest layer are employed for subsequent analysis and classification. The seminal paper on stacked denoising autoencoders showed that robust deep models can be learnt when noisy data, rather than clean data alone, is used to train stacked autoencoders. We adopt this idea in the deep dictionary learning framework: instead of using only clean data, we augment the training dataset by adding noise, which improves robustness. Experimental evaluation on benchmark deep learning datasets and on the real-world problem of Alzheimer's Disease (AD) classification shows that our proposal yields considerable improvement.


International Symposium on Neural Networks | 2018

Supervised Deep Dictionary Learning for Single Label and Multi-Label Classification

Vanika Singhal; Angshul Majumdar


IEEE Transactions on Smart Grid | 2018

Simultaneous Detection of Multiple Appliances From Smart-Meter Measurements via Multi-Label Consistent Deep Dictionary Learning and Deep Transform Learning

Vanika Singhal; Jyoti Maggu; Angshul Majumdar

Collaboration


Dive into Vanika Singhal's collaborations.

Top Co-Authors

Angshul Majumdar (Indraprastha Institute of Information Technology)
Shikha Singh (Indraprastha Institute of Information Technology)
Anupriya Gogna (Indraprastha Institute of Information Technology)
Hemant Kumar Aggarwal (Indraprastha Institute of Information Technology)
Jyoti Maggu (Indraprastha Institute of Information Technology)
Prerna Khurana (Indraprastha Institute of Information Technology)
Snigdha Tariyal (Indraprastha Institute of Information Technology)
Rabab K. Ward (University of British Columbia)