Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Akshay Soni is active.

Publication


Featured research published by Akshay Soni.


IEEE Transactions on Information Theory | 2014

On the Fundamental Limits of Recovering Tree Sparse Vectors From Noisy Linear Measurements

Akshay Soni; Jarvis D. Haupt

Recent breakthrough results in compressive sensing (CS) have established that many high dimensional signals can be accurately recovered from a relatively small number of non-adaptive linear observations, provided that the signals possess a sparse representation in some basis. Subsequent efforts have shown that the performance of CS can be improved by exploiting additional structure in the locations of the nonzero signal coefficients during inference or by utilizing some form of data-dependent adaptive measurement focusing during the sensing process. To the best of our knowledge, our own previous work was the first to establish the potential benefits that can be achieved when fusing the notions of adaptive sensing and structured sparsity. In that work, we examined the task of support recovery from noisy linear measurements, and established that an adaptive sensing strategy specifically tailored to signals that are tree-sparse can significantly outperform adaptive and non-adaptive sensing strategies that are agnostic to the underlying structure. In this paper, we establish fundamental performance limits for the task of support recovery of tree-sparse signals from noisy measurements, in settings where measurements may be obtained either non-adaptively (using a randomized Gaussian measurement strategy motivated by initial CS investigations) or by any adaptive sensing strategy. Our main results here imply that the adaptive tree sensing procedure analyzed in our previous work is nearly optimal, in the sense that no other sensing and estimation strategy can perform fundamentally better for identifying the support of tree-sparse signals.
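
The adaptive tree sensing idea referenced above can be pictured with a small sketch: sensing effort is concentrated on the children of coefficients that already appear significant, so measurements follow the tree structure of the support. The direct per-coefficient observation model, the threshold, and the number of repeated looks below are illustrative simplifications of mine, not the procedure analyzed in the paper.

    import numpy as np

    def adaptive_tree_sensing(x, threshold, repeats=4, rng=None):
        # Toy top-down adaptive sensing for a vector x whose entries are
        # arranged as a binary tree in level order (node i has children
        # 2*i+1 and 2*i+2). Each "measurement" is a direct noisy look at a
        # single coefficient (unit-variance Gaussian noise); the threshold
        # and the number of repeats are illustrative choices.
        rng = np.random.default_rng() if rng is None else rng
        n = x.size
        support, frontier = [], [0]           # start at the root of the tree
        while frontier:
            i = frontier.pop()
            y = x[i] + rng.standard_normal(repeats)
            if abs(y.mean()) > threshold:     # coefficient looks significant
                support.append(i)
                frontier.extend(c for c in (2 * i + 1, 2 * i + 2) if c < n)
        return sorted(support)                # only descendants of significant nodes were probed

    # Toy signal supported on a root-to-leaf path of a depth-4 tree.
    x = np.zeros(15)
    x[[0, 1, 3, 7]] = 5.0
    print(adaptive_tree_sensing(x, threshold=2.0))   # typically [0, 1, 3, 7]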


Asilomar Conference on Signals, Systems and Computers | 2011

Efficient adaptive compressive sensing using sparse hierarchical learned dictionaries

Akshay Soni; Jarvis D. Haupt

Recent breakthrough results in compressed sensing (CS) have established that many high dimensional objects can be accurately recovered from a relatively small number of non-adaptive linear projection observations, provided that the objects possess a sparse representation in some basis. Subsequent efforts have shown that the performance of CS can be improved by exploiting the structure in the location of the non-zero signal coefficients (structured sparsity) or using some form of online measurement focusing (adaptivity) in the sensing process. In this paper we examine a powerful hybrid of these two techniques. First, we describe a simple adaptive sensing procedure and show that it is a provably effective method for acquiring sparse signals that exhibit structured sparsity characterized by tree-based coefficient dependencies. Next, employing techniques from sparse hierarchical dictionary learning, we show that representations exhibiting the appropriate form of structured sparsity can be learned from collections of training data. The combination of these techniques results in an effective and efficient adaptive compressive acquisition procedure.
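
As a rough illustration of the learning half of this pipeline, the sketch below fits an ordinary sparse dictionary to training data using scikit-learn's DictionaryLearning as a stand-in; the paper instead uses sparse hierarchical dictionary learning, which additionally organizes the atoms so that the resulting coefficients exhibit tree-structured sparsity. The training data, dimensions, and regularization below are hypothetical.

    import numpy as np
    from sklearn.decomposition import DictionaryLearning

    rng = np.random.default_rng(0)
    X_train = rng.standard_normal((200, 64))     # hypothetical collection of training signals

    # Stand-in for hierarchical dictionary learning: a plain sparse dictionary fit.
    dl = DictionaryLearning(n_components=32, alpha=1.0, max_iter=20,
                            transform_algorithm="lasso_lars", random_state=0)
    codes = dl.fit_transform(X_train)            # sparse coefficients, one row per signal
    D = dl.components_                           # learned dictionary, shape (32, 64)

    # A new signal that is sparse in D can then be acquired adaptively by
    # probing coefficients in the learned basis (e.g., with a tree-based
    # procedure such as the one sketched earlier), rather than with
    # non-adaptive random projections.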


IEEE Transactions on Information Theory | 2016

Noisy Matrix Completion Under Sparse Factor Models

Akshay Soni; Swayambhoo Jain; Jarvis D. Haupt; Stefano Gonella

This paper examines a general class of noisy matrix completion tasks, where the goal is to estimate a matrix from observations obtained at a subset of its entries, each of which is subject to random noise or corruption. Our specific focus is on settings where the matrix to be estimated is well-approximated by a product of two (a priori unknown) matrices, one of which is sparse. Such structural models, referred to here as sparse factor models, have been widely used, for example, in subspace clustering applications, as well as in contemporary sparse modeling and dictionary learning tasks. Our main theoretical contributions are estimation error bounds for sparsity-regularized maximum likelihood estimators for problems of this form, which are applicable to a number of different observation noise or corruption models. Several specific implications are examined, including scenarios where observations are corrupted by additive Gaussian noise or additive heavier-tailed (Laplace) noise, Poisson-distributed observations, and highly quantized (e.g., one-bit) observations. We also propose a simple algorithmic approach based on the alternating direction method of multipliers for these tasks, and provide experimental evidence to support our error analyses.
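
The estimator described above can be pictured with a simplified numerical sketch. The code below uses alternating proximal-gradient steps with an l1 penalty on the sparse factor for the Gaussian-noise case; this is an illustrative stand-in, not the ADMM algorithm proposed in the paper, and the rank, step size, and penalty weight are arbitrary choices.

    import numpy as np

    def sparse_factor_completion(Y, mask, r=5, lam=0.1, step=1e-2, iters=500, rng=None):
        # Minimize 0.5*||mask*(D@A - Y)||_F^2 + lam*||A||_1 over D (dense, m x r)
        # and A (sparse, r x n) by alternating gradient steps, with a
        # soft-threshold (l1 proximal) step on A. Simplified stand-in for the
        # paper's ADMM-based method.
        rng = np.random.default_rng() if rng is None else rng
        m, n = Y.shape
        D = rng.standard_normal((m, r)) / np.sqrt(r)
        A = np.zeros((r, n))
        for _ in range(iters):
            R = mask * (D @ A - Y)                  # residual on observed entries
            D -= step * (R @ A.T)                   # gradient step in D
            R = mask * (D @ A - Y)                  # refresh residual
            A -= step * (D.T @ R)                   # gradient step in A
            A = np.sign(A) * np.maximum(np.abs(A) - step * lam, 0.0)   # soft threshold
        return D @ A

    # Toy usage: complete a 50 x 40 matrix observed on roughly 40% of its entries.
    rng = np.random.default_rng(1)
    M = rng.standard_normal((50, 5)) @ (rng.standard_normal((5, 40)) * (rng.random((5, 40)) < 0.3))
    mask = (rng.random(M.shape) < 0.4).astype(float)
    Y = mask * (M + 0.01 * rng.standard_normal(M.shape))
    M_hat = sparse_factor_completion(Y, mask, rng=rng)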


International Symposium on Information Theory | 2014

Estimation error guarantees for Poisson denoising with sparse and structured dictionary models

Akshay Soni; Jarvis D. Haupt

Poisson processes are commonly used models for describing discrete arrival phenomena arising, for example, in photon-limited scenarios in low-light and infrared imaging, astronomy, and nuclear medicine applications. In this context, several recent efforts have evaluated Poisson denoising methods that utilize contemporary sparse modeling and dictionary learning techniques designed to exploit and leverage (local) shared structure in the images being estimated. This paper establishes a theoretical foundation for such procedures. Specifically, we formulate sparse and structured dictionary-based Poisson denoising methods as constrained maximum likelihood estimation strategies, and establish performance bounds for their mean-square estimation error using the framework of complexity penalized maximum likelihood analyses.
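
In rough notation (the symbols and the specific penalty form below are my own, chosen only to be consistent with the abstract), the estimators analyzed take approximately the form of a penalized Poisson maximum likelihood problem over dictionary-based representations:

    \widehat{x} \;\in\; \operatorname*{arg\,min}_{\substack{x = D a,\; x \succeq 0,\; a \in \mathcal{A}}}
    \left\{ \sum_{i=1}^{n}\bigl(x_i - y_i \log x_i\bigr) \;+\; \lambda\,\mathrm{pen}(a) \right\},

where y is the vector of observed Poisson counts, D is a (possibly learned) dictionary, the set A encodes the sparsity or structure constraints on the coefficients, pen(.) is a complexity penalty, and the first term is the Poisson negative log-likelihood up to constants. The paper's bounds then control the mean-square error of such estimators.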


International Conference on Acoustics, Speech, and Signal Processing | 2012

Learning sparse representations for adaptive compressive sensing

Akshay Soni; Jarvis D. Haupt

Breakthrough results in compressive sensing (CS) have shown that high dimensional signals (vectors) can often be accurately recovered from a relatively small number of non-adaptive linear projection observations, provided that they possess a sparse representation in some basis. Subsequent efforts have established that the reconstruction performance of CS can be improved by employing additional prior signal knowledge, such as dependency in the location of the non-zero signal coefficients (structured sparsity) or by collecting measurements sequentially and adaptively, in order to focus measurements into the proper subspace where the unknown signal resides. In this paper, we examine a powerful hybrid of adaptivity and structure. We identify a particular form of structured sparsity that is amenable to adaptive sensing, and using concepts from sparse hierarchical dictionary learning we demonstrate that sparsifying dictionaries exhibiting the appropriate form of structured sparsity can be learned from a collection of training data. The combination of these techniques (structured dictionary learning and adaptive sensing) results in an effective and efficient adaptive compressive acquisition approach which we refer to as LASeR (Learning Adaptive Sensing Representations).
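
To make the "focusing" step concrete, the snippet below shows one way a single adaptive measurement could be realized once a sparsifying dictionary has been learned: project onto the dictionary atom for the tree node currently being probed. The orthonormal-dictionary assumption, the noise model, and the averaging are simplifications of mine, not details taken from the paper.

    import numpy as np

    def coefficient_probe(D, x, node, repeats=4, noise_std=1.0, rng=None):
        # Measure the coefficient of x on atom `node` of an (assumed)
        # orthonormal dictionary D by projecting onto that atom. In a
        # LASeR-style acquisition, which nodes get probed is decided on the
        # fly by the tree-structured adaptive procedure; here it is given.
        rng = np.random.default_rng() if rng is None else rng
        a = D[node]                                   # measurement vector = dictionary atom
        y = a @ x + noise_std * rng.standard_normal(repeats)
        return y.mean()                               # averaged noisy projection

    # Toy usage with a random orthonormal dictionary (QR of a Gaussian matrix).
    rng = np.random.default_rng(2)
    D = np.linalg.qr(rng.standard_normal((16, 16)))[0]
    x = D[3] * 4.0 + D[5] * 2.0                       # signal sparse in D
    print(coefficient_probe(D, x, node=3, rng=rng))   # close to 4.0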


Asilomar Conference on Signals, Systems and Computers | 2013

Compressive measurement designs for estimating structured signals in structured clutter: A Bayesian Experimental Design approach

Swayambhoo Jain; Akshay Soni; Jarvis D. Haupt

This work considers an estimation task in compressive sensing, where the goal is to estimate an unknown signal from compressive measurements that are corrupted by additive pre-measurement noise (interference, or “clutter”) as well as post-measurement noise, in the specific setting where some (perhaps limited) prior knowledge on the signal, interference, and noise is available. The specific aim here is to devise a strategy for incorporating this prior information into the design of an appropriate compressive measurement strategy. Here, the prior information is interpreted as statistics of a prior distribution on the relevant quantities, and an approach based on Bayesian Experimental Design is proposed. Experimental results on synthetic data demonstrate that the proposed approach outperforms traditional random compressive measurement designs, which are agnostic to the prior information, as well as several other knowledge-enhanced sensing matrix designs based on more heuristic notions.
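
One concrete way to read "prior information as statistics of a prior distribution" is the linear-Gaussian setting sketched below, where the Bayesian MMSE (trace of the posterior covariance) can be evaluated in closed form and used to compare candidate measurement matrices. The model, the criterion, and the particular "knowledge-enhanced" design here are assumptions for illustration; the paper's design procedure may differ in its details.

    import numpy as np

    def bayes_mmse(A, Sigma_x, Sigma_c, sigma2):
        # Bayesian MMSE (trace of the posterior covariance) for the
        # linear-Gaussian model y = A(x + c) + w, with signal x ~ N(0, Sigma_x),
        # clutter c ~ N(0, Sigma_c), and noise w ~ N(0, sigma2 * I).
        S = A @ (Sigma_x + Sigma_c) @ A.T + sigma2 * np.eye(A.shape[0])
        G = Sigma_x @ A.T @ np.linalg.solve(S, A @ Sigma_x)
        return np.trace(Sigma_x - G)

    # Toy comparison: a random design vs. one aligned with the signal prior.
    rng = np.random.default_rng(3)
    n, m = 32, 8
    U = np.linalg.qr(rng.standard_normal((n, n)))[0]
    Sigma_x = U[:, :4] @ np.diag([10., 8., 6., 4.]) @ U[:, :4].T + 1e-3 * np.eye(n)
    Sigma_c = 0.5 * np.eye(n)
    A_rand = rng.standard_normal((m, n)) / np.sqrt(n)
    A_prior = U[:, :m].T                   # rows span the high-variance signal directions
    print(bayes_mmse(A_rand, Sigma_x, Sigma_c, 0.1),
          bayes_mmse(A_prior, Sigma_x, Sigma_c, 0.1))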


IEEE Global Conference on Signal and Information Processing | 2014

Error bounds for maximum likelihood matrix completion under sparse factor models

Akshay Soni; Swayambhoo Jain; Jarvis D. Haupt; Stefano Gonella

This paper examines a general class of matrix completion tasks where entrywise observations of the matrix are subject to random noise or corruption. Our particular focus here is on settings where the matrix to be estimated follows a sparse factor model, in the sense that it may be expressed as the product of two matrices, one of which is sparse. We analyze the performance of a sparsity-penalized maximum likelihood approach to such problems to provide a general-purpose estimation result applicable to any of a number of noise/corruption models, and describe its implications in two stylized scenarios: one characterized by additive Gaussian noise, and the other by highly quantized one-bit observations. We also provide some supporting empirical evidence to validate our theoretical claims in the Gaussian setting.
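
For reference, a sparsity-penalized maximum likelihood estimator of the kind analyzed here can be written, in my own notation, as follows; the probit link in the one-bit case is just one concrete choice consistent with "highly quantized one-bit observations."

    \widehat{M} = \widehat{D}\widehat{A}, \qquad
    (\widehat{D},\widehat{A}) \in \operatorname*{arg\,min}_{D,\,A}
    \Bigl\{ -\log p\bigl(Y_{\Omega} \mid (DA)_{\Omega}\bigr) + \lambda \|A\|_{0} \Bigr\},

with, for additive Gaussian noise,

    -\log p(Y_{\Omega} \mid X_{\Omega}) = \frac{1}{2\sigma^{2}} \sum_{(i,j)\in\Omega} (Y_{ij} - X_{ij})^{2} + \mathrm{const},

and, for one-bit observations Y_{ij} = \mathrm{sign}(X_{ij} + W_{ij}) with W_{ij} \sim N(0, \sigma^{2}),

    -\log p(Y_{\Omega} \mid X_{\Omega}) = -\sum_{(i,j)\in\Omega} \log \Phi\bigl(Y_{ij} X_{ij} / \sigma\bigr),

where \Phi is the standard normal CDF and \Omega is the set of observed entries.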


Meeting of the Association for Computational Linguistics | 2017

DocTag2Vec: An Embedding Based Multi-label Learning Approach for Document Tagging.

Sheng Chen; Akshay Soni; Aasish Pappu; Yashar Mehdad

Tagging news articles or blog posts with relevant tags from a collection of predefined ones is coined as document tagging in this work. Accurate tagging of articles can benefit several downstream applications such as recommendation and search. In this work, we propose a novel yet simple approach called DocTag2Vec to accomplish this task. We substantially extend Word2Vec and Doc2Vec, two popular models for learning distributed representations of words and documents. In DocTag2Vec, we simultaneously learn the representation of words, documents, and tags in a joint vector space during training, and employ the simple k-nearest neighbor search to predict tags for unseen documents. In contrast to previous multi-label learning methods, DocTag2Vec directly deals with raw text instead of provided feature vectors, and in addition, enjoys advantages like the learning of tag representations and the ability of handling newly created tags. To demonstrate the effectiveness of our approach, we conduct experiments on several datasets and show promising results against state-of-the-art methods.
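
The prediction step described above (k-nearest neighbor search in the joint space) is simple enough to sketch. Everything here other than the k-NN idea, including the cosine similarity, the dimensions, and the made-up embeddings, is illustrative rather than taken from the paper, and how the document and tag vectors are trained (the extended Word2Vec/Doc2Vec objective) is not shown.

    import numpy as np

    def predict_tags(doc_vec, tag_vecs, tag_names, k=3):
        # Rank tags by cosine similarity between the document embedding and
        # each tag embedding in the joint space, and return the top k.
        sims = tag_vecs @ doc_vec / (
            np.linalg.norm(tag_vecs, axis=1) * np.linalg.norm(doc_vec) + 1e-12)
        top = np.argsort(-sims)[:k]
        return [tag_names[i] for i in top]

    # Toy usage with made-up 4-dimensional embeddings.
    tag_names = ["sports", "politics", "finance", "technology"]
    tag_vecs = np.array([[1., 0, 0, 0], [0, 1., 0, 0], [0, 0, 1., 0], [0, 0, 0, 1.]])
    doc_vec = np.array([0.1, 0.0, 0.7, 0.6])                 # embedding of an unseen document
    print(predict_tags(doc_vec, tag_vecs, tag_names, k=2))   # ['finance', 'technology']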


knowledge discovery and data mining | 2017

Online Ranking with Constraints: A Primal-Dual Algorithm and Applications to Web Traffic-Shaping

Parikshit Shah; Akshay Soni; Troy Chevalier



International Symposium on Information Theory | 2017

Noisy inductive matrix completion under sparse factor models

Akshay Soni; Troy Chevalier; Swayambhoo Jain


Collaboration


Dive into Akshay Soni's collaborations.

Top Co-Authors

Aasish Pappu

Carnegie Mellon University