Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Vatsal Sharan is active.

Publication


Featured research published by Vatsal Sharan.


Global Communications Conference | 2013

Energy efficient optimal node-source localization using mobile beacon in ad-hoc sensor networks

Sudhir Kumar; Vatsal Sharan; Rajesh M. Hegde

In this paper, a method to localize nodes using a single mobile beacon and the principle of maximum power reception is proposed. Optimal positioning of the mobile beacon for minimum energy consumption is also discussed. In contrast to existing methods, node localization requires the prior locations of only three nodes. There is no need for synchronization, since there is only one mobile anchor and each node communicates only with the anchor node. The method is also not constrained by a fixed sensor geometry. Localization is performed in a distributed fashion at each sensor node. Experiments on node-source localization are conducted by deploying sensors in an ad-hoc manner in both outdoor and indoor environments. The localization results indicate a reasonable performance improvement over conventional methods.
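As a rough illustration of localizing a node from beacon power measurements (a minimal sketch under an assumed log-distance path-loss model, not the paper's exact maximum-power-reception algorithm), the snippet below converts received powers at known beacon stops into ranges and fits the node position by least squares; all parameter values are made up for the example.

```python
import numpy as np
from scipy.optimize import least_squares

def rss_to_distance(p_rx_dbm, p0_dbm=-40.0, n=2.0):
    # Invert the log-distance model: P_rx = P0 - 10 * n * log10(d).
    return 10.0 ** ((p0_dbm - np.asarray(p_rx_dbm)) / (10.0 * n))

def localize(beacon_stops, rss_dbm):
    # Fit the node position to the ranges implied by the received powers.
    d = rss_to_distance(rss_dbm)
    residuals = lambda x: np.linalg.norm(beacon_stops - x, axis=1) - d
    return least_squares(residuals, beacon_stops.mean(axis=0)).x

# Three beacon stops (the minimum for a 2-D fix) and a known node position,
# used here to synthesize noise-free received powers for the check.
stops = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
node = np.array([3.0, 4.0])
rss = -40.0 - 20.0 * np.log10(np.linalg.norm(stops - node, axis=1))
print(localize(stops, rss))  # ~ [3. 4.]
```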


Asilomar Conference on Signals, Systems and Computers | 2013

Localization of acoustic beacons using iterative null beamforming over ad-hoc wireless sensor networks

Vatsal Sharan; Sudhir Kumar; Rajesh M. Hegde

In this paper, an iterative method to localize and separate multiple audio beacons using the principles of null beamforming is proposed. In contrast to standard methods, source separation is performed optimally by placing a null on every other source while obtaining an estimate of a particular source. The method is also not constrained by a fixed sensor geometry, as is the case with general beamforming methods; the wireless sensor nodes can therefore be deployed in any random geometry as required. Experiments are performed to estimate both the location and the power spectral density of the separated sources. The experimental results indicate that the method is suited to ad-hoc, flexible, and low-cost wireless sensor network deployments.
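As a simplified sketch of null beamforming (not the paper's iterative algorithm), the snippet below solves for array weights with unit gain toward one source and nulls toward the others. The uniform linear array and narrowband steering model are assumptions made for brevity; the paper explicitly allows arbitrary geometries.

```python
import numpy as np

def steering(theta, n_mics=8, spacing=0.5):
    # Narrowband steering vector for a uniform linear array;
    # spacing is in wavelengths.
    k = np.arange(n_mics)
    return np.exp(-2j * np.pi * spacing * k * np.sin(theta))

def null_beam_weights(thetas, target):
    # Solve a_k^H w = 1 for the target direction and a_k^H w = 0 for all
    # others; lstsq returns the minimum-norm weights meeting the constraints.
    A = np.stack([steering(t) for t in thetas]).conj()
    g = np.zeros(len(thetas), dtype=complex)
    g[target] = 1.0
    w, *_ = np.linalg.lstsq(A, g, rcond=None)
    return w

thetas = np.deg2rad([0.0, 30.0, -45.0])
w = null_beam_weights(thetas, target=0)
for t in thetas:
    print(abs(w.conj() @ steering(t)))  # ~1 toward the target, ~0 at the nulls
```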


Communication Systems and Networks | 2013

Multiple source localization using randomly distributed wireless sensor nodes

Vatsal Sharan; Sudhir Kumar; Rajesh M. Hegde

In this paper, we present a method to localize multiple sources using randomly distributed wireless sensor nodes. The principle of maximum power collection is used to obtain beamforming weights that combine the source signal constructively at the sensor outputs. The beamforming weights yield the time difference of arrival (TDOA) for each source, from which the source location is subsequently computed using a hyperbolic estimator. Results show that the method successfully localizes multiple sources in noise.
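To make the TDOA-to-location step concrete, here is a minimal sketch: delays are estimated from the peak of a cross-correlation, and the hyperbolic range-difference equations are solved by nonlinear least squares. The geometry and signal model are illustrative assumptions; in the paper, the TDOAs come from the maximum-power beamforming weights rather than pairwise correlation.

```python
import numpy as np
from scipy.optimize import least_squares

C = 343.0  # speed of sound in m/s

def tdoa(ref, sig, fs):
    # Delay of `sig` relative to `ref`, from the peak of the cross-correlation.
    lag = np.argmax(np.correlate(sig, ref, mode="full")) - (len(ref) - 1)
    return lag / fs

def locate(sensors, tdoas):
    # Solve ||x - s_i|| - ||x - s_0|| = C * tdoa_i for the source position x.
    def residuals(x):
        r = np.linalg.norm(sensors - x, axis=1)
        return (r[1:] - r[0]) - C * np.asarray(tdoas)
    return least_squares(residuals, sensors.mean(axis=0)).x

# Synthetic check: four sensors and delays derived from a known source.
sensors = np.array([[0.0, 0.0], [8.0, 0.0], [0.0, 8.0], [8.0, 8.0]])
src = np.array([2.0, 5.0])
r = np.linalg.norm(sensors - src, axis=1)
print(locate(sensors, (r[1:] - r[0]) / C))  # ~ [2. 5.]
```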


Symposium on the Theory of Computing | 2018

Prediction with a short memory

Vatsal Sharan; Sham M. Kakade; Percy Liang; Gregory Valiant

We consider the problem of predicting the next observation given a sequence of past observations, and consider the extent to which accurate prediction requires complex algorithms that explicitly leverage long-range dependencies. Perhaps surprisingly, our positive results show that for a broad class of sequences, there is an algorithm that predicts well on average, and bases its predictions only on the most recent few observations together with a set of simple summary statistics of the past observations. Specifically, we show that for any distribution over observations, if the mutual information between past observations and future observations is upper bounded by I, then a simple Markov model over the most recent I/ε observations obtains expected KL error ε (and hence ℓ1 error √ε) with respect to the optimal predictor that has access to the entire past and knows the data-generating distribution. For a Hidden Markov Model with n hidden states, I is bounded by log n, a quantity that does not depend on the mixing time, and we show that the trivial prediction algorithm based on the empirical frequencies of length-O(log n/ε) windows of observations achieves this error, provided the length of the sequence is d^Ω(log n/ε), where d is the size of the observation alphabet. We also establish that this result cannot be improved upon, even for the class of HMMs, in the following two senses. First, for HMMs with n hidden states, a window length of log n/ε is information-theoretically necessary to achieve expected KL error ε, or ℓ1 error √ε. Second, the d^Θ(log n/ε) samples required to accurately estimate the Markov model when observations are drawn from an alphabet of size d are necessary for any computationally tractable learning/prediction algorithm, assuming the hardness of strongly refuting a certain class of CSPs.
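The "trivial prediction algorithm based on the empirical frequencies" of short windows has a very short implementation. The sketch below estimates P(next | last L observations) from counts of length-(L + 1) windows; the add-one smoothing is an assumption added here so the predictor is well-defined on unseen contexts.

```python
from collections import Counter, defaultdict

def fit_window_predictor(seq, L, alphabet):
    # Count length-(L + 1) windows: context of L symbols -> next symbol.
    counts = defaultdict(Counter)
    for i in range(len(seq) - L):
        counts[tuple(seq[i:i + L])][seq[i + L]] += 1

    def predict(ctx):
        c = counts[tuple(ctx[-L:])]
        total = sum(c.values()) + len(alphabet)  # add-one smoothing
        return {a: (c[a] + 1) / total for a in alphabet}

    return predict

# A period-3 sequence is predicted almost perfectly from windows of length 2.
predict = fit_window_predictor("abcabcabcabcabc", L=2, alphabet="abc")
print(predict("ab"))  # 'c' gets almost all of the probability mass
```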


International Symposium on Information Theory | 2014

Large deviation property of waiting times for Markov and mixing processes

Vatsal Sharan; Rakesh K. Bansal

In this work, we study the asymptotic properties of the waiting time until the opening string of one realization of a process first appears in an independent realization of the same or of a different process. We first establish that the normalized waiting time between two independent realizations of a single source obeys the large deviation property for a class of mixing processes. Using the method of Markov types, we then extend the result to the case where the two sequences are realizations of distinct irreducible and aperiodic Markov sources.
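A small simulation makes the quantity under study concrete: the waiting time W_n until the first n symbols of one realization appear in an independent realization. The i.i.d. binary source below is a simplification (the paper treats Markov and mixing processes); for it, (1/n) log W_n concentrates near the entropy, a classical result of Wyner and Ziv, and the paper analyzes the large deviations of this quantity.

```python
import numpy as np

rng = np.random.default_rng(0)
p = np.array([0.7, 0.3])      # i.i.d. binary source
n = 10
H = -(p * np.log2(p)).sum()   # entropy in bits

def waiting_time(max_len=200_000):
    # W_n: first position at which x_1..x_n occurs in an independent y.
    x = rng.choice(2, size=n, p=p).astype(np.uint8)
    y = rng.choice(2, size=max_len, p=p).astype(np.uint8)
    idx = y.tobytes().find(x.tobytes())
    return idx + 1 if idx >= 0 else None

samples = [waiting_time() for _ in range(100)]
rate = np.mean([np.log2(w) / n for w in samples if w])
print(f"(1/n) log2 W_n ~ {rate:.2f}  vs  H ~ {H:.2f}")
```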


Very Large Data Bases | 2018

Moment-based quantile sketches for efficient high cardinality aggregation queries

Edward Gan; Jialin Ding; Kai Sheng Tai; Vatsal Sharan; Peter Bailis

Interactive analytics increasingly involves querying for quantiles over sub-populations of high-cardinality datasets. Data processing engines such as Druid and Spark use mergeable summaries to estimate quantiles, but summary merge times can be a bottleneck during aggregation. We show how a compact and efficiently mergeable quantile sketch can support aggregation workloads. This data structure, which we refer to as the moments sketch, operates with a small memory footprint (200 bytes) and computationally efficient (50 ns) merges by tracking only a set of summary statistics, notably the sample moments. We demonstrate how quantiles can be estimated efficiently and practically from these statistics using the method of moments and the maximum entropy principle, and show how the use of a cascade further improves query time for threshold predicates. Empirical evaluation on real-world datasets shows that the moments sketch can achieve less than 1 percent error with 15 times lower merge overhead than comparable summaries, improving query time in the MacroBase engine by up to 7 times and in the Druid engine by up to 60 times.
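The summary structure itself is simple enough to sketch: a fixed-size accumulator of the count, min, max, and the first k power sums, merged by elementwise addition. The sketch below shows only that mergeable core; the maximum-entropy solve that turns moments into quantile estimates in the paper is omitted.

```python
import numpy as np

class MomentsSketch:
    # Fixed-size accumulator: count, min, max, and the first k power sums.
    def __init__(self, k=4):
        self.k, self.n = k, 0
        self.min, self.max = np.inf, -np.inf
        self.power_sums = np.zeros(k + 1)  # sums of x^0 .. x^k

    def add(self, xs):
        xs = np.asarray(xs, dtype=float)
        self.n += len(xs)
        self.min = min(self.min, xs.min())
        self.max = max(self.max, xs.max())
        self.power_sums += np.array([np.sum(xs ** i) for i in range(self.k + 1)])
        return self

    def merge(self, other):
        # Merging is elementwise addition: cheap, associative, order-free.
        self.n += other.n
        self.min = min(self.min, other.min)
        self.max = max(self.max, other.max)
        self.power_sums += other.power_sums
        return self

# Two partitions sketched independently, then merged at query time.
a = MomentsSketch().add(np.random.default_rng(0).normal(size=10_000))
b = MomentsSketch().add(np.random.default_rng(1).normal(size=10_000))
m = a.merge(b)
print(m.n, m.power_sums[1] / m.n)  # total count and the sample mean (~0)
```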


International Conference on Management of Data | 2018

Sketching Linear Classifiers over Data Streams

Kai Sheng Tai; Vatsal Sharan; Peter Bailis; Gregory Valiant

We introduce a new sub-linear space sketch, the Weight-Median Sketch, for learning compressed linear classifiers over data streams while supporting the efficient recovery of large-magnitude weights in the model. This enables memory-limited execution of several statistical analyses over streams, including online feature selection, streaming data explanation, relative deltoid detection, and streaming estimation of pointwise mutual information. Unlike related sketches that capture the most frequently occurring features (or items) in a data stream, the Weight-Median Sketch captures the features that are most discriminative of one stream (or class) compared to another. The Weight-Median Sketch adopts the core data structure used in the Count-Sketch, but, instead of sketching counts, it captures sketched gradient updates to the model parameters. We provide a theoretical analysis that establishes recovery guarantees for batch and online learning, and demonstrate empirical improvements in memory-accuracy trade-offs over alternative memory-budgeted methods, including count-based sketches and feature hashing.
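A simplified sketch of the core idea follows: a Count-Sketch table whose cells accumulate gradient updates to a logistic model rather than counts, with a weight read back as the median of its signed cells. The hashing (Python's built-in hash as a stand-in for proper universal hash families) and the training loop are simplifications, not the paper's exact algorithm.

```python
import numpy as np

class WeightMedianSketch:
    # Count-Sketch-style table whose cells accumulate gradient updates.
    def __init__(self, width=1024, depth=5, lr=0.1):
        self.width, self.depth, self.lr = width, depth, lr
        self.table = np.zeros((depth, width))

    # hash() stands in for universal hash families; salts separate the two.
    def _bucket(self, row, j): return hash((row, j)) % self.width
    def _sign(self, row, j):   return 1.0 if hash((j, row, 1)) & 1 else -1.0

    def weight(self, j):
        # Recover a weight as the median of its signed cells across rows.
        return float(np.median([self._sign(r, j) * self.table[r, self._bucket(r, j)]
                                for r in range(self.depth)]))

    def update(self, x, y):
        # One SGD step of logistic loss; x is a sparse {feature: value} dict.
        margin = y * sum(v * self.weight(j) for j, v in x.items())
        g = -y / (1.0 + np.exp(margin))  # d(logistic loss)/d(margin)
        for j, v in x.items():
            for r in range(self.depth):
                self.table[r, self._bucket(r, j)] -= self.lr * g * v * self._sign(r, j)

# Toy stream: feature 7 is strongly predictive of the class label, so its
# large-magnitude weight should be recoverable from the sketch.
sketch = WeightMedianSketch()
rng = np.random.default_rng(0)
for _ in range(2000):
    y = float(rng.choice([-1.0, 1.0]))
    x = {7: y + 0.1 * rng.normal(), int(rng.integers(0, 1000)): rng.normal()}
    sketch.update(x, y)
print(sketch.weight(7))  # large positive weight recovered
```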


International Conference on Machine Learning | 2017

Orthogonalized ALS: A Theoretically Principled Tensor Decomposition Algorithm for Practical Use

Vatsal Sharan; Gregory Valiant


Neural Information Processing Systems | 2017

Learning Overcomplete HMMs

Vatsal Sharan; Sham M. Kakade; Percy Liang; Gregory Valiant


Neural Information Processing Systems | 2018

A Spectral View of Adversarially Robust Features

Shivam Garg; Vatsal Sharan; Brian Hu Zhang; Gregory Valiant

Collaboration


Dive into Vatsal Sharan's collaborations.

Top Co-Authors

Rajesh M. Hegde
Indian Institute of Technology Kanpur

Sudhir Kumar
Indian Institute of Technology Patna

Sham M. Kakade
University of Washington