Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Gautam Kamath is active.

Publication


Featured research published by Gautam Kamath.


Foundations of Computer Science | 2016

Robust Estimators in High Dimensions without the Computational Intractability

Ilias Diakonikolas; Gautam Kamath; Daniel M. Kane; Jerry Zheng Li; Ankur Moitra; Alistair Stewart

We study high-dimensional distribution learning in an agnostic setting where an adversary is allowed to arbitrarily corrupt an ε-fraction of the samples. Such questions have a rich history spanning statistics, machine learning, and theoretical computer science. Even in the most basic settings, the only known approaches are either computationally inefficient or lose dimension-dependent factors in their error guarantees. This raises the following question: Is high-dimensional agnostic distribution learning even possible, algorithmically? In this work, we obtain the first computationally efficient algorithms for agnostically learning several fundamental classes of high-dimensional distributions: (1) a single Gaussian, (2) a product distribution on the hypercube, (3) mixtures of two product distributions (under a natural balancedness condition), and (4) mixtures of k Gaussians with identical spherical covariances. All our algorithms achieve error that is independent of the dimension, and in many cases depends nearly linearly on the fraction of adversarially corrupted samples. Moreover, we develop a general recipe for detecting and correcting corruptions in high dimensions that may be applicable to many other problems.
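
The "detect and correct" recipe can be made concrete with the spectral-filtering idea: corruptions large enough to shift the empirical mean must also inflate the empirical covariance in some direction, so one can repeatedly examine the top eigenvector and discard the most extreme points along it. Below is a minimal Python sketch of such a filter for robust mean estimation; the function name, stopping threshold, and removal rule are illustrative choices, not the exact procedure or constants from the paper.

```python
import numpy as np

def filtered_mean(X, eps, threshold=2.0, max_iter=100):
    """Illustrative spectral filter for robust mean estimation.

    X   : (n, d) array of samples, an eps-fraction possibly corrupted.
    Idea: outliers that shift the mean must inflate the covariance
    along some direction, so prune the most extreme points along the
    top eigenvector until the spectrum looks clean.  The threshold and
    removal fraction below are placeholders, not the paper's constants.
    """
    X = np.asarray(X, dtype=float).copy()
    for _ in range(max_iter):
        mu = X.mean(axis=0)
        centered = X - mu
        cov = centered.T @ centered / len(X)
        eigvals, eigvecs = np.linalg.eigh(cov)
        if eigvals[-1] <= threshold:        # spectrum looks clean: stop
            return mu
        # Score each point by its squared projection on the top direction.
        scores = (centered @ eigvecs[:, -1]) ** 2
        k = max(1, int(eps * len(X)))       # drop the top eps-fraction
        X = X[np.argsort(scores)[:-k]]
    return X.mean(axis=0)
```

Roughly speaking, whenever the top eigenvalue is abnormally large, a pass of this loop removes more corrupted points than clean ones, which is the invariant that makes dimension-independent error guarantees possible.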


Symposium on the Theory of Computing | 2016

A size-free CLT for Poisson multinomials and its applications

Constantinos Daskalakis; Anindya De; Gautam Kamath; Christos Tzamos

An (n,k)-Poisson Multinomial Distribution (PMD) is the distribution of the sum of n independent random vectors supported on the set B_k = {e_1, …, e_k} of standard basis vectors in ℝ^k. We show that any (n,k)-PMD is poly(k/σ)-close in total variation distance to the (appropriately discretized) multi-dimensional Gaussian with the same first two moments, removing the dependence on n from the Central Limit Theorem of Valiant and Valiant. Interestingly, our CLT is obtained by bootstrapping the Valiant–Valiant CLT itself through the structural characterization of PMDs shown in recent work by Daskalakis, Kamath, and Tzamos. In turn, our stronger CLT can be leveraged to obtain an efficient PTAS for approximate Nash equilibria in anonymous games, significantly improving the state of the art, and matching qualitatively the running-time dependence on n and 1/ε of the best known algorithm for two-strategy anonymous games. Our new CLT also enables the construction of covers for the set of (n,k)-PMDs, which are proper and whose size is shown to be essentially optimal. Our cover construction combines our CLT with the Shapley–Folkman theorem and recent sparsification results for Laplacian matrices by Batson, Spielman, and Srivastava. Our cover size lower bound is based on an algebraic-geometric construction. Finally, leveraging the structural properties of the Fourier spectrum of PMDs, we show that these distributions can be learned from O_k(1/ε²) samples in poly_k(1/ε) time, removing the quasi-polynomial dependence of the running time on 1/ε from prior work.
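
Stated compactly (a paraphrase: μ and Σ are the mean and covariance of the PMD, σ² its minimum variance in any direction, ⌊·⌉ the appropriate discretization; the exact polynomial exponents are in the paper):

```latex
% Size-free CLT: the bound depends only on k and sigma, not on n.
d_{\mathrm{TV}}\Bigl( P,\; \bigl\lfloor \mathcal{N}(\mu, \Sigma) \bigr\rceil \Bigr)
  \;\le\; \mathrm{poly}(k/\sigma)
  \qquad \text{for every } (n,k)\text{-PMD } P.
```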


Foundations of Computer Science | 2015

On the Structure, Covering, and Learning of Poisson Multinomial Distributions

Constantinos Daskalakis; Gautam Kamath; Christos Tzamos

An (n,k)-Poisson Multinomial Distribution (PMD) is the distribution of the sum of n independent random vectors supported on the set B_k = {e_1, …, e_k} of standard basis vectors in ℝ^k. We prove a structural characterization of these distributions, showing that, for all ε > 0, any (n,k)-Poisson multinomial random vector is ε-close, in total variation distance, to the sum of a discretized multidimensional Gaussian and an independent (poly(k/ε), k)-Poisson multinomial random vector. Our structural characterization extends the multi-dimensional CLT of Valiant and Valiant by simultaneously applying to all approximation requirements ε. In particular, it overcomes factors depending on log n and, importantly, on the minimum eigenvalue of the PMD's covariance matrix. We use our structural characterization to obtain an ε-cover, in total variation distance, of the set of all (n,k)-PMDs, significantly improving on the cover size of Daskalakis and Papadimitriou and obtaining the same qualitative dependence of the cover size on n and ε as their k = 2 cover. We further exploit this structure to show that (n,k)-PMDs can be learned to within ε in total variation distance from Õ_k(1/ε²) samples, which is near-optimal in terms of its dependence on ε and independent of n. In particular, our result generalizes the single-dimensional result of Daskalakis, Diakonikolas, and Servedio for Poisson binomials to arbitrary dimension. Finally, as a corollary of our results on PMDs, we give an Õ_k(1/ε²)-sample algorithm for learning (n,k)-sums of independent integer random variables (SIIRVs), which is near-optimal for constant k.
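
In symbols, the structural characterization reads (a paraphrase of the statement above, with X the (n,k)-Poisson multinomial random vector and Y independent of the discretized Gaussian):

```latex
\forall\, \varepsilon > 0:\quad
d_{\mathrm{TV}}\bigl( X,\; \lfloor \mathcal{N}(\mu, \Sigma) \rceil + Y \bigr) \;\le\; \varepsilon,
\qquad Y \sim \bigl(\mathrm{poly}(k/\varepsilon),\, k\bigr)\text{-PMD}.
```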


International Workshop on Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques | 2015

A Chasm Between Identity and Equivalence Testing with Conditional Queries

Jayadev Acharya; Clément L. Canonne; Gautam Kamath

A recent model for property testing of probability distributions (Chakraborty et al., ITCS 2013; Canonne et al., SICOMP 2015) enables tremendous savings in the sample complexity of testing algorithms by allowing them to condition the sampling on subsets of the domain. In particular, Canonne, Ron, and Servedio (SICOMP 2015) showed that, in this setting, testing identity of an unknown distribution D (whether D = D* for an explicitly known D*) can be done with a constant number of queries, independent of the support size.
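
The conditional-query model is easy to simulate, which makes its power concrete: a COND oracle receives a query set S and returns a draw from the distribution restricted to S. A minimal Python sketch, with the class name and the zero-mass convention chosen here purely for illustration:

```python
import numpy as np

class CondOracle:
    """Illustrative conditional-sampling (COND) oracle.

    Wraps a distribution p over {0, ..., n-1}; cond(S) returns a
    sample from p conditioned on landing in the query set S.
    """
    def __init__(self, p, seed=None):
        self.p = np.asarray(p, dtype=float)
        self.rng = np.random.default_rng(seed)

    def cond(self, S):
        S = np.fromiter(S, dtype=int)
        mass = self.p[S].sum()
        if mass == 0.0:
            # Conventions vary across papers; fall back to uniform on S.
            return int(self.rng.choice(S))
        return int(self.rng.choice(S, p=self.p[S] / mass))
```

Testers gain their power from queries on very small sets: comparing the conditional probabilities of two domain elements i and j, for example, takes only repeated cond({i, j}) calls, which is how query bounds independent of the support size become possible.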


International Symposium on Information Theory | 2015

Adaptive estimation in weighted group testing

Jayadev Acharya; Clément L. Canonne; Gautam Kamath


Neural Information Processing Systems | 2015

Optimal testing for properties of distributions

Jayadev Acharya; Constantinos Daskalakis; Gautam Kamath


Conference on Learning Theory | 2014

Faster and Sample Near-Optimal Algorithms for Proper Learning Mixtures of Gaussians

Constantinos Daskalakis; Gautam Kamath


International Conference on Machine Learning | 2017

Being Robust (in High Dimensions) Can Be Practical

Ilias Diakonikolas; Gautam Kamath; Daniel M. Kane; Jerry Li; Ankur Moitra; Alistair Stewart


Symposium on Discrete Algorithms | 2018

Testing Ising models

Constantinos Daskalakis; Nishanth Dikkala; Gautam Kamath


Symposium on Discrete Algorithms | 2018

Robustly learning a Gaussian: getting optimal error, efficiently

Ilias Diakonikolas; Gautam Kamath; Daniel M. Kane; Jerry Li; Ankur Moitra; Alistair Stewart


Collaboration


Dive into Gautam Kamath's collaborations.

Top Co-Authors

Constantinos Daskalakis
Massachusetts Institute of Technology

Jayadev Acharya
Massachusetts Institute of Technology

Alistair Stewart
University of Southern California

Christos Tzamos
Massachusetts Institute of Technology

Daniel M. Kane
University of California

Ilias Diakonikolas
University of Southern California

Jerry Li
Massachusetts Institute of Technology

Ankur Moitra
Massachusetts Institute of Technology

Nishanth Dikkala
Massachusetts Institute of Technology