Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Pravesh Kothari is active.

Publication


Featured research published by Pravesh Kothari.


Architectural Support for Programming Languages and Operating Systems | 2010

A randomized scheduler with probabilistic guarantees of finding bugs

Sebastian Burckhardt; Pravesh Kothari; Madanlal Musuvathi; Santosh Nagarakatte

This paper presents a randomized scheduler for finding concurrency bugs. Like current stress-testing methods, it repeatedly runs a given test program with supplied inputs. However, it improves on stress testing by finding buggy schedules more effectively and by quantifying the probability of missing concurrency bugs. Key to its design is the characterization of the depth of a concurrency bug as the minimum number of scheduling constraints required to find it. In a single run of a program with n threads and k steps, our scheduler detects a concurrency bug of depth d with probability at least \(1/(n k^{d-1})\). We hypothesize that in practice many concurrency bugs (including well-known types such as ordering errors, atomicity violations, and deadlocks) have small bug depths, and we confirm the efficiency of our schedule randomization by detecting previously unknown and known concurrency bugs in several production-scale concurrent programs.
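To make the guarantee concrete, here is a minimal PCT-style scheduler sketch in Python (an illustrative toy, not the authors' implementation): threads receive random distinct initial priorities, the running thread's priority drops at d-1 randomly chosen change points, and the highest-priority enabled thread always runs next. On a toy depth-1 ordering bug with n = 2 threads and k = 2 steps, the bug should surface in about half of all runs, matching the \(1/(n k^{d-1})\) bound.

```python
import random

def pct_run(thread_steps, k, d, rng):
    """One run of a PCT-style randomized scheduler (illustrative sketch).

    thread_steps: one list of step labels per thread.
    k: bound on the total number of steps; d: target bug depth.
    Returns the interleaving as a list of (thread_id, step) pairs."""
    n = len(thread_steps)
    # Random distinct initial priorities from {d, ..., d + n - 1}.
    prio = dict(enumerate(rng.sample(range(d, d + n), n)))
    # d - 1 uniformly random priority change points among the k steps.
    change_points = set(rng.sample(range(1, k + 1), d - 1))
    pcs = [0] * n  # per-thread program counter
    trace, step_no = [], 0
    while any(pc < len(ts) for pc, ts in zip(pcs, thread_steps)):
        step_no += 1
        # Run the enabled thread with the highest current priority.
        tid = max((i for i in range(n) if pcs[i] < len(thread_steps[i])),
                  key=lambda i: prio[i])
        trace.append((tid, thread_steps[tid][pcs[tid]]))
        pcs[tid] += 1
        if step_no in change_points:
            prio[tid] = -step_no  # drop below every initial priority
    return trace

# Toy depth-1 ordering bug: it fires iff thread 1's "read" runs before
# thread 0's "init". With n = 2, k = 2, d = 1 the bound is 1/2.
rng = random.Random(0)
threads = [["init"], ["read"]]
hits = sum(pct_run(threads, k=2, d=1, rng=rng)[0] == (1, "read")
           for _ in range(10_000))
print(f"bug exposed in {hits / 10_000:.1%} of runs (bound: 50.0%)")
```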


Symposium on the Theory of Computing | 2015

Sum of Squares Lower Bounds from Pairwise Independence

Boaz Barak; Siu On Chan; Pravesh Kothari

We prove that for every ε > 0 and every predicate \(P:\{0,1\}^k \rightarrow \{0,1\}\) that supports a pairwise independent distribution, there exists an instance I of the Max-P constraint satisfaction problem on n variables such that no assignment can satisfy more than a \(|P^{-1}(1)|/2^k + \varepsilon\) fraction of I's constraints, but the degree-\(\Omega(n)\) Sum-of-Squares semidefinite programming hierarchy cannot certify that I is unsatisfiable. Similar results were previously only known for weaker hierarchies.
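As a concrete instance of the theorem's hypothesis, the sketch below (a standard example, not taken from the paper) checks that the uniform distribution over satisfying assignments of the 3-XOR predicate is pairwise independent, so the lower bound applies to Max-3-XOR.

```python
from fractions import Fraction
from itertools import product

# Predicate P(x1, x2, x3) = x1 XOR x2 XOR x3. The uniform distribution
# over P^{-1}(1) is supported on satisfying assignments, so if it is
# pairwise independent, the theorem applies to Max-3-XOR.
support = [x for x in product([0, 1], repeat=3) if x[0] ^ x[1] ^ x[2] == 1]
p = Fraction(1, len(support))  # uniform over the 4 satisfying assignments

# Pairwise independence: every pair of coordinates is uniform on {0,1}^2.
for i in range(3):
    for j in range(i + 1, 3):
        for a, b in product([0, 1], repeat=2):
            mass = sum(p for x in support if (x[i], x[j]) == (a, b))
            assert mass == Fraction(1, 4)
print("uniform over 3-XOR solutions is pairwise independent")
```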


Foundations of Computer Science | 2016

A Nearly Tight Sum-of-Squares Lower Bound for the Planted Clique Problem

Boaz Barak; Samuel B. Hopkins; Jonathan A. Kelner; Pravesh Kothari; Ankur Moitra; Aaron Potechin

We prove that, with high probability over the choice of a random graph G from the Erdős-Rényi distribution G(n, 1/2), the \(n^{O(d)}\)-time degree-d Sum-of-Squares semidefinite programming relaxation for the clique problem will give a value of at least \(n^{1/2 - c(d/\log n)^{1/2}}\) for some constant c > 0. This yields a nearly tight \(n^{1/2 - o(1)}\) bound on the value of this program for any degree d = o(log n). Moreover, we introduce a new framework that we call pseudo-calibration to construct Sum-of-Squares lower bounds. This framework is inspired by taking a computational analogue of Bayesian probability theory. It yields a general recipe for constructing good pseudo-distributions (i.e., dual certificates for the Sum-of-Squares semidefinite program) and sheds further light on the ways in which this hierarchy differs from others.
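For contrast with the relaxation's value of \(n^{1/2-o(1)}\), the first-moment sketch below (a standard calculation, not taken from the paper) locates the true clique number of G(n, 1/2) near 2 log2 n, far smaller.

```python
import math

# First-moment sketch: in G(n, 1/2) the expected number of k-cliques is
# C(n, k) * 2^(-C(k, 2)), which falls below 1 at k ≈ 2*log2(n) (up to
# lower-order terms). The degree-o(log n) SoS relaxation nevertheless
# reports a value of n^(1/2 - o(1)).
n = 10_000
for k in range(2, n):
    if math.comb(n, k) * 2.0 ** (-math.comb(k, 2)) < 1:
        break
print(f"n = {n}: E[#k-cliques] < 1 first at k = {k}; "
      f"2*log2(n) = {2 * math.log2(n):.1f}; sqrt(n) = {math.isqrt(n)}")
```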


International Workshop on Approximation, Randomization, and Combinatorial Optimization: Algorithms and Techniques | 2014

Embedding Hard Learning Problems Into Gaussian Space

Adam R. Klivans; Pravesh Kothari

We give the first representation-independent hardness result for agnostically learning halfspaces with respect to the Gaussian distribution. We reduce from the problem of learning sparse parities with noise with respect to the uniform distribution on the hypercube (sparse LPN), a notoriously hard problem in theoretical computer science, and show that any algorithm for agnostically learning halfspaces requires \(n^{\Omega(\log(1/\epsilon))}\) time under the assumption that k-sparse LPN requires \(n^{\Omega(k)}\) time, ruling out a polynomial-time algorithm for the problem. As far as we are aware, this is the first representation-independent hardness result for supervised learning when the underlying distribution is restricted to be a Gaussian. We also show that the problem of agnostically learning sparse polynomials with respect to the Gaussian distribution in polynomial time is as hard as PAC learning DNFs on the uniform distribution in polynomial time. This complements the surprising result of Andoni et al. [1], who show that sparse polynomials are learnable under random Gaussian noise in polynomial time. Taken together, these results show the inherent difficulty of designing supervised learning algorithms in Euclidean space, even in the presence of strong distributional assumptions. Our results use a novel embedding of random labeled examples from the uniform distribution on the Boolean hypercube into random labeled examples from the Gaussian distribution, which allows us to relate the hardness of learning problems on two different domains and distributions.


Theory and Application of Cryptographic Techniques | 2018

Limits on Low-Degree Pseudorandom Generators (Or: Sum-of-Squares Meets Program Obfuscation)

Boaz Barak; Zvika Brakerski; Ilan Komargodski; Pravesh Kothari

An m-output pseudorandom generator \(\mathcal {G}:(\{\pm 1\}^b)^n \rightarrow \{\pm 1\}^m\) that takes as input n blocks of b bits each is said to be \(\ell \)-block local if every output is a function of at most \(\ell \) blocks. We show that such \(\ell \)-block local pseudorandom generators can have output length at most \(\tilde{O}(2^{\ell b} n^{\lceil \ell /2 \rceil })\), by presenting a polynomial-time algorithm that distinguishes inputs of the form \(\mathcal {G}(x)\) from strings sampled uniformly from \(\{\pm 1\}^m\).


International Workshop on Approximation, Randomization, and Combinatorial Optimization: Algorithms and Techniques | 2012

An Explicit VC-Theorem for Low-Degree Polynomials

Eshan Chattopadhyay; Adam R. Klivans; Pravesh Kothari

Let \(X \subseteq \mathbb{R}^n\) and let \({\cal C}\) be a class of functions mapping \(\mathbb{R}^n \rightarrow \{-1,1\}\). The famous VC-Theorem states that a random subset S of X of size \(O(\frac{d}{\epsilon^{2}} \log \frac{d}{\epsilon})\), where d is the VC-dimension of \({\cal C}\), is (with constant probability) an ε-approximation for \({\cal C}\) with respect to the uniform distribution on X. In this work, we revisit the problem of constructing S explicitly. We show that for any \(X \subseteq \mathbb{R}^n\) and any Boolean function class \({\cal C}\) that is uniformly approximated by degree-k polynomials, an ε-approximation S can be constructed deterministically in time \(\mathrm{poly}(n^k, 1/\epsilon, |X|)\) provided that \(\epsilon =\Omega\left(W \cdot \sqrt{\frac{\log{|X|}}{|X|}}\right)\), where W is the weight of the approximating polynomial. Previous work due to Chazelle and Matoušek suffers a \(d^{O(d)}\) factor in the running time and results in superpolynomial-time algorithms, even in the case where k = O(1).
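To make the definition concrete, the sketch below checks the ε-approximation property on a hypothetical toy instance; note that it uses a random S, which is exactly the step the paper derandomizes.

```python
import random
from itertools import product

def is_eps_approximation(S, X, C, eps):
    """Check the ε-approximation property: every f in the class C takes
    the value 1 on the same fraction of S and of X, up to eps."""
    return all(
        abs(sum(f(x) == 1 for x in S) / len(S)
            - sum(f(x) == 1 for x in X) / len(X)) <= eps
        for f in C
    )

# Toy instance: X = {0,1}^8 and C = the 8 coordinate dictators, a class
# of small VC dimension. The VC-Theorem says a small *random* S works
# with constant probability; the paper constructs S deterministically.
X = list(product([0, 1], repeat=8))
C = [lambda x, i=i: 1 if x[i] == 1 else -1 for i in range(8)]
S = random.Random(1).sample(X, 100)
print(is_eps_approximation(S, X, C, eps=0.15))
```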


Symposium on the Theory of Computing | 2018

Robust moment estimation and improved clustering via sum of squares

Pravesh Kothari; Jacob Steinhardt; David Steurer

We develop efficient algorithms for estimating low-degree moments of unknown distributions in the presence of adversarial outliers and design a new family of convex relaxations for k-means clustering based on the sum-of-squares method. As an immediate corollary, for any γ > 0, we obtain an efficient algorithm for learning the means of a mixture of k arbitrary distributions in \(\mathbb{R}^d\) in time \(d^{O(1/\gamma)}\) so long as the means have separation \(\Omega(k^{\gamma})\). This in particular yields an algorithm for learning Gaussian mixtures with separation \(\Omega(k^{\gamma})\), thus partially resolving an open problem of Regev and Vijayaraghavan (2017). The guarantees of our robust estimation algorithms improve in many cases significantly over the best previous ones, obtained in recent works. We also show that the guarantees of our algorithms match information-theoretic lower bounds for the class of distributions we consider. These improved guarantees allow us to give improved algorithms for independent component analysis and learning mixtures of Gaussians in the presence of outliers. We also show a sharp upper bound on the sum-of-squares norms for moment tensors of any distribution that satisfies the Poincaré inequality. The Poincaré inequality is a central inequality in probability theory, and a large class of distributions satisfy it, including Gaussians, product distributions, strongly log-concave distributions, and any sum or uniformly continuous transformation of such distributions. As a consequence, all of the above algorithmic improvements hold for distributions satisfying the Poincaré inequality.
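For intuition about the outlier model, where an ε fraction of samples is adversarially corrupted, the one-dimensional sketch below contrasts the empirical mean with the median, a much weaker robust estimator than the paper's SoS-based ones; it illustrates only the corruption model, not the paper's algorithm.

```python
import random
import statistics

# An eps fraction of samples is adversarially corrupted. The empirical
# mean is dragged arbitrarily far; the median stays near the true mean.
rng = random.Random(0)
eps, n, true_mean = 0.1, 2000, 5.0
clean = [rng.gauss(true_mean, 1.0) for _ in range(int((1 - eps) * n))]
outliers = [1e6] * (n - len(clean))  # adversary plants far-away points
sample = clean + outliers
print(f"mean:   {statistics.fmean(sample):>12.2f}  (ruined by outliers)")
print(f"median: {statistics.median(sample):>12.2f}  (near true mean {true_mean})")
```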


Symposium on the Theory of Computing | 2015

Almost Optimal Pseudorandom Generators for Spherical Caps: Extended Abstract

Pravesh Kothari; Raghu Meka

Halfspaces, or linear threshold functions, are widely studied in complexity theory, learning theory, and algorithm design. In this work we study the natural problem of constructing pseudorandom generators (PRGs) for halfspaces over the sphere, a.k.a. spherical caps, which besides being interesting and basic geometric objects also arise frequently in the analysis of various randomized algorithms (e.g., randomized rounding). We give an explicit PRG which fools spherical caps to within error ε and has an almost optimal seed length of O(log n + log(1/ε) ⋅ log log(1/ε)). For an inverse-polynomially growing error ε, our generator has a seed length optimal up to a factor of O(log log n). The most efficient PRG previously known (due to Kane, 2012) requires a seed length of \(\Omega(\log^{3/2} n)\) in this setting. We also obtain similar constructions to fool halfspaces with respect to the Gaussian distribution. Our construction and analysis are significantly different from previous works on PRGs for halfspaces and build on the iterative dimension reduction ideas of Kane et al. (2011) and Celis et al. (2013), the classical moment problem from probability theory, and explicit constructions of approximate orthogonal designs based on the seminal work of Bourgain and Gamburd (2011) on expansion in Lie groups.


Symposium on the Theory of Computing | 2018

Sum-of-squares meets Nash: lower bounds for finding any equilibrium

Pravesh Kothari; Ruta Mehta

Computing a Nash equilibrium (NE) in a two-player game is a central question in algorithmic game theory. The main motivation of this work is to understand the power of the sum-of-squares method in computing equilibria, both exact and approximate. Previous works in this context have focused on hardness of approximating the "best" equilibria with respect to some natural quality measure on equilibria, such as social welfare. Such results, however, do not directly relate to the complexity of the problem of finding any equilibrium. In this work, we propose a framework of roundings for the sum-of-squares algorithm (and convex relaxations in general) applicable to finding approximate/exact equilibria in two-player bimatrix games. Specifically, we define the notion of oblivious roundings with verification oracle (OV). These are algorithms that can access a solution to the degree-d SoS relaxation to construct a list of candidate (partial) solutions and invoke a verification oracle to check if a candidate in the list gives an (exact or approximate) equilibrium. This framework captures most known approximation algorithms in combinatorial optimization, including the celebrated semidefinite programming based algorithms for Max-Cut and constraint satisfaction problems, and the recent works on SoS relaxations for Unique Games/Small-Set Expansion, Best Separable State, and many problems in unsupervised machine learning. Our main results are strong unconditional lower bounds in this framework. Specifically, we show that for ε = Θ(1/poly(n)), there is no algorithm that uses an o(n)-degree SoS relaxation to construct a \(2^{o(n)}\)-size list of candidates and obtain an ε-approximate NE. For some constant ε, we show a similar result for degree-o(log n) SoS relaxations and list size \(n^{o(\log n)}\). Our results can be seen as an unconditional confirmation, in our restricted algorithmic framework, of the recent Exponential Time Hypothesis for PPAD. Our proof strategy involves constructing a family of games that all share a common sum-of-squares solution, but every (approximate) equilibrium of any game is far from every equilibrium of any other game in the family (in either player's strategy). Along the way, we strengthen the classical unconditional lower bound against enumerative algorithms for finding approximate equilibria due to Daskalakis and Papadimitriou and the classical hardness of computing equilibria due to Gilboa and Zemel.


Conference on Innovations in Theoretical Computer Science | 2018

Improper Learning by Refuting

Pravesh Kothari; Roi Livni

The sample complexity of learning a Boolean-valued function class is precisely characterized by its Rademacher complexity. This has little bearing, however, on the sample complexity of efficient agnostic learning. We introduce refutation complexity, a natural computational analog of the Rademacher complexity of a Boolean concept class, and show that it exactly characterizes the sample complexity of efficient agnostic learning. Informally, the refutation complexity of a class C is the minimum number of example-label pairs required to efficiently distinguish between the case that the labels correlate with the evaluation of some member of C (structure) and the case where the labels are i.i.d. Rademacher random variables (noise). The easy direction of this relationship was implicitly used in the recent framework for improper PAC learning lower bounds of Daniely and co-authors via connections to the hardness of refuting random constraint satisfaction problems. Our work can be seen as making the relationship between agnostic learning and refutation implicit in their work into an explicit equivalence. In recent, independent work, Salil Vadhan discovered a similar relationship between refutation and PAC learning in the realizable (i.e., noiseless) case.
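The structure-versus-noise distinguishing task is simple to state; the sketch below runs it for a hypothetical finite class, where brute force is efficient (the paper's interest is in general classes, where efficiency is the whole question).

```python
import random

def refute(examples, C, theta):
    """Toy refutation test: report 'structure' if some f in the finite
    class C correlates with the labels by at least theta, else 'noise'."""
    best = max(abs(sum(y * f(x) for x, y in examples)) / len(examples)
               for f in C)
    return "structure" if best >= theta else "noise"

# Hypothetical class: the 10 coordinate dictators on {-1,1}^10.
rng = random.Random(0)
C = [lambda x, i=i: x[i] for i in range(10)]
xs = [[rng.choice([-1, 1]) for _ in range(10)] for _ in range(500)]
structured = [(x, x[3]) for x in xs]            # labels = coordinate 3
noisy = [(x, rng.choice([-1, 1])) for x in xs]  # i.i.d. Rademacher labels
print(refute(structured, C, theta=0.5), refute(noisy, C, theta=0.5))
```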

Collaboration


Dive into Pravesh Kothari's collaborations.

Top Co-Authors

Adam R. Klivans
University of Texas at Austin

Aaron Potechin
Massachusetts Institute of Technology

Raghu Meka
University of Texas at Austin

Tselil Schramm
University of California