
Publications


Featured research published by Adam R. Klivans.


SIAM Journal on Computing | 2002

Graph Nonisomorphism Has Subexponential Size Proofs Unless the Polynomial-Time Hierarchy Collapses

Adam R. Klivans; Dieter van Melkebeek

Traditional hardness versus randomness results focus on time-efficient randomized decision procedures. We generalize these trade-offs to a much wider class of randomized processes. We work out various applications, most notably to derandomizing Arthur-Merlin games. We show that every language with a bounded round Arthur-Merlin game has subexponential size membership proofs for infinitely many input lengths unless exponential time coincides with the third level of the polynomial-time hierarchy (and hence the polynomial-time hierarchy collapses). Since the graph nonisomorphism problem has a bounded round Arthur-Merlin game, this provides the first strong evidence that graph nonisomorphism has subexponential size proofs. We also establish hardness versus randomness trade-offs for space bounded computation.


Journal of Computer and System Sciences | 2004

Learning DNF in time 2^Õ(n^{1/3})

Adam R. Klivans; Rocco A. Servedio

Using techniques from learning theory, we show that any s-term DNF over n variables can be computed by a polynomial threshold function of degree O(n^{1/3} log s). This upper bound matches, up to a logarithmic factor, the longstanding lower bound given by Minsky and Papert in their 1968 book Perceptrons. As a consequence of this upper bound we obtain the fastest known algorithm for learning polynomial size DNF, one of the central problems in computational learning theory.
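The result hinges on what it means for a polynomial threshold function (PTF) to compute a Boolean function: the polynomial's sign must agree with the function on every input. A minimal sketch of that definition, using a hypothetical 2-term DNF over 0/1 inputs (this illustrates the definition only, not the Chebyshev-polynomial construction that achieves the O(n^{1/3} log s) degree bound):

```python
from itertools import product

def dnf(x):
    # Hypothetical 2-term DNF over 4 variables: (x0 AND x1) OR (x2 AND x3).
    return bool((x[0] and x[1]) or (x[2] and x[3]))

def ptf_sign(x):
    # Degree-2 polynomial threshold function: over 0/1 inputs, the product
    # x0*x1 is the 0/1 indicator of the term (x0 AND x1), so the sum of the
    # term indicators exceeds 0.5 exactly when some term is satisfied.
    p = x[0] * x[1] + x[2] * x[3] - 0.5
    return p > 0

# Verify the PTF sign-represents the DNF on all 2^4 inputs.
assert all(dnf(x) == ptf_sign(x) for x in product([0, 1], repeat=4))
print("degree-2 PTF computes the 2-term DNF on all 16 inputs")
```

Here the PTF degree equals the term length; the paper's contribution is showing that degree O(n^{1/3} log s) always suffices, even for terms of length n.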


Foundations of Computer Science | 2002

Learning intersections and thresholds of halfspaces

Adam R. Klivans; Ryan O'Donnell; Rocco A. Servedio

We give the first polynomial time algorithm to learn any function of a constant number of halfspaces under the uniform distribution to within any constant error parameter. We also give the first quasipolynomial time algorithm for learning any function of a polylog number of polynomial-weight halfspaces under any distribution. As special cases of these results we obtain algorithms for learning intersections and thresholds of halfspaces. Our uniform distribution learning algorithms involve a novel non-geometric approach to learning halfspaces; we use Fourier techniques together with a careful analysis of the noise sensitivity of functions of halfspaces. Our algorithms for learning under any distribution use techniques from real approximation theory to construct low degree polynomial threshold functions.
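The noise sensitivity mentioned above is NS_δ(f) = Pr[f(x) ≠ f(y)], where x is uniform over {-1,1}^n and y is x with each bit flipped independently with probability δ; halfspaces have noise sensitivity O(√δ). A minimal Monte Carlo estimator of this quantity, with majority as a stand-in halfspace (parameters here are illustrative, not from the paper):

```python
import random

def halfspace(x):
    # Hypothetical halfspace: majority of 25 bits in {-1, +1}.
    return 1 if sum(x) > 0 else -1

def noise_sensitivity(f, n, delta, trials=20000, seed=0):
    """Monte Carlo estimate of NS_delta(f) = Pr[f(x) != f(y)], where x is
    uniform over {-1,1}^n and y flips each bit of x independently w.p. delta."""
    rng = random.Random(seed)
    disagree = 0
    for _ in range(trials):
        x = [rng.choice((-1, 1)) for _ in range(n)]
        y = [-xi if rng.random() < delta else xi for xi in x]
        disagree += f(x) != f(y)
    return disagree / trials

est = noise_sensitivity(halfspace, n=25, delta=0.01)
print(f"estimated NS_0.01(MAJ_25) ~ {est:.3f}")  # small, on the order of sqrt(delta)
```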


Symposium on the Theory of Computing | 2001

Randomness efficient identity testing of multivariate polynomials

Adam R. Klivans; Daniel A. Spielman

We present a randomized polynomial time algorithm to determine if a multivariate polynomial is zero using O(log mnδ) random bits, where n is the number of variables, m is the number of monomials, and δ is the total degree of the unknown polynomial. All other known randomized identity tests (see for example [7, 12, 1]) use Ω(n) random bits even when the polynomial is sparse and has low total degree. In such cases our algorithm has an exponential savings in randomness. In addition, we obtain the first polynomial time algorithm for interpolating sparse polynomials over finite fields of large characteristic. Our approach uses an error correcting code combined with the randomness optimal isolation lemma of [8] and yields a generalized isolation lemma which works with respect to a set of linear forms over a base set.
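For contrast, the standard Schwartz-Zippel identity test that this paper improves upon spends fresh random bits on every variable. A minimal sketch of that baseline test over the integers (the black-box polynomials below are hypothetical examples):

```python
import random

def is_probably_zero(poly, n_vars, total_degree, trials=20, seed=0):
    """Schwartz-Zippel: if poly is nonzero of total degree d, a random point
    from S^n is a root with probability at most d/|S|. This is the standard
    test; the randomness-efficient result above reduces the bits it consumes."""
    rng = random.Random(seed)
    S = range(10 * total_degree + 1)  # error probability <= d/|S| < 0.1 per trial
    for _ in range(trials):
        point = [rng.choice(S) for _ in range(n_vars)]
        if poly(point) != 0:
            return False          # witnessed nonzero: definitely not identically zero
    return True                   # identically zero with high probability

# Hypothetical black-box polynomial: (x+y)^2 - x^2 - 2xy - y^2, identically zero.
zero_poly = lambda v: (v[0] + v[1])**2 - v[0]**2 - 2*v[0]*v[1] - v[1]**2
assert is_probably_zero(zero_poly, n_vars=2, total_degree=2)

nonzero = lambda v: v[0] * v[1] - 1   # hypothetical nonzero polynomial
assert not is_probably_zero(nonzero, n_vars=2, total_degree=2)
```

Each trial draws one value per variable, which is exactly the Ω(n) random-bit cost the abstract refers to.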


Symposium on the Theory of Computing | 2001

Learning DNF in time 2^Õ(n^{1/3})

Adam R. Klivans; Rocco A. Servedio

Using techniques from learning theory, we show that any s-term DNF over n variables can be computed by a polynomial threshold function of degree O(n^{1/3} log s). This upper bound matches, up to a logarithmic factor, the longstanding lower bound given by Minsky and Papert in their 1968 book Perceptrons. As a consequence of this upper bound we obtain the fastest known algorithm for learning polynomial size DNF, one of the central problems in computational learning theory.


Foundations of Computer Science | 2006

Cryptographic Hardness for Learning Intersections of Halfspaces

Adam R. Klivans; Alexander A. Sherstov

We give the first representation-independent hardness results for PAC learning intersections of halfspaces, a central concept class in computational learning theory. Our hardness results are derived from two public-key cryptosystems due to Regev, which are based on the worst-case hardness of well-studied lattice problems. Specifically, we prove that a polynomial-time algorithm for PAC learning intersections of n^ε halfspaces (for a constant ε > 0) in n dimensions would yield a polynomial-time solution to Õ(n^{1.5})-uSVP (unique shortest vector problem). We also prove that PAC learning intersections of n^ε low-weight halfspaces would yield a polynomial-time quantum solution to Õ(n^{1.5})-SVP and Õ(n^{1.5})-SIVP (shortest vector problem and shortest independent vector problem, respectively). By making stronger assumptions about the hardness of uSVP, SVP, and SIVP, we show that there is no polynomial-time algorithm for learning intersections of log^c n halfspaces in n dimensions, for a sufficiently large constant c > 0. Our approach also yields the first representation-independent hardness results for learning polynomial-size depth-2 neural networks and polynomial-size depth-3 arithmetic circuits.


Symposium on the Theory of Computing | 1999

Graph nonisomorphism has subexponential size proofs unless the polynomial-time hierarchy collapses

Adam R. Klivans; Dieter van Melkebeek

We establish hardness versus randomness trade-offs for a broad class of randomized procedures. In particular, we create efficient nondeterministic simulations of bounded round Arthur-Merlin games using a language in exponential time that cannot be decided by polynomial size oracle circuits with access to satisfiability. We show that every language with a bounded round Arthur-Merlin game has subexponential size membership proofs for infinitely many input lengths unless the polynomial-time hierarchy collapses. This provides the first strong evidence that graph nonisomorphism has subexponential size proofs.

We set up a general framework for derandomization which encompasses more than the traditional model of randomized computation. For a randomized procedure to fit within this framework, we only require that for any fixed input the complexity of checking whether the procedure succeeds on a given random bit sequence is not too high. We then apply our derandomization technique to four fundamental complexity-theoretic constructions:

- the Valiant-Vazirani random hashing technique, which prunes the number of satisfying assignments of a Boolean formula to one, and related procedures like computing satisfying assignments to Boolean formulas non-adaptively given access to an oracle for satisfiability;
- the algorithm of Bshouty et al. for learning Boolean circuits;
- constructing matrices with high rigidity;
- constructing polynomial-size universal traversal sequences.

We also show that if linear space requires exponential size circuits, then space bounded randomized computations can be simulated deterministically with only a constant factor overhead in space.
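The Valiant-Vazirani hashing step mentioned above can be checked empirically: intersecting a solution set S with k random affine constraints over GF(2) leaves exactly one survivor with probability at least 1/8 when 2^{k-2} ≤ |S| ≤ 2^{k-1}. A minimal sketch, with a hypothetical explicit solution set standing in for the satisfying assignments of a formula:

```python
import random

def isolate(solutions, n, k, rng):
    """Intersect a set of n-bit solutions with k random affine constraints
    over GF(2) and return the survivors. Valiant-Vazirani: if
    2^(k-2) <= |S| <= 2^(k-1), exactly one solution survives w.p. >= 1/8."""
    constraints = [([rng.randint(0, 1) for _ in range(n)], rng.randint(0, 1))
                   for _ in range(k)]
    def satisfies(x):
        return all(sum(a * b for a, b in zip(row, x)) % 2 == c
                   for row, c in constraints)
    return [x for x in solutions if satisfies(x)]

rng = random.Random(1)
n = 8
# Hypothetical solution set of size 8, so k = 4 puts |S| in the right range.
S = [[int(b) for b in format(3 * i + 1, f"0{n}b")] for i in range(8)]
hits = sum(len(isolate(S, n, k=4, rng=rng)) == 1 for _ in range(1000))
print(f"unique survivor in {hits}/1000 trials")  # empirically well above the 1/8 bound
```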


Foundations of Computer Science | 1999

Boosting and hard-core sets

Adam R. Klivans; Rocco A. Servedio

This paper connects two fundamental ideas from theoretical computer science: hard-core set construction, a type of hardness amplification from computational complexity, and boosting, a technique from computational learning theory. Using this connection we give fruitful applications of complexity-theoretic techniques to learning theory and vice versa. We show that the hard-core set construction of Impagliazzo (1995), which establishes the existence of distributions under which Boolean functions are highly inapproximable, may be viewed as a boosting algorithm. Using alternate boosting methods we give an improved bound for hard-core set construction which matches known lower bounds from boosting and thus is optimal within this class of techniques. We then show how to apply Impagliazzo's techniques to give a new version of Jackson's celebrated Harmonic Sieve algorithm for learning DNF formulae under the uniform distribution using membership queries. Our new version has a significant asymptotic improvement in running time. Critical to our arguments is a careful analysis of the distributions which are employed in both boosting and hard-core set constructions.
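The boosting side of the connection can be made concrete: a booster repeatedly reweights the distribution toward the examples the current hypotheses misclassify, which is the same distribution-shifting step a hard-core set argument runs in reverse. A minimal AdaBoost-style loop (a standard booster, used here only as an illustration) on a hypothetical three-point dataset that no single threshold classifier can fit:

```python
import math

# Toy 1-D dataset with labels in {-1, +1}; no single threshold fits it.
X = [0.1, 0.5, 0.9]
y = [1, -1, 1]

def best_stump(w):
    """Weak learner: threshold classifier minimizing weighted error."""
    best = None
    for t in X:
        for sign in (1, -1):
            h = lambda x, t=t, s=sign: s if x < t else -s
            err = sum(wi for wi, xi, yi in zip(w, X, y) if h(xi) != yi)
            if best is None or err < best[0]:
                best = (err, h)
    return best

w = [1 / len(X)] * len(X)
ensemble = []
for _ in range(10):
    err, h = best_stump(w)
    err = max(err, 1e-9)
    alpha = 0.5 * math.log((1 - err) / err)   # hypothesis weight
    ensemble.append((alpha, h))
    # Reweight: increase the weight of misclassified examples -- the points a
    # hard-core-set argument would call "hard" for the current hypotheses.
    w = [wi * math.exp(-alpha * yi * h(xi)) for wi, xi, yi in zip(w, X, y)]
    Z = sum(w)
    w = [wi / Z for wi in w]

# Weighted majority vote of the weak hypotheses fits the data perfectly.
H = lambda x: 1 if sum(a * h(x) for a, h in ensemble) > 0 else -1
assert all(H(xi) == yi for xi, yi in zip(X, y))
```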


Symposium on the Theory of Computing | 2002

Learnability beyond AC^0

Jeffrey C. Jackson; Adam R. Klivans; Rocco A. Servedio

We give an algorithm to learn constant-depth polynomial-size circuits augmented with majority gates under the uniform distribution using random examples only. For circuits which contain a polylogarithmic number of majority gates the algorithm runs in quasipolynomial time. This is the first algorithm for learning a more expressive circuit class than the class AC^0 of constant-depth polynomial-size circuits, a class which was shown to be learnable in quasipolynomial time by Linial, Mansour and Nisan in 1989. Our approach combines an extension of some of the Fourier analysis from Linial et al. with hypothesis boosting. We also show that under a standard cryptographic assumption our algorithm is essentially optimal with respect to both running time and expressiveness (number of majority gates) of the circuits being learned.
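The Fourier-analytic part of the approach starts from estimating the low-degree Fourier coefficients f̂(S) = E[f(x)·χ_S(x)] from uniform random examples, as in Linial-Mansour-Nisan. A minimal estimator with a hypothetical 3-bit majority target (illustrating the estimation step only, not the boosting extension):

```python
import random
from itertools import combinations

def chi(S, x):
    # Fourier character chi_S(x) = product of x_i for i in S, over {-1,1}^n inputs.
    p = 1
    for i in S:
        p *= x[i]
    return p

def estimate_fourier(f, n, degree, samples=20000, seed=0):
    """Estimate f_hat(S) = E[f(x) * chi_S(x)] for all |S| <= degree from
    uniform random examples -- the heart of low-degree Fourier learning."""
    rng = random.Random(seed)
    data = [[rng.choice((-1, 1)) for _ in range(n)] for _ in range(samples)]
    coeffs = {}
    for d in range(degree + 1):
        for S in combinations(range(n), d):
            coeffs[S] = sum(f(x) * chi(S, x) for x in data) / samples
    return coeffs

maj3 = lambda x: 1 if sum(x) > 0 else -1  # hypothetical target function
c = estimate_fourier(maj3, n=3, degree=1)
# True expansion: MAJ3 = 0.5*(x0 + x1 + x2) - 0.5*x0*x1*x2, so each degree-1
# coefficient is 0.5 and the empty-set coefficient is 0.
print({S: round(v, 2) for S, v in c.items()})
```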


Symposium on the Theory of Computing | 2008

List-decoding Reed-Muller codes over small fields

Parikshit Gopalan; Adam R. Klivans; David Zuckerman

We present the first local list-decoding algorithm for the rth order Reed-Muller code RM(r,m) over F_2 for r ≥ 2. Given an oracle for a received word R: F_2^m → F_2, our randomized local list-decoding algorithm produces a list containing all degree-r polynomials within relative distance (2^{-r} − ε) from R for any ε > 0, in time poly(m^r, ε^{-r}). The list size could be exponential in m at radius 2^{-r}, so our bound is optimal in the local setting. Since RM(r,m) has relative distance 2^{-r}, our algorithm beats the Johnson bound for r ≥ 2. In the setting where we are allowed running time polynomial in the block length, we show that list-decoding is possible up to even larger radii, beyond the minimum distance. We give a deterministic list-decoder that works at error rate below J(2^{1-r}), where J(δ) denotes the Johnson radius for minimum distance δ. This shows that RM(2,m) codes are list-decodable up to radius η for any constant η < 1/2 in time polynomial in the block length. Over small fields F_q, we present list-decoding algorithms in both the global and local settings that work up to the list-decoding radius. We conjecture that the list-decoding radius approaches the minimum distance (as it does over F_2), and prove this holds true when the degree is divisible by q − 1.
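The relative distance 2^{-r} quoted above can be verified directly for small parameters by enumerating the code. A brute-force sketch for RM(2,4) over F_2 (parameters chosen purely for illustration):

```python
from itertools import combinations, product

def rm_codewords(r, m):
    """All codewords of the Reed-Muller code RM(r, m) over F_2: evaluations
    of m-variate multilinear polynomials of degree <= r at all 2^m points."""
    points = list(product([0, 1], repeat=m))
    monomials = [S for d in range(r + 1) for S in combinations(range(m), d)]
    words = set()
    for coeffs in product([0, 1], repeat=len(monomials)):
        word = tuple(
            sum(c * all(x[i] for i in S) for c, S in zip(coeffs, monomials)) % 2
            for x in points)
        words.add(word)
    return words

r, m = 2, 4
words = rm_codewords(r, m)
min_weight = min(sum(w) for w in words if any(w))
assert min_weight / 2**m == 2**-r  # relative distance 2^{-r}, as the abstract states
print(f"RM({r},{m}): {len(words)} codewords, relative distance {min_weight / 2**m}")
```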

Collaboration

Top co-authors of Adam R. Klivans:

Pravesh Kothari (University of Texas at Austin)

Surbhi Goel (University of Texas at Austin)

Prahladh Harsha (Tata Institute of Fundamental Research)