
Publication


Featured research published by Jeffrey C. Jackson.


Journal of Computer and System Sciences | 1997

An Efficient Membership-Query Algorithm for Learning DNF with Respect to the Uniform Distribution

Jeffrey C. Jackson

We present a membership-query algorithm for efficiently learning DNF with respect to the uniform distribution. In fact, the algorithm properly learns, with respect to the uniform distribution, the class TOP of Boolean functions expressed as a majority vote over parity functions. We also describe extensions of this algorithm for learning DNF over certain nonuniform distributions and for learning a class of geometric concepts that generalizes DNF. Furthermore, we show that DNF is weakly learnable with respect to uniform from noisy examples. Our strong learning algorithm utilizes one of Freund's boosting techniques and relies on the fact that boosting does not require a completely distribution-independent weak learner. The boosted weak learner is a nonuniform extension of a parity-finding algorithm discovered by Goldreich and Levin.
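Since the class TOP (a majority vote over parity functions) is central to the result, a small Python sketch may help fix ideas. It simply evaluates a hypothetical TOP hypothesis; the parity sets and input below are made up for illustration and are not the paper's learner or its output.

# Minimal sketch: evaluating a TOP (majority-of-parities) hypothesis.
# The parity sets and input are illustrative, not taken from the paper.

def parity(x, s):
    """Parity of the bits of x indexed by set s, as +1 (even) / -1 (odd)."""
    return 1 - 2 * (sum(x[i] for i in s) % 2)

def top_hypothesis(x, parity_sets):
    """Majority vote over the +/-1 parities chi_S(x), one per set S."""
    vote = sum(parity(x, s) for s in parity_sets)
    return 1 if vote >= 0 else 0

x = [1, 0, 1, 1, 0]                       # example 5-bit input
sets = [{0, 2}, {1, 3, 4}, {0, 1, 2, 3}]  # hypothetical parity sets
print(top_hypothesis(x, sets))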


Symposium on the Theory of Computing | 1994

Weakly learning DNF and characterizing statistical query learning using Fourier analysis

Avrim Blum; Merrick L. Furst; Jeffrey C. Jackson; Michael J. Kearns; Yishay Mansour; Steven Rudich

We present new results, both positive and negative, on the well-studied problem of learning disjunctive normal form (DNF) expressions. We first prove that an algorithm due to Kushilevitz and Mansour [16] can be used to weakly learn DNF using membership queries in polynomial time, with respect to the uniform distribution on the inputs. This is the first positive result for learning unrestricted DNF expressions in polynomial time in any nontrivial formal model of learning. It provides a sharp contrast with the results of Kharitonov [15], who proved that AC0 is not efficiently learnable in the same model (given certain plausible cryptographic assumptions). We also present efficient learning algorithms in various models for the read-k and SAT-k subclasses of DNF. For our negative results, we turn our attention to the recently introduced statistical query model of learning [11]. This model is a restricted version of the popular Probably Approximately Correct (PAC) model [23], and practically every class known to be efficiently learnable in the PAC model is in fact learnable in the statistical query model [11]. Here we give a general characterization of the complexity of statistical query learning in terms of the number of uncorrelated functions in the concept class. This is a distribution-dependent quantity yielding upper and lower bounds on the number of statistical queries required for learning on any input distribution. As a corollary, we obtain that DNF expressions and decision trees are not even weakly learnable with respect to the uniform input distribution in polynomial time in the statistical query model. This result is information-theoretic and therefore does not rely on any unproven assumptions. It demonstrates that no simple modification of the existing algorithms in the computational learning theory literature for learning various restricted forms of DNF and decision trees from passive random examples (and also several algorithms proposed in the experimental machine learning communities, such as the ID3 algorithm for decision trees [22] and its variants) will solve the general problem. The unifying tool for all of our results is the Fourier analysis of a finite class of Boolean functions on the hypercube.
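One standard way to see the DNF corollary (a sketch in the paper's framework, not text from the paper): under the uniform distribution the parities $\chi_S(x) = (-1)^{\sum_{i \in S} x_i}$ are pairwise uncorrelated,

$$\mathbb{E}_{x \sim U}\big[\chi_S(x)\,\chi_T(x)\big] = \begin{cases} 1 & S = T, \\ 0 & S \neq T, \end{cases}$$

and a parity on $\log n$ variables is computable by a DNF with $2^{\log n - 1} = n/2$ terms. There are $\binom{n}{\log n} = n^{\Theta(\log n)}$ such parities, so polynomial-size DNF contains superpolynomially many pairwise uncorrelated functions, and the characterization then forces a superpolynomial number of statistical queries.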


Foundations of Computer Science | 1994

An efficient membership-query algorithm for learning DNF with respect to the uniform distribution

Jeffrey C. Jackson

We present a membership-query algorithm for efficiently learning DNF with respect to the uniform distribution. In fact, the algorithm properly learns the more general class of functions that are computable as a majority of polynomially-many parity functions. We also describe extensions of this algorithm for learning DNF over certain nonuniform distributions and from noisy examples as well as for learning a class of geometric concepts that generalizes DNF. The algorithm utilizes one of Freund's boosting techniques and relies on the fact that boosting does not require a completely distribution-independent weak learner. The boosted weak learner is a nonuniform extension of a Fourier-based algorithm due to Kushilevitz and Mansour (1991).
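The weak learner's basic primitive is estimating the correlation of the target with a parity from uniform random examples; under the uniform distribution that correlation is exactly the Fourier coefficient $\hat{f}(S)$. A minimal Python sketch of the empirical estimate follows (illustrative only: the target and sample size are made up, and the actual algorithm locates heavy coefficients by a Goldreich-Levin-style search rather than testing one fixed $S$):

import random

def parity(x, s):
    """chi_S(x) as a +/-1 value."""
    return 1 - 2 * (sum(x[i] for i in s) % 2)

def estimate_fourier_coefficient(f, s, n, m=10000):
    """Empirical estimate of E[f(x) * chi_s(x)] over uniform x in {0,1}^n."""
    total = 0
    for _ in range(m):
        x = [random.randint(0, 1) for _ in range(n)]
        total += f(x) * parity(x, s)
    return total / m

# Illustrative target: f = chi_{0,2} itself, so the estimate should be near 1.
f = lambda x: parity(x, {0, 2})
print(estimate_fourier_coefficient(f, {0, 2}, n=5))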


Journal of Machine Learning Research | 2004

Preference Elicitation and Query Learning

Avrim Blum; Jeffrey C. Jackson; Tuomas Sandholm; Martin Zinkevich

In this paper we initiate an exploration of relationships between preference elicitation, a learning-style problem that arises in combinatorial auctions, and the problem of learning via queries studied in computational learning theory. Preference elicitation is the process of asking questions about the preferences of bidders so as to best divide some set of goods. As a learning problem, it can be thought of as a setting in which there are multiple target concepts that can each be queried separately, but where the goal is not so much to learn each concept as it is to produce an optimal example. In this work, we prove a number of similarities and differences between preference elicitation and query learning, giving both separation results and proving some connections between these problems.
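As a toy illustration of elicitation with value queries (a hedged sketch; the bidders, valuations, and brute-force search are hypothetical, not the paper's protocol), an elicitor can query each bidder's value for bundles of goods and search for the split maximizing total reported value:

from itertools import chain, combinations

goods = ["a", "b", "c"]

def bundles(items):
    """All subsets of the goods, as tuples."""
    return list(chain.from_iterable(
        combinations(items, r) for r in range(len(items) + 1)))

# Hypothetical bidders, each answering value queries (bundle -> value).
bidder1 = lambda b: len(b) ** 2           # values larger bundles superadditively
bidder2 = lambda b: 3 if "a" in b else 0  # only wants good "a"

# Exhaustively try every split, querying the bidders' values for each side.
best = max(
    ((b1, tuple(g for g in goods if g not in b1)) for b1 in bundles(goods)),
    key=lambda split: bidder1(split[0]) + bidder2(split[1]),
)
print(best)  # the allocation maximizing the sum of reported values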


Symposium on the Theory of Computing | 2002

Learnability beyond AC0

Jeffrey C. Jackson; Adam R. Klivans; Rocco A. Servedio

We give an algorithm to learn constant-depth polynomial-size circuits augmented with majority gates under the uniform distribution using random examples only. For circuits which contain a polylogarithmic number of majority gates the algorithm runs in quasipolynomial time. This is the first algorithm for learning a more expressive circuit class than the class AC0 of constant-depth polynomial-size circuits, a class which was shown to be learnable in quasipolynomial time by Linial, Mansour and Nisan in 1989. Our approach combines an extension of some of the Fourier analysis from Linial et al. with hypothesis boosting. We also show that under a standard cryptographic assumption our algorithm is essentially optimal with respect to both running time and expressiveness (number of majority gates) of the circuits being learned.
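The quasipolynomial bound has its roots in the Linial-Mansour-Nisan Fourier concentration theorem, stated roughly here as background (not as this paper's theorem): a size-$M$, depth-$d$ circuit $f : \{0,1\}^n \to \{-1,1\}$ has almost all of its Fourier weight at low degree,

$$\sum_{|S| > t} \hat{f}(S)^2 \le \epsilon \quad \text{for } t = O\big(\log(M/\epsilon)\big)^{d},$$

so estimating every coefficient of degree at most $t$ from random examples and predicting with the sign of the resulting polynomial learns $f$ in time $n^{O(t)}$.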


Conference on Learning Theory | 1991

Improved learning of AC0 functions

Merrick L. Furst; Jeffrey C. Jackson; Sean W. Smith

Two extensions of the Linial, Mansour, Nisan AC0 learning algorithm are presented. The LMN method works when input examples are drawn uniformly. The new algorithms improve on theirs by performing well when given inputs drawn from unknown, mutually independent distributions. A variant of one of the algorithms is conjectured to work in an even broader setting.
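The step from the uniform to a mutually independent (product) distribution amounts to replacing the parity basis with the standard orthonormal basis for a product measure; writing $\mu_i = \Pr[x_i = 1]$, the basis functions are

$$\phi_i(x_i) = \frac{x_i - \mu_i}{\sqrt{\mu_i(1 - \mu_i)}}, \qquad \phi_S(x) = \prod_{i \in S} \phi_i(x_i),$$

which satisfy $\mathbb{E}_D[\phi_S \phi_T] = 1$ if $S = T$ and $0$ otherwise, and reduce to the $\pm 1$ parities when every $\mu_i = 1/2$ (the exact normalization used in the paper may differ).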


Conference on Learning Theory | 1992

A computational model of teaching

Jeffrey C. Jackson; Andrew Tomkins

Goldman and Kearns [GK91] recently introduced a notion of the teaching dimension of a concept class. The teaching dimension is intended to capture the combinatorial difficulty of teaching a concept class. We present a computational analog which allows us to make statements about bounded-complexity teachers and learners, and we extend the model by incorporating trusted information. Under this extended model, we modify algorithms for learning several expressive classes in the exact identification model of Angluin [Ang88]. We study the relationships between variants of these models, and also touch on a relationship with distribution-free learning.
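For reference, the Goldman-Kearns notion that the paper builds a computational analog of: a teaching sequence for a concept $c \in C$ is a set of labeled examples consistent with $c$ and with no other concept in $C$, and the teaching dimension is

$$\mathrm{TD}(C) = \max_{c \in C} \; \min\big\{\, |T| : T \text{ is a teaching sequence for } c \,\big\}.$$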


Conference on Learning Theory | 1999

More efficient PAC-learning of DNF with membership queries under the uniform distribution

Nader H. Bshouty; Jeffrey C. Jackson; Christino Tamon

An efficient algorithm exists for learning disjunctive normal form (DNF) expressions in the uniform-distribution PAC learning model with membership queries [J97], but in practice the algorithm can only be applied to small problems. We present several modifications to the algorithm that substantially improve its asymptotic efficiency. First, we show how to significantly improve the time and sample complexity of a key subprogram, resulting in similar improvements in the bounds on the overall DNF algorithm. We also apply known methods to convert the resulting algorithm to an attribute-efficient algorithm. Furthermore, we develop techniques for lower bounding the sample size required for PAC learning with membership queries under a fixed distribution and apply this technique to the uniform-distribution DNF learning problem. Finally, we present a learning algorithm for DNF that is attribute efficient in its use of random bits.
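Here "attribute efficient" carries its usual meaning (a standard definition, paraphrased rather than quoted from the paper): sample complexity that grows polynomially with the DNF size and accuracy parameters but only polylogarithmically with the total number of attributes, e.g. on the order of $\mathrm{poly}(s, 1/\epsilon) \cdot \mathrm{polylog}(n)$ examples for an $s$-term DNF over $n$ variables.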


SIAM Journal on Computing | 1999

Learning DNF over the Uniform Distribution Using a Quantum Example Oracle

Nader H. Bshouty; Jeffrey C. Jackson

We generalize the notion of probably approximately correct (PAC) learning from an example oracle to a notion of efficient learning on a quantum computer using a quantum example oracle. This quantum example oracle is a natural extension of the traditional PAC example oracle, and it immediately follows that all PAC-learnable function classes are learnable in the quantum model. Furthermore, we obtain positive quantum learning results for classes that are not known to be PAC learnable. Specifically, we show that disjunctive normal form (DNF) is efficiently learnable with respect to the uniform distribution by a quantum algorithm using a quantum example oracle. While it was already known that DNF is uniform-learnable using a membership oracle, we prove that a quantum example oracle with respect to uniform is less powerful than a membership oracle.
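The quantum example oracle at the center of the comparison can be stated concisely (following the Bshouty-Jackson definition, notation mine): a call to $\mathrm{QEX}(f, D)$ supplies the superposition

$$\sum_{x \in \{0,1\}^n} \sqrt{D(x)}\, \big|x, f(x)\big\rangle,$$

so measuring it yields an ordinary classical example $(x, f(x))$ with $x \sim D$, while a quantum learner may instead apply unitary transformations, such as the Fourier transform over $\mathbb{Z}_2^n$, before measuring; this is the extra power exploited for the uniform-distribution DNF result.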


Conference on Learning Theory | 1996

On restricted-focus-of-attention learnability of Boolean functions

Andreas Birkendorf; Eli Dichterman; Jeffrey C. Jackson; Norbert Klasner; Hans Ulrich Simon


Collaboration


Dive into Jeffrey C. Jackson's collaboration.

Top Co-Authors

Avrim Blum
Carnegie Mellon University

Nader H. Bshouty
Technion – Israel Institute of Technology

Adam R. Klivans
University of Texas at Austin

Merrick L. Furst
Carnegie Mellon University