Publication


Featured research published by Nader H. Bshouty.


Journal of the ACM | 2000

Learning functions represented as multiplicity automata

Amos Beimel; Francesco Bergadano; Nader H. Bshouty; Eyal Kushilevitz; Stefano Varricchio

We study the learnability of multiplicity automata in Angluin's exact learning model, and we investigate its applications. Our starting point is a known theorem from automata theory relating the number of states in a minimal multiplicity automaton for a function to the rank of its Hankel matrix. With this theorem in hand, we present a new simple algorithm for learning multiplicity automata with improved time and query complexity, and we prove the learnability of various concept classes. These include, among others:

- The class of disjoint DNF, and more generally satisfy-O(1) DNF.
- The class of polynomials over finite fields.
- The class of bounded-degree polynomials over infinite fields.
- The class of XOR of terms.
- Certain classes of boxes in high dimensions.

In addition, we obtain the best query complexity for several classes known to be learnable by other methods, such as decision trees and polynomials over GF(2). While multiplicity automata are shown to be useful for proving the learnability of some subclasses of DNF formulae and various other classes, we also study the limitations of this method. We prove that this method cannot be used to resolve the learnability of some other open problems, such as the learnability of general DNF formulas, or even of k-term DNF for k = ω(log n) or satisfy-s DNF formulas for s = ω(1). These results are proven by exhibiting functions in the above classes that require multiplicity automata with a super-polynomial number of states.
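
As a worked illustration of the Hankel matrix theorem the abstract builds on (the function, field, and code below are my own choices, not the paper's): for the parity function, the Hankel matrix H has entries H[u][v] = f(uv), and its rank over GF(2) is 2, matching the 2-state multiplicity automaton that computes parity.

```python
# A minimal sketch (my own code, not the paper's): the Hankel matrix of
# f: {0,1}* -> GF(2) has entries H[u][v] = f(uv), and the cited theorem
# says the minimal multiplicity automaton for f has exactly rank(H) states.
from itertools import product

def parity(word):                 # f(w) = XOR of the bits of w
    return sum(word) % 2

def words(max_len):               # all 0/1 strings of length <= max_len
    for n in range(max_len + 1):
        yield from product((0, 1), repeat=n)

def gf2_rank(matrix):             # Gaussian elimination over GF(2)
    rows = [list(r) for r in matrix]
    rank = 0
    for col in range(len(rows[0]) if rows else 0):
        pivot = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

prefixes = suffixes = list(words(3))
H = [[parity(u + v) for v in suffixes] for u in prefixes]
print(gf2_rank(H))                # 2: parity has a 2-state multiplicity automaton
```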


Algorithmic Learning Theory | 1999

PAC learning with nasty noise

Nader H. Bshouty; Nadav Eiron; Eyal Kushilevitz

We introduce a new model for learning in the presence of noise, which we call the Nasty Noise model. This model generalizes previously considered models of learning with noise. The learning process in this model, which is a variant of the PAC model, proceeds as follows. Suppose that the learning algorithm asks for m examples during its execution. The examples that the algorithm gets are generated by a nasty adversary that works according to the following steps. First, the adversary chooses m examples independently according to a fixed (but unknown to the learning algorithm) distribution D, as in the PAC model. Then the powerful adversary, upon seeing the specific m examples that were chosen (and using its knowledge of the target function, the distribution D, and the learning algorithm), is allowed to remove a fraction of the examples of its choice and replace them with the same number of arbitrary examples of its choice; the m modified examples are then given to the learning algorithm. The only restriction on the adversary is that the number of examples it is allowed to modify must be distributed according to a binomial distribution with parameters η (the noise rate) and m.

On the negative side, we prove that no algorithm can achieve accuracy of ε < 2η in learning any non-trivial class of functions. We also give some lower bounds on the sample complexity required to achieve accuracy ε = 2η + Δ. On the positive side, we show that a polynomial (in the usual parameters, and in 1/(ε - 2η)) number of examples suffices for learning any class of finite VC-dimension with accuracy ε > 2η. This algorithm may not be efficient; however, we also show that a fairly wide family of concept classes can be efficiently learned in the presence of nasty noise.
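
A minimal simulation of the sample-generation process just described (the distribution, the target, and the label-flipping adversary are my own illustrative choices; the model allows the adversary to substitute arbitrary examples, not merely flip labels):

```python
# Sketch of the Nasty Noise sample generator described above.
import random

def nasty_sample(m, eta, target, draw_x, adversary):
    # Step 1: m examples drawn i.i.d. from D and labeled by the target.
    sample = [(x, target(x)) for x in (draw_x() for _ in range(m))]
    # Step 2: the number of corrupted examples is Binomial(m, eta).
    k = sum(random.random() < eta for _ in range(m))
    # Step 3: the all-seeing adversary rewrites k examples of its choice.
    return adversary(sample, k)

def flip_adversary(sample, k):
    # Deliberately weak illustrative adversary: flip the first k labels;
    # the model permits replacing those examples with arbitrary ones.
    return [(x, 1 - y) if i < k else (x, y) for i, (x, y) in enumerate(sample)]

draw_x = lambda: tuple(random.randint(0, 1) for _ in range(5))  # D: uniform on {0,1}^5
target = lambda x: x[0]                                         # toy target concept
data = nasty_sample(1000, 0.1, target, draw_x, flip_adversary)
```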


Conference on Learning Theory | 1999

More efficient PAC-learning of DNF with membership queries under the uniform distribution

Nader H. Bshouty; Jeffrey C. Jackson; Christino Tamon

An efficient algorithm exists for learning disjunctive normal form (DNF) expressions in the uniform-distribution PAC learning model with membership queries [J97], but in practice the algorithm can only be applied to small problems. We present several modifications to the algorithm that substantially improve its asymptotic efficiency. First, we show how to significantly improve the time and sample complexity of a key subprogram, resulting in similar improvements in the bounds on the overall DNF algorithm. We also apply known methods to convert the resulting algorithm to an attribute-efficient algorithm. Furthermore, we develop techniques for lower-bounding the sample size required for PAC learning with membership queries under a fixed distribution, and we apply these techniques to the uniform-distribution DNF learning problem. Finally, we present a learning algorithm for DNF that is attribute efficient in its use of random bits.
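
For context, [J97] is Jackson's Harmonic Sieve, whose core step repeatedly estimates Fourier coefficients of the target under the uniform distribution using membership queries. The sketch below shows only that basic estimation step (the function names and toy target are mine; the actual sieve and the paper's improvements are far more involved):

```python
# For S a subset of [n], the Fourier coefficient under the uniform
# distribution is  fhat(S) = E_x[ (-1)^(f(x) + sum_{i in S} x_i) ];
# membership queries supply f(x) at the sampled points.
import random

def estimate_fourier_coefficient(f, n, S, samples=20000):
    total = 0
    for _ in range(samples):
        x = [random.randint(0, 1) for _ in range(n)]
        chi = (-1) ** sum(x[i] for i in S)      # the character chi_S(x)
        total += (-1) ** f(x) * chi             # membership query on x
    return total / samples

# Toy target: the 2-term DNF  (x0 AND x1) OR (x2 AND x3)  on 6 variables.
f = lambda x: int((x[0] and x[1]) or (x[2] and x[3]))
print(estimate_fourier_coefficient(f, 6, [0, 1]))  # about -0.375 here
```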


Conference on Learning Theory | 1993

Asking questions to minimize errors

Nader H. Bshouty; Sally A. Goldman; Thomas R. Hancock; Sleiman Matar

A number of efficient learning algorithms achieve exact identification of an unknown function from some class using membership and equivalence queries. Using a standard transformation such algorithms can easily be converted to on-line learning algorithms that use membership queries. Under such a transformation the number of equivalence queries made by the query algorithm directly corresponds to the number of mistakes made by the on-line algorithm. In this paper we consider several of the natural classes known to be learnable in this setting, and investigate the minimum number of equivalence queries with accompanying counterexamples (or equivalently the minimum number of mistakes in the on-line model) that can be made by a learning algorithm that makes a polynomial number of membership queries and uses polynomial computation time. We are able both to reduce the number of equivalence queries used by the previous algorithms and often to prove matching lower bounds. As an example, consider the class of DNF formulas over n variables with at most k = O(log n) terms. Previously, the algorithm of Blum and Rudich provided the best known upper bound of 2^{O(k)} log n for the minimum number of equivalence queries needed for exact identification. We greatly improve on this upper bound, showing that exactly k counterexamples are needed if the learner knows k a priori and exactly k + 1 counterexamples are needed if the learner does not know k a priori. This exactly matches known lower bounds of Bshouty and Cleve. For many of our results we obtain a complete characterization of the trade-off between the number of membership and equivalence queries needed for exact identification. The classes we consider here are monotone DNF formulas, Horn sentences, O(log n)-term DNF formulas, read-k sat-j DNF formulas, read-once formulas over various bases, and deterministic finite automata.
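
A sketch of the "standard transformation" the abstract opens with (the learner interface here is hypothetical): the on-line learner predicts with the exact learner's current hypothesis, and each mistake is fed back as the counterexample an equivalence query would have returned, so equivalence queries and on-line mistakes correspond one-to-one.

```python
# Sketch of the standard EQ+MQ -> on-line transformation (hypothetical
# interface).  Membership queries issued internally by the exact learner
# are left untouched by the transformation.
class OnlineFromExact:
    def __init__(self, exact_learner):
        # exact_learner is assumed to expose hypothesis() -> callable
        # and process_counterexample(x, y).
        self.learner = exact_learner

    def predict(self, x):
        return self.learner.hypothesis()(x)

    def observe(self, x, y):
        if self.predict(x) != y:                       # a mistake ...
            self.learner.process_counterexample(x, y)  # ... is an EQ counterexample
```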


SIAM Journal on Computing | 1999

Learning DNF over the Uniform Distribution Using a Quantum Example Oracle

Nader H. Bshouty; Jeffrey C. Jackson

We generalize the notion of probably approximately correct (PAC) learning from an example oracle to a notion of efficient learning on a quantum computer using a quantum example oracle. This quantum example oracle is a natural extension of the traditional PAC example oracle, and it immediately follows that all PAC-learnable function classes are learnable in the quantum model. Furthermore, we obtain positive quantum learning results for classes that are not known to be PAC learnable. Specifically, we show that disjunctive normal form (DNF) is efficiently learnable with respect to the uniform distribution by a quantum algorithm using a quantum example oracle. While it was already known that DNF is uniform-learnable using a membership oracle, we prove that a quantum example oracle with respect to uniform is less powerful than a membership oracle.
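
For concreteness, the quantum example oracle returns on each call a superposition of labeled examples weighted by the distribution, written below in standard notation (a paraphrase, not a quotation from the paper) together with its specialization to the uniform distribution:

```latex
\[
  \mathrm{QEX}(f, D)\colon\; \lvert \mathbf{0}\rangle \;\longmapsto\;
  \sum_{x \in \{0,1\}^n} \sqrt{D(x)}\, \lvert x, f(x) \rangle,
  \qquad\text{so for uniform } D\colon\quad
  2^{-n/2} \sum_{x \in \{0,1\}^n} \lvert x, f(x) \rangle.
\]
```

Under the uniform distribution this state supports Fourier sampling of the target, which is what the quantum DNF learner exploits in place of membership queries.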


Conference on Learning Theory | 1995

Learning Boolean Read-Once Formulas over Generalized Bases

Nader H. Bshouty; Thomas R. Hancock; Lisa Hellerstein

A formula is read-once if each variable appears at most once in it. Previously, Angluin, Hellerstein, and Karpinski gave a polynomial-time algorithm that uses membership and equivalence queries to exactly identify read-once Boolean formulas over the basis {AND, OR, NOT}. In this paper we consider natural generalizations of this basis, and develop exact identification algorithms for more powerful classes of read-once formulas. We show that read-once formulas over the basis of arbitrary Boolean functions of constant fan-in k (i.e., any f: {0,1}^c → {0,1} with 1 ≤ c ≤ k, where k is a constant) are exactly identifiable in polynomial time using membership and equivalence queries. We also show that read-once formulas over the basis of arbitrary symmetric Boolean functions are exactly identifiable in polynomial time in this model. Given standard cryptographic assumptions, there is no polynomial-time identification algorithm for read-twice formulas over either of these bases in this model. We further show that for any basis class B meeting certain technical conditions, any polynomial-time identification algorithm for read-once formulas over B can be extended to a polynomial-time identification algorithm for read-once formulas over the union of B and the arbitrary functions of constant fan-in. As a result, read-once formulas over the union of arbitrary symmetric and arbitrary constant fan-in gates are also exactly identifiable in polynomial time using membership and equivalence queries.
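
As a concrete picture of the generalized bases involved (the representation below is mine): a read-once formula is a tree whose leaves are distinct variables, and a symmetric gate's output depends only on how many of its inputs are 1, so a fan-in-c gate can be tabulated by c + 1 output values.

```python
# Sketch (my own representation) of a read-once formula over a basis of
# symmetric gates: each internal node stores its output as a function of
# the *count* of inputs that are 1; each variable labels at most one leaf.
class Gate:
    def __init__(self, value_by_count, children):
        self.value_by_count = value_by_count  # outputs for 0, 1, ..., fan-in ones
        self.children = children

    def eval(self, assignment):
        ones = sum(c.eval(assignment) for c in self.children)
        return self.value_by_count[ones]

class Var:
    def __init__(self, index):
        self.index = index

    def eval(self, assignment):
        return assignment[self.index]

# MAJORITY(x0, x1, x2) is symmetric: output 1 iff at least 2 inputs are 1.
maj = Gate((0, 0, 1, 1), [Var(0), Var(1), Var(2)])
print(maj.eval([1, 1, 0]))   # 1
```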


SIAM Journal on Computing | 1995

Learning Arithmetic Read-Once Formulas

Nader H. Bshouty; Thomas R. Hancock; Lisa Hellerstein

A formula is read-once if each variable appears at most once in it. An arithmetic read-once formula is one in which the operators are addition, subtraction, multiplication, and division. We present polynomial-time algorithms for exact learning of arithmetic read-once formulas over a field. We present a membership and equivalence query algorithm that identifies arithmetic read-once formulas over an arbitrary field. We present a randomized membership query algorithm (i.e., a randomized black-box interpolation algorithm) that identifies such formulas over finite fields with at least 2n + 5 elements (where n is the number of variables).
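
A sketch of the objects being interpolated (the representation and the prime are my own choices): an arithmetic read-once formula over GF(p), which the randomized algorithm may only query as a black box at field points.

```python
# Sketch (my own representation) of an arithmetic read-once formula over
# GF(p): a tree with +, -, *, / at internal nodes and each variable at
# most one leaf.  The randomized interpolation algorithm only queries
# such a formula as a black box at chosen field points.
P = 101  # a prime with at least 2n + 5 elements for n variables, per the theorem

def ev(node, x):
    op, *args = node
    if op == 'var':
        return x[args[0]] % P
    if op == 'const':
        return args[0] % P
    a, b = ev(args[0], x), ev(args[1], x)
    if op == '+': return (a + b) % P
    if op == '-': return (a - b) % P
    if op == '*': return (a * b) % P
    if op == '/': return (a * pow(b, P - 2, P)) % P  # field inverse, b != 0

# (x0 + x1) * (x2 / x3) is read-once: each variable appears exactly once.
f = ('*', ('+', ('var', 0), ('var', 1)), ('/', ('var', 2), ('var', 3)))
print(ev(f, [3, 4, 10, 2]))   # ((3 + 4) * (10 / 2)) mod 101 = 35
```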


Journal of Computer and System Sciences | 1998

On Interpolating Arithmetic Read-Once Formulas with Exponentiation

Daoud Bshouty; Nader H. Bshouty


Symposium on the Theory of Computing | 1992

Learning arithmetic read-once formulas

Nader H. Bshouty; Thomas R. Hancock; Lisa Hellerstein


European Conference on Computational Learning Theory | 1997

On learning branching programs and small depth circuits

Francesco Bergadano; Nader H. Bshouty; Christino Tamon; Stefano Varricchio


Collaboration


Dive into Nader H. Bshouty's collaboration.

Top Co-Authors

Eyal Kushilevitz

Technion – Israel Institute of Technology


Amos Beimel

Ben-Gurion University of the Negev
