Mostefa Golea
University of Ottawa
Publications
Featured research published by Mostefa Golea.
EPL | 1990
Mario Marchand; Mostefa Golea; P. Ruján
We consider a perceptron with N_i input units, one output and a yet unspecified number of hidden units. This perceptron must be able to learn a given but arbitrary set of input-output examples. By sequential learning we mean that groups of patterns, pertaining to the same class, are sequentially separated from the rest by successively adding hidden units until the remaining patterns are all in the same class. We prove that the internal representations obtained by such procedures are linearly separable. Preliminary numerical tests of an algorithm implementing these ideas are presented and compare favourably with results of other growth algorithms.
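A minimal sketch of the growth idea described in this abstract, not the authors' exact procedure: repeatedly train a single perceptron on the remaining patterns, peel off a group that lies alone on one side of its hyperplane and belongs to a single class, and keep adding such hidden units until the remaining patterns all share a class. The pocket-style trainer and the purity test below are illustrative choices.

```python
# Illustrative sketch of sequential hidden-unit growth (not the paper's exact algorithm).
# Assumptions: inputs and labels are +/-1, and a simple pocket-style perceptron
# trainer stands in for whatever single-unit learner is used at each step.
import numpy as np

def train_perceptron(X, y, epochs=100):
    """Perceptron updates, keeping a 'pocket' copy of the best weights seen."""
    w, b = np.zeros(X.shape[1]), 0.0
    best_w, best_b, best_err = w.copy(), b, np.inf
    for _ in range(epochs):
        for x, t in zip(X, y):
            if t * (x @ w + b) <= 0:
                w, b = w + t * x, b + t
        err = np.sum(np.sign(X @ w + b) != y)
        if err < best_err:
            best_w, best_b, best_err = w.copy(), b, err
    return best_w, best_b

def grow_hidden_layer(X, y):
    """Add hidden units one at a time; each unit splits off a single-class group."""
    hidden = []
    X_rem, y_rem = X.copy(), y.copy()
    while len(np.unique(y_rem)) > 1:               # stop when remaining patterns share a class
        w, b = train_perceptron(X_rem, y_rem)
        side = np.sign(X_rem @ w + b)
        for s in (+1, -1):                         # keep a side of the hyperplane that is pure
            mask = side == s
            if mask.any() and len(np.unique(y_rem[mask])) == 1:
                hidden.append((w, b))
                X_rem, y_rem = X_rem[~mask], y_rem[~mask]
                break
        else:
            break                                  # no pure side found; give up in this toy sketch
    return hidden
```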
EPL | 1990
Mostefa Golea; Mario Marchand
This paper explores the application of neural network principles to the construction of decision trees from examples. We consider the problem of constructing a tree of perceptrons able to execute a given but arbitrary Boolean function defined on N_i input bits. We apply a sequential (from one tree level to the next) and parallel (for neurons in the same level) learning procedure to add hidden units until the task in hand is performed. At each step, we use a perceptron-type algorithm over a suitably defined input space to minimize a classification error. The internal representations obtained in this way are linearly separable. Preliminary results of this algorithm are presented.
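For illustration only, here is a generic way to grow a tree whose internal nodes are perceptrons, each trained on the examples that reach it; the paper's level-by-level sequential/parallel procedure differs in detail, and the helper names below are mine.

```python
# Toy sketch of a decision tree with perceptrons at the internal nodes.
# Generic recursive construction for illustration; not the level-by-level
# procedure of the paper. Inputs and labels are +/-1 vectors.
import numpy as np

def fit_perceptron(X, y, epochs=50):
    """Basic pocket perceptron used to split the examples at a node."""
    w, b = np.zeros(X.shape[1]), 0.0
    best = (w.copy(), b, np.inf)
    for _ in range(epochs):
        for x, t in zip(X, y):
            if t * (x @ w + b) <= 0:
                w, b = w + t * x, b + t
        err = np.sum(np.sign(X @ w + b) != y)
        if err < best[2]:
            best = (w.copy(), b, err)
    return best[0], best[1]

class Node:
    def __init__(self, w=None, b=None, label=None):
        self.w, self.b, self.label = w, b, label   # leaf when label is not None
        self.left = self.right = None

def build_tree(X, y, depth=0, max_depth=8):
    majority = 1 if np.sum(y == 1) >= np.sum(y == -1) else -1
    if len(np.unique(y)) == 1 or depth == max_depth:
        return Node(label=majority)                # pure node or depth limit: make a leaf
    w, b = fit_perceptron(X, y)
    left = (X @ w + b) < 0
    if left.all() or not left.any():               # split failed: make a leaf
        return Node(label=majority)
    node = Node(w=w, b=b)
    node.left = build_tree(X[left], y[left], depth + 1, max_depth)
    node.right = build_tree(X[~left], y[~left], depth + 1, max_depth)
    return node

def predict(node, x):
    while node.label is None:
        node = node.left if (x @ node.w + node.b) < 0 else node.right
    return node.label
```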
Machine Learning | 1994
Thomas R. Hancock; Mostefa Golea; Mario Marchand
We investigate, within the PAC learning model, the problem of learning nonoverlapping perceptron networks (also known as read-once formulas over a weighted threshold basis). These are loop-free neural nets in which each node has only one outgoing weight. We give a polynomial time algorithm that PAC learns any nonoverlapping perceptron network using examples and membership queries. The algorithm is able to identify both the architecture and the weight values necessary to represent the function to be learned. Our results shed some light on the effect of the overlap on the complexity of learning in neural networks.
Neural Computation | 1993
Mostefa Golea; Mario Marchand
We present an algorithm that PAC learns any perceptron with binary weights and arbitrary threshold under the family of product distributions. The sample complexity of this algorithm is O[(n/ε)^4 ln(n/δ)] and its running time increases only linearly with the number of training examples. The algorithm does not try to find a hypothesis that agrees with all of the training examples; rather, it constructs a binary perceptron based on various probabilistic estimates obtained from the training examples. We show that, under the restricted case of the uniform distribution and zero threshold, the algorithm reduces to the well known clipped Hebb rule. We calculate exactly the average generalization rate (i.e., the learning curve) of the algorithm, under the uniform distribution, in the limit of an infinite number of dimensions. We find that the error rate decreases exponentially as a function of the number of training examples. Hence, the average case analysis gives a sample complexity of O[n ln(1/ε)], a large improvement over the PAC learning analysis. The analytical expression of the learning curve is in excellent agreement with the extensive numerical simulations. In addition, the algorithm is very robust with respect to classification noise.
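The clipped Hebb rule mentioned in this abstract has a one-line form: each binary weight is set to the sign of the empirical correlation between its input bit and the label. A minimal sketch of the uniform-distribution, zero-threshold case (the variable names and problem sizes are illustrative):

```python
# Clipped Hebb rule for a zero-threshold perceptron with binary (+/-1) weights:
# each weight is the sign of the empirical correlation between its input bit
# and the label. Sketch of the uniform-distribution, zero-threshold case.
import numpy as np

rng = np.random.default_rng(0)
n, m = 101, 2000                                   # odd n avoids ties at the zero threshold

w_target = rng.choice([-1, 1], size=n)             # unknown binary perceptron
X = rng.choice([-1, 1], size=(m, n))               # uniform +/-1 examples
y = np.sign(X @ w_target)                          # labels from the target

w_hat = np.sign(y @ X)                             # clipped Hebb estimate of each weight
w_hat[w_hat == 0] = 1                              # break ties arbitrarily

# generalization rate estimated on fresh examples
X_test = rng.choice([-1, 1], size=(10000, n))
gen_rate = np.mean(np.sign(X_test @ w_hat) == np.sign(X_test @ w_target))
print(f"fraction of weights recovered: {np.mean(w_hat == w_target):.3f}")
print(f"estimated generalization rate: {gen_rate:.3f}")
```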
Conference on Learning Theory | 1993
Mostefa Golea; Mario Marchand
We investigate the clipped Hebb rule for learning different multilayer networks of nonoverlapping perceptrons with binary weights and zero thresholds when the examples are generated according to the uniform distribution. Using the central limit theorem and very simple counting arguments, we calculate exactly its learning curves (i.e., the generalization rates as a function of the number of training examples) in the limit of a large number of inputs. We find that the learning curves converge exponentially rapidly to perfect generalization. These results are very encouraging given the simplicity of the learning rule. The analytic expressions of the learning curves are in excellent agreement with the numerical simulations, even for moderate values of the number of inputs. From: Proceedings of the Sixth Annual ACM Conference on Computational Learning Theory, pp. 151–157, Santa Cruz, CA, July 26–28, 1993. ACM Press.
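A small simulation in the spirit of this setting, assuming the architecture is known and only the weights are learned by the clipped Hebb rule. The target below is a committee machine of nonoverlapping binary-weight, zero-threshold perceptrons, which is just one instance of the networks such an analysis covers; the sizes and sample counts are arbitrary illustrative choices.

```python
# Learning-curve simulation for the clipped Hebb rule on a two-layer network of
# nonoverlapping perceptrons with binary weights and zero thresholds
# (here: a committee machine whose hidden units see disjoint blocks of inputs).
# Illustrative setup; the paper derives such curves analytically.
import numpy as np

rng = np.random.default_rng(0)
K, block = 3, 9                                    # 3 hidden units, 9 inputs each (odd sizes avoid ties)
n = K * block

def net_output(X, w):
    """Nonoverlapping committee machine: hidden unit k sees input block k only."""
    Xb = X.reshape(len(X), K, block)               # split inputs into disjoint blocks
    wb = w.reshape(K, block)
    hidden = np.sign(np.einsum('mkb,kb->mk', Xb, wb))
    return np.sign(hidden.sum(axis=1))

w_target = rng.choice([-1, 1], size=n)
X_test = rng.choice([-1, 1], size=(20000, n))
y_test = net_output(X_test, w_target)

for m in [50, 200, 800, 3200]:
    X = rng.choice([-1, 1], size=(m, n))
    y = net_output(X, w_target)
    w_hat = np.sign(y @ X)                         # clipped Hebb rule, weight by weight
    w_hat[w_hat == 0] = 1                          # break ties arbitrarily
    gen = np.mean(net_output(X_test, w_hat) == y_test)
    print(f"m = {m:5d}   estimated generalization rate ~ {gen:.3f}")
```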
Neural Networks | 1996
Mostefa Golea; Mario Marchand; Thomas R. Hancock
We investigate the learnability, under the uniform distribution, of neural concepts that can be represented as simple combinations of nonoverlapping perceptrons (also called μ-perceptrons) with binary weights and arbitrary thresholds. Two perceptrons are said to be nonoverlapping if they do not share any input variables. Specifically, we investigate, within the distribution-specific PAC model, the learnability of μ-perceptron unions, decision lists, and generalized decision lists. In contrast to most neural network learning algorithms, we do not assume that the architecture of the network is known in advance. Rather, it is the task of the algorithm to find both the architecture of the net and the weight values necessary to represent the function to be learned. We give polynomial time algorithms for learning these restricted classes of networks. The algorithms work by estimating various statistical quantities that yield enough information to infer, with high probability, the target concept. Because the algorithms are statistical in nature, they are robust against large amounts of random classification noise.
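The statistical flavor of these algorithms, and their noise robustness, can be illustrated with a toy fragment (the estimators below are not the paper's, and all names are mine): estimate the input-output correlations from noisy examples; uniform classification noise of rate η only scales such correlations by a factor (1 − 2η), so the signs they reveal survive moderate noise.

```python
# Toy illustration of the statistical approach: decisions are based on estimated
# correlations E[y * x_i], which uniform classification noise of rate eta only
# attenuates by (1 - 2*eta). Not the paper's actual estimators; the target is a
# union (OR) of two nonoverlapping mu-perceptrons over disjoint input blocks.
import numpy as np

rng = np.random.default_rng(1)
n, m, eta = 20, 20000, 0.2

# target: OR of two perceptrons over bits 0-4 and 5-9; bits 10-19 are irrelevant
w1, w2 = rng.choice([-1, 1], size=5), rng.choice([-1, 1], size=5)

def target(X):
    h1 = np.sign(X[:, 0:5] @ w1)
    h2 = np.sign(X[:, 5:10] @ w2)
    return np.where((h1 > 0) | (h2 > 0), 1, -1)

X = rng.choice([-1, 1], size=(m, n))
y = target(X)
flip = rng.random(m) < eta                         # uniform classification noise
y_noisy = np.where(flip, -y, y)

corr_clean = (y @ X) / m                           # relevant bits stand out, irrelevant ones ~ 0
corr_noisy = (y_noisy @ X) / m
print("clean correlations (first 12):", np.round(corr_clean[:12], 2))
print("noisy correlations (first 12):", np.round(corr_noisy[:12], 2))
print(f"noisy/clean ratio on relevant bits ~ {np.mean(corr_noisy[:10] / corr_clean[:10]):.2f} "
      f"(expected ~ {1 - 2 * eta:.2f})")
```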
Computational Intelligence | 1993
Mostefa Golea; Mario Marchand
We investigate the problem of learning two-layer neural nets of nonoverlapping perceptrons where each input unit is connected to one and only one hidden unit. We first show that this restricted problem with no overlap at all between the receptive fields of the hidden units is as hard as the general problem (with total overlap) if the learner uses examples only. However, if membership queries are allowed, the restricted problem is indeed easier to solve. We give a learning algorithm that uses examples and membership queries to PAC learn the intersection of K-nonoverlapping perceptrons, regardless of whether the instance space is Boolean, discrete, or continuous. An extension of this algorithm is proven to PAC learn two-layer nets with K-nonoverlapping perceptrons. The simulations performed indicate that both algorithms are fast and efficient.
Network: Computation in Neural Systems | 1993
Mario Marchand; Mostefa Golea
Neural Information Processing Systems | 1992
Mostefa Golea; Mario Marchand; Thomas R. Hancock
Archive | 1993
Mario Marchand; Mostefa Golea