Jean-Pierre Nadal
École Normale Supérieure
Publications
Featured research published by Jean-Pierre Nadal.
Complexity | 2002
Gérard Weisbuch; Guillaume Deffuant; Frédéric Amblard; Jean-Pierre Nadal
We present a model of opinion dynamics in which agents adjust continuous opinions as a result of random binary encounters whenever their difference in opinion is below a given threshold. High thresholds yield convergence of opinions toward an average opinion, whereas low thresholds result in several opinion clusters. The model is further generalized to network interactions, threshold heterogeneity, adaptive thresholds, and binary strings of opinions.
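A minimal simulation makes the two regimes easy to see. The sketch below implements the pairwise update described in the abstract; the parameter names (threshold d, convergence rate mu, agent count) are our own illustrative choices, not taken from the paper.

```python
import numpy as np

# Toy simulation of the bounded-confidence dynamics described above.
# Parameter names (threshold d, convergence rate mu) are our own
# illustrative choices, not taken from the paper.
rng = np.random.default_rng(0)

def simulate(n_agents=200, d=0.2, mu=0.5, n_steps=50_000):
    x = rng.uniform(0.0, 1.0, n_agents)        # initial opinions in [0, 1]
    for _ in range(n_steps):
        i, j = rng.choice(n_agents, size=2, replace=False)
        if abs(x[i] - x[j]) < d:               # encounter acts only below threshold
            shift = mu * (x[j] - x[i])
            x[i] += shift                      # the pair moves closer together
            x[j] -= shift
    return np.sort(x)

# High threshold: opinions converge toward a single average opinion.
print(simulate(d=0.5).round(2)[::40])
# Low threshold: several distinct opinion clusters survive.
print(simulate(d=0.1).round(2)[::40])
```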
Network: Computation In Neural Systems | 1994
Jean-Pierre Nadal; Néstor Parga
We investigate the consequences of maximizing information transfer in a simple neural network (one input layer, one output layer), focusing on the case of nonlinear transfer functions. We assume that both receptive fields (synaptic efficacies) and transfer functions can be adapted to the environment. The main result is that, for bounded and invertible transfer functions, in the case of a vanishing additive output noise and no input noise, maximization of information (Linsker's infomax principle) leads to a factorial code, hence to the same solution as required by the redundancy-reduction principle of Barlow. We also show that this result is valid for linear and, more generally, unbounded transfer functions, provided optimization is performed under an additive constraint, i.e. one that can be written as a sum of terms, each specific to one output neuron. Finally, we study the effect of a non-zero input noise. We find that, to first order in the input noise, assumed to be small in comparison with th...
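For the noiseless invertible case, the optimal solution for a single output has a well-known concrete form: the transfer function should equal the cumulative distribution of the input, so that the output is uniform (maximum entropy). The sketch below illustrates this with an arbitrary Gaussian input; it is a one-neuron demonstration, not the paper's multi-output analysis.

```python
import numpy as np

# One-neuron, noiseless illustration: with an invertible bounded
# transfer function and no noise, maximizing transmitted information
# means maximizing output entropy, achieved when the transfer function
# equals the input's cumulative distribution (histogram equalization).
# The Gaussian input is an arbitrary choice for this demo.
rng = np.random.default_rng(1)
x = rng.normal(loc=2.0, scale=1.5, size=100_000)    # non-uniform input

# Empirical CDF as the transfer function: map each input to its rank.
y = (np.argsort(np.argsort(x)) + 0.5) / x.size

# The output is uniform on (0, 1): maximum entropy for a bounded output.
density, _ = np.histogram(y, bins=20, range=(0.0, 1.0), density=True)
print(density.round(2))     # every bin close to 1.0
```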
Neural Computation | 1998
Nicolas Brunel; Jean-Pierre Nadal
In the context of parameter estimation and model selection, it is only quite recently that a direct link between the Fisher information and information-theoretic quantities has been exhibited. We give an interpretation of this link within the standard framework of information theory. We show that, in the context of population coding, the mutual information between the activity of a large array of neurons and a stimulus to which the neurons are tuned is naturally related to the Fisher information. In the light of this result, we consider the optimization of the tuning curve parameters in the case of neurons responding to a stimulus represented by an angular variable.
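The asymptotic relation usually quoted from this work takes the following form (notation ours): for a scalar stimulus θ with prior p(θ), a large-population response r, and Fisher information J(θ),

```latex
% Asymptotic link between mutual and Fisher information (our notation):
% theta = scalar stimulus with prior p(theta), r = population response,
% J(theta) = Fisher information of the population about theta.
\[
  I(\theta; r) \;\simeq\; H(\theta)
    - \frac{1}{2}\int p(\theta)\,\ln\frac{2\pi e}{J(\theta)}\,d\theta,
  \qquad
  H(\theta) = -\int p(\theta)\,\ln p(\theta)\,d\theta .
\]
```

so that, for large arrays, the mutual information is controlled by the Fisher information, and tuning curves can be optimized through J(θ).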
Neuron | 2004
Nicolas Brunel; Vincent Hakim; Philippe Isope; Jean-Pierre Nadal; Boris Barbour
It is widely believed that synaptic modifications underlie learning and memory. However, few studies have examined what can be deduced about the learning process from the distribution of synaptic weights. We analyze the perceptron, a prototypical feedforward neural network, and obtain the optimal synaptic weight distribution for a perceptron with excitatory synapses. It contains more than 50% silent synapses, and this fraction increases with storage reliability: silent synapses are therefore a necessary byproduct of optimizing learning and reliability. Exploiting the classical analogy between the perceptron and the cerebellar Purkinje cell, we fitted the optimal weight distribution to that measured for granule cell-Purkinje cell synapses. The two distributions agreed well, suggesting that the Purkinje cell can learn up to 5 kilobytes of information, in the form of 40,000 input-output associations.
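The paper's result is analytical (replica-style calculations); as a purely numerical illustration of how an excitatory constraint pushes weights to zero, the sketch below trains a perceptron whose weights are clipped to remain non-negative, with sizes, threshold, and learning rate of our own choosing.

```python
import numpy as np

# Toy illustration of the sign-constrained perceptron discussed above.
# We train a perceptron whose weights are clipped to be non-negative
# (excitatory) and report how many weights end up pinned at zero
# ("silent"). All sizes, rates and the threshold are arbitrary choices.
rng = np.random.default_rng(2)
n_inputs, n_patterns, eta = 500, 200, 0.05

X = rng.binomial(1, 0.5, size=(n_patterns, n_inputs)).astype(float)
labels = rng.choice([-1.0, 1.0], size=n_patterns)
w = rng.uniform(0.0, 1.0, n_inputs)
theta = 0.25 * n_inputs            # fixed firing threshold (arbitrary)

for _ in range(200):               # perceptron sweeps with clipping
    errors = 0
    for x, t in zip(X, labels):
        if np.sign(x @ w - theta) != t:
            w += eta * t * x
            np.maximum(w, 0.0, out=w)   # enforce excitatory weights
            errors += 1
    if errors == 0:
        break

# A sizeable fraction of weights typically sits exactly at zero.
print("fraction of silent synapses:", float(np.mean(w == 0.0)))
```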
EPL | 1986
Jean-Pierre Nadal; G. Toulouse; Jean-Pierre Changeux; Stanislas Dehaene
One characteristic behaviour of the Hopfield model of neural networks, namely the catastrophic deterioration of the memory due to overloading, is interpreted in simple physical terms. A general formulation allows for an exploration of some basic issues in learning theory. Two learning schemes are constructed which avoid the overloading deterioration: they continue to learn and forget, maintaining a stationary capacity.
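One simple scheme in this family is learning within bounds: clipping each Hebbian coupling makes old memories fade gracefully while recent ones stay retrievable, instead of the whole memory collapsing when overloaded. The sketch below is a generic demonstration of that contrast, with parameters of our own choosing, not the paper's exact prescription.

```python
import numpy as np

# Generic "palimpsest" demo: Hebbian storage with couplings clipped to
# [-bound, bound]. Network size, bound and noise level are arbitrary.
rng = np.random.default_rng(3)
N = 200

def store(patterns, bound=None):
    """Hebbian storage, one pattern at a time, with optional clipping."""
    J = np.zeros((N, N))
    for xi in patterns:
        J += np.outer(xi, xi) / N
        if bound is not None:
            np.clip(J, -bound, bound, out=J)
    np.fill_diagonal(J, 0.0)
    return J

def recall_overlap(J, xi, flips=10, sweeps=10):
    """Start from a corrupted cue, iterate asynchronously, return overlap."""
    x = xi.copy()
    x[rng.choice(N, flips, replace=False)] *= -1
    for _ in range(sweeps):
        for i in rng.permutation(N):
            x[i] = 1.0 if J[i] @ x >= 0.0 else -1.0
    return float(x @ xi) / N

patterns = rng.choice([-1.0, 1.0], size=(60, N))   # 0.3 N patterns: overloaded
J_plain = store(patterns)
J_bound = store(patterns, bound=4.0 / N)   # bound of a few Hebbian increments

# The plain rule has typically collapsed at this load, while the
# bounded rule typically still recalls the most recent pattern.
print("plain Hebb, most recent pattern:", recall_overlap(J_plain, patterns[-1]))
print("bounded,    most recent pattern:", recall_overlap(J_bound, patterns[-1]))
```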
Neural Computation | 2003
David Philipona; J. K. O'Regan; Jean-Pierre Nadal
This letter suggests that in biological organisms, the perceived structure of reality, in particular the notions of body, environment, space, object, and attribute, could be a consequence of an effort on the part of brains to account for the dependency between their inputs and their outputs in terms of a small number of parameters. To validate this idea, a procedure is demonstrated whereby the brain of a (simulated) organism with arbitrary input and output connectivity can deduce the dimensionality of the rigid group of the space underlying its input-output relationship, that is, the dimension of what the organism will call physical space.
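A linearized toy version of the dimension-counting argument fits in a few lines: sensory variations caused by the body span one subspace, those caused by the environment span another, and compensable motions of rigid space correspond to the shared directions, giving dim(group) = dim(body span) + dim(env span) − dim(joint span). The construction below, with hypothetical random Jacobians, is our own illustration, not the paper's simulated organism.

```python
import numpy as np

# Linearized toy version of the dimension-counting idea: body commands
# and environment states drive the sensors partly through a shared
# 3-dimensional channel (standing in for motions of rigid space) plus
# private channels. The Jacobians are hypothetical, for illustration.
rng = np.random.default_rng(4)
n_sensors = 40

def rank(M, tol=1e-10):
    return int(np.linalg.matrix_rank(M, tol=tol))

shared = rng.normal(size=(n_sensors, 3))
J_body = np.hstack([shared, rng.normal(size=(n_sensors, 4))])  # 7 body dims
J_env  = np.hstack([shared, rng.normal(size=(n_sensors, 5))])  # 8 env dims

p = rank(J_body)                        # variations from the body alone
e = rank(J_env)                         # variations from the environment alone
b = rank(np.hstack([J_body, J_env]))    # variations from both together

print("estimated dimension of compensable motions:", p + e - b)  # -> 3
```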
Network: Computation In Neural Systems | 1990
J.A. Sirat; Jean-Pierre Nadal
The authors propose a new classifier based on neural network techniques. The ‘network’ consists of a set of perceptrons functionally organized in a binary tree (a ‘neural tree’). The learning algorithm is inspired by a growth algorithm, the tiling algorithm, recently introduced for feedforward neural networks. As in the former case, this is a constructive algorithm for which convergence is guaranteed. In the neural tree one distinguishes the structural organization from the functional organization: each neuron of a neural tree receives inputs from, and only from, the input layer; its output does not feed into any other neuron, but is used to propagate down a decision tree. The main advantage of this approach is the local processing in restricted portions of input space, during both learning and classification. Moreover, only a small subset of neurons has to be updated during the classification stage. Finally, this approach is easily and efficiently extended to classification in a multiclass probl...
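A stripped-down reconstruction of the idea: each internal node holds a perceptron trained on the data that reaches it, and the sign of its output routes a sample down the tree. The sketch below uses a plain perceptron rule and a purity stopping criterion of our own choosing, not the authors' exact tiling-inspired procedure.

```python
import numpy as np

# Minimal "neural tree": perceptron nodes route samples down a binary
# tree. Simplified reconstruction; depth limit and stopping rule are ours.
rng = np.random.default_rng(5)

def train_perceptron(X, y, epochs=50, eta=0.1):
    w = np.zeros(X.shape[1] + 1)
    Xb = np.hstack([X, np.ones((len(X), 1))])        # bias input
    for _ in range(epochs):
        for xi, ti in zip(Xb, y):
            if np.sign(xi @ w) != ti:
                w += eta * ti * xi
    return w

class Node:
    def __init__(self, X, y, depth=0, max_depth=6):
        self.label = 1.0 if y.mean() >= 0 else -1.0  # leaf fallback: majority
        self.w = None
        if len(set(y)) > 1 and depth < max_depth:
            self.w = train_perceptron(X, y)
            side = np.sign(np.hstack([X, np.ones((len(X), 1))]) @ self.w)
            if 0 < np.sum(side > 0) < len(y):        # split is non-trivial
                self.left = Node(X[side <= 0], y[side <= 0], depth + 1, max_depth)
                self.right = Node(X[side > 0], y[side > 0], depth + 1, max_depth)
            else:
                self.w = None                        # degenerate split: leaf

    def predict(self, x):
        if self.w is None:
            return self.label
        branch = self.right if np.append(x, 1.0) @ self.w > 0 else self.left
        return branch.predict(x)

# XOR-like data that a single perceptron cannot separate.
X = rng.uniform(-1, 1, size=(400, 2))
y = np.sign(X[:, 0] * X[:, 1])
tree = Node(X, y)
print("training accuracy:",
      np.mean([tree.predict(x) == t for x, t in zip(X, y)]))
```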
Trends in Neurosciences | 2007
Boris Barbour; Nicolas Brunel; Vincent Hakim; Jean-Pierre Nadal
Much research effort into synaptic plasticity has been motivated by the idea that modifications of synaptic weights (or strengths or efficacies) underlie learning and memory. Here, we examine the possibility of exploiting the statistics of experimentally measured synaptic weights to deduce information about the learning process. Analysing distributions of synaptic weights requires a theoretical framework to interpret the experimental measurements, but the results can be unexpectedly powerful, yielding strong constraints on possible learning theories as well as information that is difficult to obtain by other means, such as the information storage capacity of a cell. We review the available experimental and theoretical techniques as well as important open issues.
Cognition | 2006
Sharon Peperkamp; Rozenn Le Calvez; Jean-Pierre Nadal; Emmanuel Dupoux
Phonological rules relate surface phonetic word forms to abstract underlying forms that are stored in the lexicon. Infants must thus acquire these rules in order to infer the abstract representation of words. We implement a statistical learning algorithm for the acquisition of one type of rule, namely allophony, which introduces context-sensitive phonetic variants of phonemes. This algorithm is based on the observation that different realizations of a single phoneme typically do not appear in the same contexts (ideally, they have complementary distributions). In particular, it measures the discrepancies in context probabilities for each pair of phonetic segments. In Experiment 1, we test the algorithm's performance on a pseudo-language and show that it is robust to statistical noise due to sampling and coding errors, and to non-systematic rule application. In Experiment 2, we show that a natural corpus of semiphonetically transcribed child-directed speech in French presents a very large number of near-complementary distributions that do not correspond to existing allophonic rules. These spurious allophonic rules can be eliminated by a linguistically motivated filtering mechanism based on a phonetic representation of segments. We discuss the role of a priori linguistic knowledge in the statistical learning of phonology.
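The core measurement can be sketched compactly: for each pair of segments, compare the distributions of their immediate contexts and flag pairs whose contexts barely overlap. The toy corpus and the Bhattacharyya-style overlap score below are our illustrative choices, not the paper's exact discrepancy measure.

```python
from collections import Counter
import math

# Toy context-discrepancy measure: pairs of vowels whose left contexts
# barely overlap are candidate allophones. In this made-up corpus, [y]
# occurs only after t, while [u] occurs only after k or p.
corpus = "kipu puki tyka pity kuty paku".split()

def left_contexts(seg):
    counts = Counter()
    for word in corpus:
        for i, ch in enumerate(word):
            if ch == seg and i > 0:
                counts[word[i - 1]] += 1
    total = sum(counts.values())
    return {c: n / total for c, n in counts.items()}

def overlap(p, q):
    # Bhattacharyya coefficient: 1 = identical contexts, 0 = complementary.
    keys = set(p) | set(q)
    return sum(math.sqrt(p.get(k, 0.0) * q.get(k, 0.0)) for k in keys)

vowels = sorted(set("".join(corpus)) & set("aiuy"))
for a in vowels:
    for b in vowels:
        if a < b:
            print(a, b, round(overlap(left_contexts(a), left_contexts(b)), 2))
# Note: y/u scores 0 (a true complementary pair here), but so does i/y,
# mirroring the paper's point that spurious near-complementary pairs
# arise and must be removed by a phonetically motivated filter.
```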
International Journal of Neural Systems | 1989
Jean-Pierre Nadal
We study an algorithm for a feedforward network which is similar in spirit to the Tiling algorithm recently introduced: the hidden units are added one by one until the network performs the desired task, and convergence is guaranteed. The difference is in the architecture of the network, which is more constrained here. Numerical tests show performance similar to that of the Tiling algorithm, although the total number of couplings in general grows faster.
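As a generic illustration of this family of growth algorithms (units added one by one, with guaranteed termination), the sketch below grows a decision list of hyperplanes, each cutting off a pure-class group of extreme points. This particular scheme is our own, not the paper's more constrained architecture.

```python
import numpy as np

# Generic constructive growth sketch: each new unit is a hyperplane
# that cuts off a pure-class group of remaining points, read out as a
# decision list. Every unit removes at least one point, so growth
# terminates. Our own illustrative scheme, not the paper's.
rng = np.random.default_rng(6)

def grow(X, y):
    units, idx = [], np.arange(len(X))
    while len(idx):
        v = rng.normal(size=X.shape[1])       # random projection direction
        order = idx[np.argsort(-(X[idx] @ v))]  # remaining points, far to near
        label = y[order[0]]                   # class of the extreme point
        take = 1
        while take < len(order) and y[order[take]] == label:
            take += 1
        if take < len(order):                 # threshold between the groups
            th = 0.5 * (X[order[take - 1]] @ v + X[order[take]] @ v)
        else:
            th = X[order[-1]] @ v - 1.0       # last unit takes everything
        units.append((v, th, label))
        removed = set(order[:take])
        idx = np.array([i for i in idx if i not in removed])
    return units

def predict(units, x, default=1.0):
    for v, th, label in units:                # decision-list readout
        if x @ v > th:
            return label
    return default

X = rng.uniform(-1, 1, size=(200, 2))
y = np.sign(X[:, 0] * X[:, 1])                # XOR-like, not separable
units = grow(X, y)
print(len(units), "units, training accuracy:",
      np.mean([predict(units, xi) == ti for xi, ti in zip(X, y)]))
```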