Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Gérard Dreyfus is active.

Publication


Featured research published by Gérard Dreyfus.


Neurocomputing | 1998

Training wavelet networks for nonlinear dynamic input–output modeling

Yacine Oussar; Isabelle Rivals; L. Personnaz; Gérard Dreyfus

In the framework of nonlinear process modeling, we propose training algorithms for feedback wavelet networks used as nonlinear dynamic models. An original initialization procedure is presented that takes the locality of the wavelet functions into account. Results obtained for the modeling of several processes are presented; a comparison with networks of neurons with sigmoidal functions is performed.
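As a concrete illustration of the kind of model described above, here is a minimal Python sketch (not the authors' code; all parameter values are illustrative assumptions) of a feedback wavelet network run in simulation mode: the regressor at each time step contains past model outputs and past inputs, and the output is a weighted sum of multidimensional Mexican-hat wavelets plus an affine term.

```python
# Minimal sketch of a feedback (dynamic) wavelet network; parameters are illustrative.
import numpy as np

def mexican_hat(z):
    """1-D mother wavelet psi(z) = (1 - z^2) * exp(-z^2 / 2)."""
    return (1.0 - z ** 2) * np.exp(-0.5 * z ** 2)

def wavelet_net(x, centers, dilations, weights, linear, bias):
    """Output of a wavelet network for one regressor vector x."""
    z = (x[None, :] - centers) / dilations          # (n_wavelets, n_inputs)
    psi = np.prod(mexican_hat(z), axis=1)           # product over input dimensions
    return weights @ psi + linear @ x + bias

# Feedback (simulation) mode: past predictions are fed back as regressors.
rng = np.random.default_rng(0)
n_wav, order = 5, 2                                  # 5 wavelets, 2 past outputs + 2 past inputs
n_in = 2 * order
centers   = rng.normal(size=(n_wav, n_in))
dilations = np.ones((n_wav, n_in))
weights   = rng.normal(scale=0.1, size=n_wav)
linear    = rng.normal(scale=0.1, size=n_in)
bias      = 0.0

u = np.sin(0.1 * np.arange(200))                     # hypothetical input sequence
y_hat = np.zeros(200)
for t in range(order, 200):
    x = np.concatenate([y_hat[t - order:t][::-1], u[t - order:t][::-1]])
    y_hat[t] = wavelet_net(x, centers, dilations, weights, linear, bias)
```

Gradient-based training, as proposed in the paper, would adjust the centers, dilations and weights from prediction errors; only the forward evaluation is sketched here.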


Neural Computation | 1993

Neural networks and nonlinear adaptive filtering: unifying concepts and new algorithms

O. Nerrand; Pierre Roussel-Ragot; L. Personnaz; Gérard Dreyfus

The paper proposes a general framework that encompasses the training of neural networks and the adaptation of filters. We show that neural networks can be considered as general nonlinear filters that can be trained adaptively, that is, that can undergo continual training with a possibly infinite number of time-ordered examples. We introduce the canonical form of a neural network. This canonical form permits a unified presentation of network architectures and of gradient-based training algorithms for both feedforward networks (transversal filters) and feedback networks (recursive filters). We show that several algorithms used classically in linear adaptive filtering, and some algorithms suggested by other authors for training neural networks, are special cases in a general classification of training algorithms for feedback networks.
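To make the "neural network as nonlinear adaptive filter" viewpoint concrete, the following is a minimal sketch, not the paper's algorithms: a one-hidden-layer network used as a nonlinear transversal filter and trained adaptively, with one stochastic-gradient update per incoming sample of a time-ordered stream. Network sizes, the learning rate and the desired signal are illustrative assumptions.

```python
# Adaptive (online) training of a small network acting as a nonlinear transversal filter.
import numpy as np

rng = np.random.default_rng(0)
window, hidden, lr = 4, 6, 0.05
W1 = rng.normal(scale=0.5, size=(hidden, window)); b1 = np.zeros(hidden)
W2 = rng.normal(scale=0.5, size=hidden);           b2 = 0.0

u = rng.normal(size=1000)                               # input stream
d = np.tanh(u) + 0.5 * np.roll(u, 1)                    # hypothetical desired output

for t in range(window - 1, len(u)):
    x = u[t - window + 1:t + 1][::-1]                   # transversal regressor: u[t], u[t-1], ...
    h = np.tanh(W1 @ x + b1)
    y = W2 @ h + b2
    e = d[t] - y                                        # instantaneous error
    # Gradients of 0.5 * e^2 (backpropagation through the two layers)
    gW2 = -e * h;  gb2 = -e
    delta = -e * W2 * (1.0 - h ** 2)
    gW1 = np.outer(delta, x);  gb1 = delta
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1
```

A feedback (recursive) filter would additionally feed past outputs back into the regressor, which is where the paper's canonical form and its classification of training algorithms come into play.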


IEEE Transactions on Neural Networks | 1992

Handwritten digit recognition by neural networks with single-layer training

Stefan Knerr; L. Personnaz; Gérard Dreyfus

It is shown that neural network classifiers with single-layer training can be applied efficiently to complex real-world classification problems such as the recognition of handwritten digits. The STEPNET procedure, which decomposes the problem into simpler subproblems that can be solved by linear separators, is introduced. Provided appropriate data representations and learning rules are used, performance comparable to that obtained by more complex networks can be achieved. Results from two different databases are presented: a European database comprising 8700 isolated digits and a zip code database from the US Postal Service comprising 9000 segmented digits. A hardware implementation of the classifier is briefly described.
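The following is a minimal sketch of the pairwise-decomposition idea (not the published STEPNET code): one linear separator is trained for each pair of classes with a single-layer delta rule, and a test example is assigned to the class that wins the most pairwise votes. The toy data and hyperparameters are illustrative assumptions.

```python
# Pairwise decomposition into linear separators with single-layer training.
import numpy as np
from itertools import combinations

def train_pairwise(X, y, classes, lr=0.01, epochs=20, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    seps = {}
    for a, b in combinations(classes, 2):
        mask = (y == a) | (y == b)
        Xp, tp = X[mask], np.where(y[mask] == a, 1.0, -1.0)
        w = rng.normal(scale=0.01, size=X.shape[1] + 1)
        Xb = np.hstack([Xp, np.ones((len(Xp), 1))])           # bias column
        for _ in range(epochs):                               # single-layer delta rule
            out = np.tanh(Xb @ w)
            w += lr * Xb.T @ ((tp - out) * (1.0 - out ** 2)) / len(Xb)
        seps[(a, b)] = w
    return seps

def predict(x, seps, classes):
    votes = dict.fromkeys(classes, 0)
    xb = np.append(x, 1.0)
    for (a, b), w in seps.items():
        votes[a if xb @ w > 0 else b] += 1
    return max(votes, key=votes.get)

# Toy usage with random 2-D "digits" in three classes
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc=c, size=(30, 2)) for c in range(3)])
y = np.repeat(np.arange(3), 30)
seps = train_pairwise(X, y, classes=range(3))
print(predict(np.array([2.0, 2.0]), seps, classes=range(3)))
```

Because each separator is trained on only two classes, each subproblem stays linear and small, which is the property the paper exploits for handwritten digits.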


Neurocomputing | 2000

Initialization by Selection for Wavelet Network Training

Yacine Oussar; Gérard Dreyfus

We present an original initialization procedure for the parameters of feedforward wavelet networks, prior to training by gradient-based techniques. It takes advantage of wavelet frames stemming from the discrete wavelet transform, and uses a selection method to determine a set of best wavelets whose centers and dilation parameters are used as initial values for subsequent training. Results obtained for the modeling of two simulated processes are compared to those obtained with a heuristic initialization procedure, and the effectiveness of the proposed method is demonstrated.
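Here is a minimal sketch of the "initialization by selection" idea as described in the abstract (illustrative, not the authors' implementation): candidate wavelets are drawn from a dyadic grid over the input range, and a greedy residual-based selection picks the centers and dilations that would initialize subsequent gradient training.

```python
# Greedy selection of candidate wavelets from a dyadic library (1-D illustration).
import numpy as np

def mexican_hat(z):
    return (1.0 - z ** 2) * np.exp(-0.5 * z ** 2)

def wavelet_library(x, levels=(1, 2, 4, 8)):
    """Candidate (center, dilation) pairs on a dyadic grid covering the data."""
    lo, hi = x.min(), x.max()
    cand = []
    for k in levels:
        d = (hi - lo) / k
        cand += [(lo + (j + 0.5) * d, d) for j in range(k)]
    return cand

def select_wavelets(x, y, n_select=3):
    cand = wavelet_library(x)
    Phi = np.stack([mexican_hat((x - m) / d) for m, d in cand], axis=1)
    chosen, used = [], set()
    residual = y - y.mean()
    for _ in range(n_select):
        score = (Phi.T @ residual) ** 2 / (np.sum(Phi ** 2, axis=0) + 1e-12)
        score[list(used)] = -np.inf                 # do not pick the same wavelet twice
        j = int(np.argmax(score)); used.add(j)
        chosen.append(cand[j])
        # Least-squares fit on the wavelets chosen so far, then update the residual
        A = np.stack([mexican_hat((x - m) / d) for m, d in chosen], axis=1)
        coef, *_ = np.linalg.lstsq(A, y - y.mean(), rcond=None)
        residual = y - y.mean() - A @ coef
    return chosen                                   # initial centers and dilations

# Toy usage on a simulated static process
x = np.linspace(-5, 5, 200)
y = np.sinc(x) + 0.05 * np.random.default_rng(0).normal(size=x.size)
print(select_wavelets(x, y, n_select=3))
```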


Neural Computation | 2002

Local overfitting control via leverages

Gaétan Monari; Gérard Dreyfus

We present a novel approach to dealing with overfitting in black box models. It is based on the leverages of the samples, that is, on the influence that each observation has on the parameters of the model. Since overfitting is the consequence of the model specializing on specific data points during training, we present a selection method for nonlinear models based on the estimation of leverages and confidence intervals. It allows both the selection among various models of equivalent complexities corresponding to different minima of the cost function (e.g., neural nets with the same number of hidden units) and the selection among models having different complexities (e.g., neural nets with different numbers of hidden units). A complete model selection methodology is derived.
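As a pointer to what a leverage is in practice, the following is a minimal sketch (illustrative assumptions, not the paper's full selection methodology). For a model that is linear in its parameters, the leverage of example i is the i-th diagonal entry of the hat matrix H = Z (Z^T Z)^(-1) Z^T; for a neural network, Z would be the Jacobian of the model output with respect to the parameters, evaluated at the training points.

```python
# Leverage computation from a design (or Jacobian) matrix Z.
import numpy as np

def leverages(Z):
    """Diagonal of the hat matrix, computed via a pseudo-inverse for stability."""
    return np.einsum("ij,ji->i", Z, np.linalg.pinv(Z.T @ Z) @ Z.T)

rng = np.random.default_rng(0)
Z = rng.normal(size=(50, 4))                          # 50 examples, 4 parameters
h = leverages(Z)
print(h.sum())                                        # equals the number of parameters (4)
print(np.where(h > 2 * Z.shape[1] / Z.shape[0])[0])   # a common rule of thumb for "influential" points
```

The paper goes further by combining leverages with confidence intervals to rank candidate models; only the basic leverage quantity is shown here.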


Neural Networks | 2007

A machine learning approach to the analysis of time-frequency maps, and its application to neural dynamics

François B. Vialatte; Claire Martin; Rémi Dubois; Joëlle Haddad; Brigitte Quenet; Rémi Gervais; Gérard Dreyfus

The statistical analysis of experimentally recorded brain activity patterns may require comparisons between large sets of complex signals in order to find meaningful similarities and differences between signals with large variability. High-level representations such as time-frequency maps convey a wealth of useful information, but they involve a large number of parameters that make statistical investigations of many signals difficult at present. In this paper, we describe a method that performs drastic reduction in the complexity of time-frequency representations through a modelling of the maps by elementary functions. The method is validated on artificial signals and subsequently applied to electrophysiological brain signals (local field potential) recorded from the olfactory bulb of rats while they are trained to recognize odours. From hundreds of experimental recordings, reproducible time-frequency events are detected, and relevant features are extracted, which allow further information processing, such as automatic classification.
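To give a rough feel for the complexity reduction, here is a loose illustration (not the authors' algorithm) in which a time-frequency map containing a single event is reduced to a handful of bump-like parameters computed from weighted moments; the signal parameters below are made up.

```python
# Reduce a (frequency x time) map to a few descriptive "bump" parameters.
import numpy as np

def dominant_bump(tfmap, times, freqs):
    """Summarise a map by weighted mean time/frequency, peak amplitude, and widths."""
    w = np.clip(tfmap, 0.0, None)
    w = w / w.sum()
    t0 = np.sum(w.sum(axis=0) * times)                        # mean time
    f0 = np.sum(w.sum(axis=1) * freqs)                        # mean frequency
    t_width = np.sqrt(np.sum(w.sum(axis=0) * (times - t0) ** 2))
    f_width = np.sqrt(np.sum(w.sum(axis=1) * (freqs - f0) ** 2))
    return t0, f0, tfmap.max(), t_width, f_width

# Toy map: a single Gaussian event at (t = 0.4 s, f = 30 Hz)
times = np.linspace(0, 1, 100); freqs = np.linspace(1, 60, 60)
T, F = np.meshgrid(times, freqs)
tfmap = np.exp(-((T - 0.4) / 0.05) ** 2 - ((F - 30) / 4) ** 2)
print(dominant_bump(tfmap, times, freqs))
```

Real recordings contain many events and much more variability, which is why the paper fits collections of elementary functions rather than global moments; this sketch only conveys the idea of replacing a map by a short parameter vector.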


IEEE Transactions on Biomedical Engineering | 2014

Bimodal BCI Using Simultaneously NIRS and EEG

Yohei Tomita; François B. Vialatte; Gérard Dreyfus; Yasue Mitsukura; Hovagim Bakardjian; Andrzej Cichocki

Although noninvasive brain-computer interfaces (BCI) based on electroencephalographic (EEG) signals have been studied increasingly over the recent decades, their performance is still limited in two important aspects. First, the difficulty of performing a reliable detection of BCI commands increases when EEG epoch length decreases, which makes high information transfer rates difficult to achieve. Second, the BCI system often misclassifies the EEG signals as commands, although the subject is not performing any task. In order to circumvent these limitations, the hemodynamic fluctuations in the brain during stimulation with steady-state visual evoked potentials (SSVEP) were measured using near-infrared spectroscopy (NIRS) simultaneously with EEG. BCI commands were estimated based on responses to a flickering checkerboard (ON-period). Furthermore, an “idle” command was generated from the signal recorded by the NIRS system when the checkerboard was not flickering (OFF-period). The joint use of EEG and NIRS was shown to improve the SSVEP classification. For 13 subjects, the relative improvement in error rates obtained by using the NIRS signal, for nine classes including the “idle” mode, ranged from 85% to 53% when the epoch length increased from 3 to 12 s. These results were obtained from only one EEG and one NIRS channel. The proposed bimodal NIRS-EEG approach, including detection of the idle mode, may make current BCI systems faster and more reliable.
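The following is a purely illustrative sketch of the bimodal idea (not the paper's pipeline): the EEG channel is scored by its spectral power at each candidate flicker frequency (SSVEP detection), while a slow NIRS signal is thresholded to decide whether the subject is in the "idle" (non-flickering) state. Sampling rates, frequencies and the threshold are all assumptions.

```python
# Toy bimodal decision: EEG spectral peak picks the command, NIRS level gates "idle".
import numpy as np

def ssvep_command(eeg, fs, flicker_freqs):
    """Return the index of the flicker frequency with the largest EEG power."""
    spectrum = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    powers = [spectrum[np.argmin(np.abs(freqs - f))] for f in flicker_freqs]
    return int(np.argmax(powers))

def classify_epoch(eeg, nirs, fs, flicker_freqs, nirs_idle_threshold=0.1):
    # Hemodynamic response is assumed weak when the checkerboard is not flickering
    if np.abs(nirs).mean() < nirs_idle_threshold:
        return "idle"
    return f"command_{ssvep_command(eeg, fs, flicker_freqs)}"

# Toy epoch: 3 s of EEG with an 8 Hz SSVEP, plus a simulated NIRS response
fs = 256
t = np.arange(0, 3, 1 / fs)
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 8 * t) + 0.5 * rng.normal(size=t.size)
nirs = 0.3 * (1 - np.exp(-t / 1.5))            # hypothetical hemodynamic rise
print(classify_epoch(eeg, nirs, fs, [6.0, 8.0, 10.0]))
```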


Neural Networks | 2001

How to be a gray box: dynamic semi-physical modeling

Yacine Oussar; Gérard Dreyfus

A general methodology for gray-box, or semi-physical, modeling is presented. This technique is intended to combine the best of two worlds: knowledge-based modeling, whereby mathematical equations are derived in order to describe a process, based on a physical (or chemical, biological, etc.) analysis, and black-box modeling, whereby a parameterized model is designed, whose parameters are estimated solely from measurements made on the process. The gray-box modeling technique is very valuable whenever a knowledge-based model exists, but is not fully satisfactory and cannot be improved by further analysis (or can only be improved at a very large computational cost). We describe the design methodology of a gray-box model, and illustrate it on a didactic example. We emphasize the importance of the choice of the discretization scheme used for transforming the differential equations of the knowledge-based model into a set of discrete-time recurrent equations. Finally, an application to a real, complex industrial process is presented.
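A minimal sketch of the gray-box idea follows (illustrative, not the paper's industrial case): a knowledge-based ODE, here assumed to be dy/dt = -k*y + u with an uncertain rate k, is discretized by explicit Euler into a discrete-time recurrent equation, and k is then estimated from process data. The discretization scheme is exactly the design choice the paper emphasizes.

```python
# Semi-physical model: Euler discretization of a knowledge-based ODE with one
# uncertain parameter k, estimated from (simulated) measurements.
import numpy as np

def simulate(k, u, y0, dt):
    y = np.empty(len(u) + 1); y[0] = y0
    for t in range(len(u)):
        y[t + 1] = y[t] + dt * (-k * y[t] + u[t])       # discrete-time recurrence
    return y

# "Measured" data generated with a true k = 0.8 (plus noise), then k re-estimated
rng = np.random.default_rng(0)
dt, u = 0.05, np.ones(200)
y_meas = simulate(0.8, u, y0=0.0, dt=dt) + 0.01 * rng.normal(size=201)

# One-parameter least squares by scanning candidate k values (simple and robust here)
candidates = np.linspace(0.1, 2.0, 200)
errors = [np.mean((simulate(k, u, 0.0, dt) - y_meas) ** 2) for k in candidates]
print("estimated k:", candidates[int(np.argmin(errors))])
```

In a full gray-box model, the terms of the recurrence that the physical analysis cannot specify would be replaced by a trainable black-box component such as a neural network.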


EPL | 1987

High-Order Neural Networks: Information Storage without Errors

L. Personnaz; I. Guyon; Gérard Dreyfus

A new learning rule is derived, which allows the perfect storage and retrieval of information and sequences in neural networks exhibiting high-order interactions between some or all neurons. Such interactions increase the storage capacity of the networks and make it possible to solve a class of problems that were intractable with standard networks. We show that it is possible to restrict the amount of high-order interactions while improving the attractivity of the stored patterns.
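To illustrate what a high-order interaction is (this is not the learning rule derived in the paper), the sketch below stores patterns in a third-order Hebbian-style tensor and recalls them with an asynchronous threshold dynamics driven by products of pairs of neuron states; network size and pattern count are illustrative.

```python
# Third-order associative memory: each neuron's field depends on products of pairs of states.
import numpy as np

rng = np.random.default_rng(0)
n, n_patterns = 16, 3
patterns = rng.choice([-1, 1], size=(n_patterns, n))

# Third-order "weights" T[i, j, k] = sum over stored patterns of s_i * s_j * s_k
T = np.einsum("pi,pj,pk->ijk", patterns, patterns, patterns) / n ** 2

def recall(state, steps=5):
    s = state.copy()
    for _ in range(steps):
        for i in range(n):                              # asynchronous updates
            field = np.einsum("jk,j,k->", T[i], s, s)   # second-order field on neuron i
            s[i] = 1 if field >= 0 else -1
    return s

# Start from a noisy version of the first stored pattern and check retrieval
noisy = patterns[0].copy(); noisy[:3] *= -1
print(np.array_equal(recall(noisy), patterns[0]))
```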


Journal of Chemical Information and Computer Sciences | 1998

Toward a principled methodology for neural network design and performance evaluation in QSAR. Application to the prediction of logP.

Arthur F. Duprat; T. Huynh; Gérard Dreyfus

The prediction of properties of molecules from their structure (QSAR) is basically a nonlinear regression problem. Neural networks have been proven to be parsimonious universal approximators of nonlinear functions; therefore, they are excellent candidates for performing the nonlinear regression tasks involved in QSAR. However, their full potential can be exploited only in the framework of a rigorous approach. In the present paper, we describe a principled methodology for designing neural networks for QSAR and estimating their performance, and we apply this approach to the prediction of logP. We compare our results to those obtained on the same molecules by other methods.
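As a bare-bones illustration of QSAR as nonlinear regression (with entirely synthetic descriptors and property values, not the paper's data or methodology), the sketch below fits a small one-hidden-layer network to descriptor vectors and reports performance on a held-out set.

```python
# Small neural regression from hypothetical molecular descriptors to a property value.
import numpy as np

rng = np.random.default_rng(0)
n_mol, n_desc, hidden, lr, epochs = 120, 5, 4, 0.05, 2000

X = rng.normal(size=(n_mol, n_desc))                   # hypothetical descriptors
logP = np.tanh(X[:, 0]) + 0.5 * X[:, 1] * X[:, 2]      # synthetic "true" property
X_tr, X_te, y_tr, y_te = X[:90], X[90:], logP[:90], logP[90:]

W1 = rng.normal(scale=0.3, size=(hidden, n_desc)); b1 = np.zeros(hidden)
W2 = rng.normal(scale=0.3, size=hidden);           b2 = 0.0

for _ in range(epochs):                                # batch gradient descent
    H = np.tanh(X_tr @ W1.T + b1)                      # (n_train, hidden)
    y_hat = H @ W2 + b2
    err = y_hat - y_tr
    gW2 = H.T @ err / len(err); gb2 = err.mean()
    delta = (err[:, None] * W2) * (1 - H ** 2)         # backprop through tanh
    gW1 = delta.T @ X_tr / len(err); gb1 = delta.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

test_rmse = np.sqrt(np.mean((np.tanh(X_te @ W1.T + b1) @ W2 + b2 - y_te) ** 2))
print("held-out RMSE:", test_rmse)
```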

Collaboration


Dive into Gérard Dreyfus's collaborations.

Top Co-Authors

J. Lewiner (École Normale Supérieure)
L. Personnaz (École Normale Supérieure)
Didier Perino (École Normale Supérieure)
Thomas Hueber (Centre national de la recherche scientifique)
Brigitte Quenet (École Normale Supérieure)