Pierre Roussel-Ragot
École Normale Supérieure
Publications
Featured research published by Pierre Roussel-Ragot.
Neural Computation | 1993
O. Nerrand; Pierre Roussel-Ragot; L. Personnaz; Gérard Dreyfus
The paper proposes a general framework that encompasses the training of neural networks and the adaptation of filters. We show that neural networks can be considered as general nonlinear filters that can be trained adaptively, that is, that can undergo continual training with a possibly infinite number of time-ordered examples. We introduce the canonical form of a neural network. This canonical form permits a unified presentation of network architectures and of gradient-based training algorithms for both feedforward networks (transversal filters) and feedback networks (recursive filters). We show that several algorithms used classically in linear adaptive filtering, and some algorithms suggested by other authors for training neural networks, are special cases in a general classification of training algorithms for feedback networks.
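To make the adaptive-training idea concrete, here is a minimal sketch (an illustration of the general idea only, not the authors' code or their canonical form): a small feedforward network fed by a tapped delay line of the input signal acts as a nonlinear transversal filter and is updated with one gradient step per new time-ordered sample.

```python
# Illustrative sketch (assumed setup, not the paper's): a feedforward network
# driven by a sliding window of the input acts as a nonlinear transversal
# filter and is trained adaptively, one gradient step per incoming sample.
import numpy as np

rng = np.random.default_rng(0)
N_TAPS, N_HIDDEN, LR = 5, 8, 0.05

# Network parameters: one hidden tanh layer, linear output neuron.
W1 = 0.1 * rng.standard_normal((N_HIDDEN, N_TAPS))
b1 = np.zeros(N_HIDDEN)
w2 = 0.1 * rng.standard_normal(N_HIDDEN)
b2 = 0.0

def predict(x):
    """Forward pass for one input window x of length N_TAPS."""
    h = np.tanh(W1 @ x + b1)
    return h, w2 @ h + b2

# Toy process to identify (invented for illustration).
u = rng.standard_normal(2000)
for k in range(N_TAPS, len(u)):
    x = u[k - N_TAPS:k][::-1]                     # tapped delay line X(k)
    d = np.tanh(0.6 * u[k - 1] - 0.3 * u[k - 2])  # desired output d(k)
    h, y = predict(x)
    e = d - y                                     # output error e(k)
    # One stochastic-gradient step on the squared error (continual training).
    grad_h = e * w2 * (1.0 - h ** 2)
    w2 += LR * e * h
    b2 += LR * e
    W1 += LR * np.outer(grad_h, x)
    b1 += LR * grad_h
```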
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems | 1990
Pierre Roussel-Ragot; Gérard Dreyfus
A problem-independent parallel implementation of simulated annealing is presented; the proposed implementation is guaranteed to exhibit the same convergence behavior as the serial algorithm. Two models of parallelization, depending on the value of the temperature, are introduced, and statistical models that can predict the speedup for any problem, as a function of the acceptance rate and of the number of processors, are derived. The performance is evaluated on a simple placement problem with a transputer-based network, and the models are compared with experiments.
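As a hedged sketch of the kind of statistical speedup model described (the formula below is an assumption made for illustration, not necessarily the paper's exact model): if the serial algorithm needs on average 1/χ trials per accepted move, where χ is the acceptance rate, and p processors evaluate independent trials in parallel, the expected speedup can be estimated as follows.

```python
# Hedged sketch (assumed model, not the paper's exact formulas): predicted
# speedup of a "multiple trials" parallelization of simulated annealing.
# Assumption: a round of p independent parallel trials yields at least one
# acceptance with probability 1 - (1 - chi)**p.
def predicted_speedup(chi: float, p: int) -> float:
    """Estimated speedup as a function of acceptance rate chi and processor count p."""
    serial_trials_per_acceptance = 1.0 / chi
    parallel_rounds_per_acceptance = 1.0 / (1.0 - (1.0 - chi) ** p)
    return serial_trials_per_acceptance / parallel_rounds_per_acceptance

# At low temperature (low acceptance rate) the estimate approaches p;
# at high acceptance rate it degrades towards 1.
for chi in (0.9, 0.5, 0.1, 0.01):
    print(chi, [round(predicted_speedup(chi, p), 2) for p in (2, 4, 8, 16)])
```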
International Journal of Circuit Theory and Applications | 1992
Sylvie Marcos; Odile Macchi; Christophe Vignat; Gérard Dreyfus; L. Personnaz; Pierre Roussel-Ragot
In this paper we present in a unified framework the gradient algorithms employed in the adaptation of linear time filters (TF) and the supervised training of (non-linear) neural networks (NN). The optimality criteria used to optimize the parameters H of the filter or network are the least squares (LS) and least mean squares (LMS) criteria in both contexts. They respectively minimize the total or the mean square of the error e(k) between an (output) reference sequence d(k) and the actual system output y(k) corresponding to the input X(k). Minimization is performed iteratively by a gradient algorithm. The index k in (TF) is time and it runs indefinitely, so iterations start as soon as reception of X(k) begins. The recursive algorithm for the adaptation H(k − 1) → H(k) of the parameters is implemented each time a new input X(k) is observed. When training a (NN) with a finite number of examples, the index k denotes the example and it is upper-bounded. Iterative (block) algorithms wait until all K examples are received before updating the network. However, since K is frequently very large, recursive algorithms are also often preferred in (NN) training, but they raise the question of ordering the examples X(k). Except in the specific case of a transversal filter, there is no general recursive technique for optimizing the LS criterion. However, X(k) is normally a stationary random sequence; thus LS and LMS are equivalent when k becomes large. Moreover, the LMS criterion can always be minimized recursively with the help of the stochastic LMS gradient algorithm, which has low computational complexity. In (TF), X(k) is a sliding window of (time) samples, whereas in the supervised training of (NN) with arbitrarily ordered examples, X(k − 1) and X(k) have nothing to do with each other. When this (major) difference is removed by feeding a time signal into the network input, the recursive algorithms recently developed for (NN) training become similar to those of adaptive filtering. In this context the present paper displays the similarities between adaptive cascaded linear filters and trained multilayer networks. It is also shown that there is a close similarity between adaptive recursive filters and neural networks including feedback loops. The classical filtering approach is to evaluate the gradient by ‘forward propagation’, whereas the most popular (NN) training method uses a gradient backward propagation method. We show that when a linear (TF) problem is implemented by an (NN), the two approaches are equivalent. However, the backward method can be used for more general (non-linear) filtering problems. Conversely, new insights can be gained in the (NN) context by the use of a gradient forward computation. The advantage of the (NN) framework, and in particular of the gradient backward propagation approach, is evidently to have a much larger spectrum of applications than (TF), since (i) the inputs are arbitrary and (ii) the (NN) can perform non-linear (TF).
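For illustration (a minimal sketch, not taken from the paper), the stochastic LMS gradient update H(k) = H(k − 1) + μ e(k) X(k) for a transversal filter coincides with the online training of a single linear neuron whose input X(k) is a sliding window of the signal:

```python
# Minimal sketch (illustrative, invented data): the stochastic LMS gradient
# update H(k) = H(k-1) + mu * e(k) * X(k) for a transversal filter, which is
# also the online training rule of a single linear neuron fed by X(k).
import numpy as np

rng = np.random.default_rng(1)
N_TAPS, MU = 4, 0.02
H = np.zeros(N_TAPS)                      # filter coefficients / neuron weights

h_true = np.array([0.5, -0.3, 0.2, 0.1])  # unknown system to identify (toy)
u = rng.standard_normal(5000)

for k in range(N_TAPS, len(u)):
    X = u[k - N_TAPS:k][::-1]             # sliding window X(k)
    d = h_true @ X                        # reference output d(k)
    y = H @ X                             # filter/neuron output y(k)
    e = d - y                             # error e(k)
    H = H + MU * e * X                    # stochastic LMS gradient step

print(np.round(H, 3))                     # should approach h_true
```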
ieee workshop on neural networks for signal processing | 1994
D. Urbani; Pierre Roussel-Ragot; L. Personnaz; Gérard Dreyfus
A procedure for the selection of neural models of dynamical processes is presented. It uses statistical tests at various levels of model reduction, in order to provide optimal tradeoffs between accuracy and parsimony. The efficiency of the method is illustrated by the modeling of a highly nonlinear NARX process.
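A hedged sketch of the underlying idea, using a classical F-test between nested linear-in-the-parameters models as a stand-in (the paper's procedure applies statistical tests to neural model reduction; the data, lags, and thresholds below are invented for illustration):

```python
# Hedged sketch (assumed, simplified setup): decide between a full and a
# reduced NARX-like model by testing whether the extra regressors
# significantly reduce the residual variance (accuracy vs. parsimony).
import numpy as np
from scipy.stats import f as f_dist

rng = np.random.default_rng(2)

# Toy data: y(k) actually depends only on y(k-1) and u(k-1).
u = rng.standard_normal(500)
y = np.zeros(500)
for k in range(2, 500):
    y[k] = 0.6 * y[k - 1] + 0.4 * u[k - 1] + 0.05 * rng.standard_normal()

def fit_rss(regressors):
    """Least-squares fit; return residual sum of squares and parameter count."""
    X = np.column_stack(regressors)
    theta, *_ = np.linalg.lstsq(X, y[2:], rcond=None)
    r = y[2:] - X @ theta
    return float(r @ r), X.shape[1]

# Full model uses lags y(k-1), y(k-2), u(k-1), u(k-2); the reduced model drops lag 2.
rss_full, p_full = fit_rss([y[1:-1], y[:-2], u[1:-1], u[:-2]])
rss_red,  p_red  = fit_rss([y[1:-1], u[1:-1]])

n = len(y) - 2
F = ((rss_red - rss_full) / (p_full - p_red)) / (rss_full / (n - p_full))
p_value = f_dist.sf(F, p_full - p_red, n - p_full)
# A large p_value suggests the reduced (more parsimonious) model is sufficient.
print(F, p_value)
```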
international symposium on neural networks | 1991
O. Nerrand; Pierre Roussel-Ragot; L. Personnaz; Gérard Dreyfus; Sylvie Marcos; Odile Macchi; Christophe Vignat
There is a wide variety of cost functions, techniques for estimating their gradient, and adaptive algorithms for updating the coefficients of neural networks used as nonlinear adaptive filters. The authors discuss the various algorithms which result from various choices of criteria and of gradient estimation techniques. New algorithms are introduced, and the relations between the present work, the real-time recurrent learning algorithm, and the teacher forcing technique are discussed. The authors show that the training algorithms suggested recently for feedback networks are very closely related to, and in some cases identical to, the algorithms used classically for adapting recursive filters.
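As a purely illustrative example of one such choice, the sketch below trains a first-order recurrent neural filter in teacher-forcing (directed) mode, where the feedback input at time k is the desired output d(k − 1) rather than the network's own previous output; this is an assumed minimal setup, not the authors' algorithms.

```python
# Hedged illustration (assumed setup): teacher forcing for a first-order
# recurrent neural filter y(k) = tanh(w_u*u(k) + w_y*s(k-1)), where the
# feedback input s(k-1) is replaced by the desired output d(k-1) during training.
import numpy as np

rng = np.random.default_rng(3)
w_u, w_y, LR = 0.0, 0.0, 0.05

# Toy recursive process to identify (invented for illustration).
u = rng.standard_normal(3000)
d = np.zeros(3000)
for k in range(1, 3000):
    d[k] = np.tanh(0.8 * u[k] + 0.5 * d[k - 1])

for k in range(1, 3000):
    s_prev = d[k - 1]                    # teacher forcing: desired feedback value
    y = np.tanh(w_u * u[k] + w_y * s_prev)
    e = d[k] - y                         # error e(k)
    grad = e * (1.0 - y ** 2)            # descent direction for 0.5*e**2
    w_u += LR * grad * u[k]
    w_y += LR * grad * s_prev

print(round(w_u, 2), round(w_y, 2))      # should move towards 0.8 and 0.5
```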
Neural Networks for Signal Processing II Proceedings of the 1992 IEEE Workshop | 1992
Gérard Dreyfus; O. Macchi; S. Marcos; O. Nerrand; L. Personnaz; Pierre Roussel-Ragot; D. Urbani; C. Vignat
The authors propose a general framework which encompasses the training of neural networks and the adaptation of filters. It is shown that neural networks can be considered as general nonlinear filters which can be trained adaptively, i.e., which can undergo continual training. A unified view of gradient-based training algorithms for feedback networks is proposed, which gives rise to new algorithms. The use of some of these algorithms is illustrated by examples of nonlinear adaptive filtering and process identification.
parallel problem solving from nature | 1990
Pierre Roussel-Ragot; Nadia Kouicem; Gérard Dreyfus
Simulated Annealing is a powerful optimization method which has been applied to a number of industrial problems; however, it often requires large computation times. One way to overcome this drawback is the design of finely tuned cooling schedules, which may ensure a fast convergence of the sequential algorithm towards near-optimal solutions. Another way of decreasing the computation time is the use of parallel implementations. Obviously, both approaches can be combined; however, this is possible only if the parallel implementation of the algorithm exhibits the same convergence behaviour as the sequential one. In the present paper, we show that the previously proposed parallel algorithms deviate from the sequential simulated annealing algorithm, and we suggest a problem-independent parallel implementation which is guaranteed to exhibit the same convergence behaviour as the sequential one. We introduce various modes of parallelization, depending on the value of the acceptance rate, and we derive statistical models which can predict the speedup for any problem, as a function of the acceptance rate, of the number of processes and of the time characteristics of the annealing. The performances are evaluated on a simple placement problem with a Transputer-based network, and the analytical models are compared with experiments.
IEEE Transactions on Neural Networks | 1994
Olivier Nerrand; Pierre Roussel-Ragot; Dominique Urbani; L. Personnaz; Gérard Dreyfus
arXiv: Computer Vision and Pattern Recognition | 2015
Aurore Jaumard-Hakoun; Kele Xu; Pierre Roussel-Ragot; Gérard Dreyfus; Maureen Stone; Bruce Denby
TS. Traitement du signal | 1991
Sylvie Marcos; Pierre Roussel-Ragot; L. Personnaz; O. Nerrand; Gérard Dreyfus; Christophe Vignat