Publication


Featured research published by L. Personnaz.


Neurocomputing | 1998

Training wavelet networks for nonlinear dynamic input–output modeling

Yacine Oussar; Isabelle Rivals; L. Personnaz; Gérard Dreyfus

In the framework of nonlinear process modeling, we propose training algorithms for feedback wavelet networks used as nonlinear dynamic models. An original initialization procedure is presented that takes the locality of the wavelet functions into account. Results obtained for the modeling of several processes are presented; a comparison with networks of neurons with sigmoidal functions is performed.
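
As a rough illustration of the kind of model trained here, the following is a minimal single-input wavelet network in Python. The Mexican-hat mother wavelet and the locality-aware initialization are assumptions chosen for the sketch, not necessarily the paper's exact choices.

```python
import numpy as np

def mexican_hat(z):
    # Second derivative of a Gaussian ("Mexican hat"), a common mother
    # wavelet; the paper's exact wavelet family may differ.
    return (1.0 - z**2) * np.exp(-0.5 * z**2)

def wavenet_output(x, centers, dilations, weights, linear, bias):
    # Output = weighted sum of translated/dilated wavelets plus a
    # direct linear term.
    z = (x - centers) / dilations
    return weights @ mexican_hat(z) + linear * x + bias

# Locality-aware initialization (an assumption): spread the centers over
# the observed input range and size each dilation to the inter-center gap.
x_train = np.linspace(-1.0, 1.0, 200)
n_wavelets = 5
centers = np.linspace(x_train.min(), x_train.max(), n_wavelets)
dilations = np.full(n_wavelets, (x_train.max() - x_train.min()) / n_wavelets)
weights = np.zeros(n_wavelets)
print(wavenet_output(0.3, centers, dilations, weights, linear=0.0, bias=0.0))
```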


Neural Computation | 1993

Neural networks and nonlinear adaptive filtering: unifying concepts and new algorithms

O. Nerrand; Pierre Roussel-Ragot; L. Personnaz; Gérard Dreyfus

The paper proposes a general framework that encompasses the training of neural networks and the adaptation of filters. We show that neural networks can be considered as general nonlinear filters that can be trained adaptively, that is, that can undergo continual training with a possibly infinite number of time-ordered examples. We introduce the canonical form of a neural network. This canonical form permits a unified presentation of network architectures and of gradient-based training algorithms for both feedforward networks (transversal filters) and feedback networks (recursive filters). We show that several algorithms used classically in linear adaptive filtering, and some algorithms suggested by other authors for training neural networks, are special cases in a general classification of training algorithms for feedback networks.
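
A minimal instance of the canonical form with a single state variable, trained adaptively with a one-step, teacher-forced ("directed") gradient update. The one-state filter, the toy process, and all constants are assumptions made for brevity.

```python
import numpy as np

# Canonical form with one state variable x:
#   x(n+1) = tanh(a * x(n) + b * u(n)),  y(n) = x(n)
# adapted online by stochastic gradient, with the measured process
# output fed back as the state (a teacher-forced simplification of the
# kind the paper classifies).
rng = np.random.default_rng(0)
a_true, b_true = 0.7, 0.4        # unknown process parameters (toy example)
a, b = 0.0, 0.0                  # adaptive filter weights
xp = xm = 0.0                    # process and model states
lr = 0.1                         # learning rate

for n in range(5000):
    u = rng.uniform(-1, 1)
    xp = np.tanh(a_true * xp + b_true * u)   # process output (teacher)
    pre = a * xm + b * u
    xm_new = np.tanh(pre)                    # one-step-ahead prediction
    e = xp - xm_new                          # instantaneous error
    g = 1.0 - xm_new**2                      # d tanh / d pre
    a += lr * e * g * xm                     # one-step gradient update
    b += lr * e * g * u
    xm = xp                                  # feed measured output back
print(a, b)   # should approach a_true, b_true
```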


IEEE Transactions on Neural Networks | 1992

Handwritten digit recognition by neural networks with single-layer training

Stefan Knerr; L. Personnaz; Gérard Dreyfus

It is shown that neural network classifiers with single-layer training can be applied efficiently to complex real-world classification problems such as the recognition of handwritten digits. The STEPNET procedure, which decomposes the problem into simpler subproblems that can be solved by linear separators, is introduced. Provided appropriate data representations and learning rules are used, performance comparable to that obtained by more complex networks can be achieved. Results from two different databases are presented: a European database comprising 8700 isolated digits and a zip code database from the US Postal Service comprising 9000 segmented digits. A hardware implementation of the classifier is briefly described.
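
The pairwise decomposition at the heart of this approach can be sketched as follows. The least-squares linear separators and the synthetic Gaussian classes are stand-ins (STEPNET trains single-layer units with its own learning rules on digit data).

```python
import numpy as np
from itertools import combinations

def train_pairwise(X, y, classes):
    # One linear separator per pair of classes, fitted here by
    # least squares as a stand-in for single-layer training.
    seps = {}
    for i, j in combinations(classes, 2):
        mask = (y == i) | (y == j)
        Xi = np.hstack([X[mask], np.ones((mask.sum(), 1))])  # bias column
        t = np.where(y[mask] == i, 1.0, -1.0)
        w, *_ = np.linalg.lstsq(Xi, t, rcond=None)
        seps[(i, j)] = w
    return seps

def predict(x, seps, classes):
    # Each pairwise separator votes for one of its two classes.
    votes = dict.fromkeys(classes, 0)
    xb = np.append(x, 1.0)
    for (i, j), w in seps.items():
        votes[i if xb @ w > 0 else j] += 1
    return max(votes, key=votes.get)

# Toy data: three Gaussian clusters instead of digit images.
rng = np.random.default_rng(0)
centers = np.array([[0, 0], [3, 0], [0, 3]])
X = np.vstack([rng.normal(c, 0.5, size=(50, 2)) for c in centers])
y = np.repeat([0, 1, 2], 50)
seps = train_pairwise(X, y, classes=[0, 1, 2])
print(predict(np.array([2.8, 0.1]), seps, [0, 1, 2]))  # expect class 1
```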


IEEE Transactions on Neural Networks | 2000

Nonlinear internal model control using neural networks: application to processes with delay and design issues

Isabelle Rivals; L. Personnaz

We propose a design procedure for neural internal model control systems for stable processes with delay. We show that the design of such nonadaptive indirect control systems requires only the training of the inverse of the model deprived of its delay, so that the presence of the delay does not increase the order of the inverse. The controller is then obtained by cascading this inverse with a rallying model that imposes the regulation dynamics and ensures robust stability. A change in the desired regulation dynamics, or an improvement in stability, can be obtained simply by tuning the rallying model, without retraining the whole model reference controller. Since the robustness properties of internal model control systems hold only when the inverse is perfect, we detail the precautions that must be taken when training the inverse so that it is accurate over the whole region visited during operation with the process. In the same spirit, we emphasize neural models that are affine in the control input, whose perfect inverse is derived without training. The control of simulated processes illustrates the proposed design procedure and the properties of the neural internal model control system for processes with and without delay.
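
A structural sketch of the internal model control loop, using a linear toy process so that the delay-free inverse is exact in closed form; in the paper, the model and its inverse are neural networks. All names and numerical values below are illustrative.

```python
# IMC loop for a stable process with delay: the controller is the
# inverse of the delay-free model cascaded with a first-order
# "rallying" (reference) model; the feedback is the process/model
# mismatch.  Toy linear process: y(n+1) = a*y(n) + b*u(n-d).
a, b, d = 0.8, 0.2, 3          # process pole, gain, delay (samples)
alpha = 0.6                    # rallying-model pole: sets the
                               # regulation dynamics and robustness
setpoint = 1.0

yp = ym = ymd = r = 0.0        # process, delay-free model, delayed model
u_buf = [0.0] * d              # delay line for the control input
for n in range(60):
    e = yp - ymd                                   # mismatch feedback
    r = alpha * r + (1 - alpha) * (setpoint - e)   # rallying model
    u = (r - a * ym) / b           # exact inverse of delay-free model
    ym = a * ym + b * u            # delay-free internal model
    u_buf.append(u)
    ud = u_buf.pop(0)              # input delayed by d samples
    yp = a * yp + b * ud           # process
    ymd = a * ymd + b * ud         # internal model including the delay
print(round(yp, 3))                # approaches the setpoint
```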


IEEE Transactions on Neural Networks | 2003

Neural-network construction and selection in nonlinear modeling

Isabelle Rivals; L. Personnaz

We study how statistical tools that are commonly used independently can be exploited together to improve neural network estimation and selection in nonlinear static modeling. The tools we consider are the analysis of the numerical conditioning of the neural network candidates, statistical hypothesis tests, and cross validation. We present and analyze each of these tools in order to justify at which stage of a construction and selection procedure it is most useful. On the basis of this analysis, we propose a novel, systematic construction and selection procedure for neural modeling. We illustrate its efficiency through large-scale simulation experiments and real-world modeling problems.
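
The conditioning check can be sketched as follows. The polynomial design matrix (standing in for a neural candidate's Jacobian) and the 1/sqrt(machine epsilon) rejection threshold are assumptions for the illustration, not necessarily the paper's exact choices.

```python
import numpy as np

# Conditioning check for discarding over-parameterized candidates: if
# the Jacobian Z of the model output with respect to the weights is
# ill-conditioned at the least-squares solution, some parameters are
# superfluous.  Here Z is a polynomial design matrix; for a neural
# candidate it would be obtained by differentiating the network output.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
degree = 12
Z = np.vander(x, degree + 1)                 # columns: powers of x
cond = np.linalg.cond(Z)
# Rule-of-thumb threshold (an assumption): reject when the condition
# number approaches 1/sqrt(machine epsilon).
threshold = 1.0 / np.sqrt(np.finfo(float).eps)
print(f"cond(Z) = {cond:.2e}, reject = {cond > threshold}")
```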


Neural Networks | 2000

Construction of confidence intervals for neural networks based on least squares estimation

Isabelle Rivals; L. Personnaz

We present theoretical results on the construction of confidence intervals for a nonlinear regression based on least squares estimation and the linear Taylor expansion of the nonlinear model output. We stress the assumptions on which these results are based in order to derive an appropriate methodology for neural black-box modeling; the methodology is then analyzed and illustrated on simulated and real processes. We show that the linear Taylor expansion of a nonlinear model output also provides a tool to detect possible ill-conditioning of neural network candidates and to estimate their performance. Finally, we show that the approach based on least squares and the linear Taylor expansion compares favorably with other analytic approaches, and that it is an efficient and economical alternative to the nonanalytic, computationally intensive bootstrap methods.
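
A sketch of the linear Taylor expansion (LTE) confidence interval, assuming a toy exponential model and scipy's curve_fit for the least squares estimate; a neural model would only change how f and its gradient are computed.

```python
import numpy as np
from scipy import stats, optimize

def f(x, t1, t2):
    # illustrative nonlinear regression model
    return t1 * (1.0 - np.exp(-t2 * x))

def jac_row(x, t1, t2):
    # gradient of f with respect to (t1, t2)
    return np.array([1.0 - np.exp(-t2 * x), t1 * x * np.exp(-t2 * x)])

rng = np.random.default_rng(0)
x = np.linspace(0.1, 5, 40)
y = f(x, 2.0, 1.2) + rng.normal(scale=0.05, size=x.size)

theta, _ = optimize.curve_fit(f, x, y, p0=[1.0, 1.0])  # least squares fit
res = y - f(x, *theta)
N, p = x.size, theta.size
s2 = res @ res / (N - p)                       # residual variance estimate
J = np.array([jac_row(xi, *theta) for xi in x])  # N x p Jacobian
JtJ_inv = np.linalg.inv(J.T @ J)

# 95% LTE confidence interval for the model output at a new input x0
x0 = 2.5
g = jac_row(x0, *theta)
half = stats.t.ppf(0.975, N - p) * np.sqrt(s2 * g @ JtJ_inv @ g)
print(f"f(x0) = {f(x0, *theta):.3f} +/- {half:.3f}")
```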


Neural Computation | 1999

On cross validation for model selection

Isabelle Rivals; L. Personnaz

In response to Zhu and Rohwer (1996), a recent communication (Goutte, 1997) established that leave-one-out cross validation is not subject to the no-free-lunch criticism. Despite this optimistic conclusion, we show here that cross validation performs very poorly for the selection of linear models compared to classic statistical tests. We conclude that statistical tests are preferable to cross validation for linear as well as nonlinear model selection.
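
For linear least squares, leave-one-out cross validation is exact and cheap thanks to the standard PRESS identity, which is what makes comparisons like the one above easy to run. A minimal sketch on synthetic data:

```python
import numpy as np

# Exact leave-one-out cross validation for a linear model: the LOO
# residuals follow from the ordinary residuals and the hat matrix,
# with no refitting (the PRESS identity).
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(50), rng.uniform(-1, 1, 50)])
y = X @ np.array([0.5, 2.0]) + rng.normal(scale=0.1, size=50)

H = X @ np.linalg.inv(X.T @ X) @ X.T       # hat matrix
res = y - H @ y                            # ordinary residuals
loo = res / (1.0 - np.diag(H))             # exact LOO residuals
print("LOO mean squared error:", np.mean(loo**2))
```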


EPL | 1987

High-Order Neural Networks: Information Storage without Errors

L. Personnaz; I. Guyon; Gérard Dreyfus

A new learning rule is derived that allows the perfect storage and retrieval of information and sequences in neural networks exhibiting high-order interactions between some or all neurons. Such interactions increase the storage capacity of the networks and make it possible to solve a class of problems that are intractable with standard networks. We show that it is possible to restrict the amount of high-order interactions while improving the attractivity of the stored patterns.
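
A minimal instance of the idea: the projection (pseudoinverse) rule, which guarantees exact storage, applied to a state augmented with pairwise products as a simple form of second-order interaction. The expansion choice and sizes are illustrative, not the paper's general construction.

```python
import numpy as np
from itertools import combinations

def expand(s):
    # state augmented with all pairwise products s_i * s_j, i < j
    pairs = [s[i] * s[j] for i, j in combinations(range(len(s)), 2)]
    return np.concatenate([s, pairs])

rng = np.random.default_rng(0)
n, m = 8, 4
patterns = rng.choice([-1.0, 1.0], size=(m, n))       # patterns to store
S = np.stack([expand(p) for p in patterns], axis=1)   # expanded states
W = patterns.T @ np.linalg.pinv(S)                    # projection rule:
                                                      # W @ expand(p) = p
for p in patterns:
    assert np.allclose(np.sign(W @ expand(p)), p)     # perfect retrieval
print("all stored patterns are fixed points")
```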


Neurocomputing | 1998

A recursive algorithm based on the extended Kalman filter for the training of feedforward neural models

Isabelle Rivals; L. Personnaz

The extended Kalman filter (EKF) is a well-known tool for the recursive parameter estimation of static and dynamic nonlinear models. In particular, the EKF has been applied to estimating the weights of feedforward and recurrent neural network models, i.e. to their training, and shown to be more efficient than recursive and nonrecursive first-order training algorithms; nevertheless, these first applications to the training of neural networks did not fully exploit the potential of the EKF. In this paper, we analyze the specific influence of the EKF parameters for modeling problems and propose a variant of this algorithm for the training of feedforward neural models that proves very efficient compared to nonrecursive second-order algorithms. We test the proposed EKF algorithm on several static and dynamic modeling problems, some of them benchmark problems, which bring out the properties of the proposed algorithm.
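
A sketch of EKF-based weight estimation for a tiny feedforward model: the weights are the EKF state and the network is the nonlinear measurement equation. The one-hidden-unit model, the toy target, and the noise levels q and r are assumptions; q and r stand for the kind of algorithm parameters whose influence the paper analyzes.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(scale=0.5, size=2)   # weights = EKF state estimate
P = np.eye(2) * 100.0               # parameter covariance (large: uninformed)
q, r = 1e-6, 1e-2                   # process / measurement noise levels

def net(w, x):
    # one-hidden-unit model: y = w2 * tanh(w1 * x)
    return np.tanh(w[0] * x) * w[1]

def grad(w, x):
    h = np.tanh(w[0] * x)
    return np.array([(1.0 - h**2) * x * w[1], h])   # d net / d w

for n in range(2000):
    x = rng.uniform(-1.0, 1.0)
    d = np.tanh(0.9 * x) * 1.5 + rng.normal(scale=0.05)  # toy target
    H = grad(w, x)                   # linearized measurement row
    P = P + q * np.eye(2)            # time update of the covariance
    K = P @ H / (H @ P @ H + r)      # Kalman gain (scalar output)
    w = w + K * (d - net(w, x))      # measurement update of the weights
    P = P - np.outer(K, H) @ P       # covariance update
print(w)   # close to [0.9, 1.5], up to the sign symmetry of tanh
```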


IEEE Transactions on Neural Networks | 1992

Specification and implementation of a digital Hopfield-type associative memory with on-chip training

Anne Johannet; L. Personnaz; Gérard Dreyfus; Jean-Dominique Gascuel; Michel Weinfeld

This paper addresses the requirements for the design of a neural network associative memory, with on-chip training, in standard digital CMOS technology. Various learning rules that can be integrated in silicon and the associative memory properties of the resulting networks are investigated. The relationships between the architecture of the circuit and the learning rule are studied in order to minimize the extra circuitry required for the implementation of training. A 64-neuron associative memory with on-chip training has been manufactured, and its future extensions are outlined. Beyond the application to the specific circuit described, the general methodology for determining the accuracy requirements can be applied to other circuits and to other autoassociative memory architectures.
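
The weight-accuracy question can be explored in simulation along these lines: store patterns, quantize the weights to a given number of bits, and check whether retrieval survives. The projection learning rule and the uniform fixed-point quantizer are assumptions; only the 64-neuron size comes from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 64, 10
patterns = rng.choice([-1.0, 1.0], size=(m, n))   # patterns to store
W = patterns.T @ np.linalg.pinv(patterns.T)       # projection rule

def quantize(W, bits):
    # uniform symmetric fixed-point quantizer (illustrative)
    step = np.abs(W).max() / (2 ** (bits - 1) - 1)
    return np.round(W / step) * step

for bits in (2, 4, 8):
    Wq = quantize(W, bits)
    ok = all(np.array_equal(np.sign(Wq @ p), p) for p in patterns)
    print(f"{bits}-bit weights: perfect retrieval = {ok}")
```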

Collaboration


L. Personnaz's top co-authors.

I. Guyon
École Normale Supérieure

O. Nerrand
École Normale Supérieure

Stefan Knerr
École Normale Supérieure

Claudine Masson
Centre national de la recherche scientifique

Anne Johannet
École Normale Supérieure