Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Gaetan Libert is active.

Publication


Featured research published by Gaetan Libert.


IEEE Transactions on Biomedical Engineering | 1996

A dynamic neural network identification of electromyography and arm trajectory relationship during complex movements

Guy Cheron; Jean-Philippe Draye; M. Bourgeois; Gaetan Libert

The authors propose a new approach based on dynamic recurrent neural networks (DRNN) to identify, in humans, the relationship between muscle electromyographic (EMG) activity and arm kinematics during the drawing of a figure eight with an extended arm. After learning, the DRNN simulations showed the efficiency of the model. The authors demonstrated its ability to generalize to unlearned movements. They tested its physiological plausibility by computing the error velocity vectors produced when small artificial lesions were introduced into the EMG signals. These lesion experiments demonstrated that the DRNN had identified the preferential direction of the physiological action of the studied muscles. The network also identified neural constraints such as the covariation between geometrical and kinematic parameters of the movement. This suggests that the information in raw EMG signals is largely representative of the kinematics stored in the central motor pattern. Moreover, the DRNN approach should allow one to dissociate the feedforward command (central motor pattern) from the feedback effects of muscles, skin and joints.
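As a rough illustration of the model class (not the authors' exact implementation), a DRNN unit integrates its input through its own per-neuron time constant. A minimal sketch in Python; the names (simulate_drnn, W, tau, x) are illustrative:

import numpy as np

def simulate_drnn(W, tau, x, dt=0.01, steps=1000):
    """Euler-integrate tau_i * dy_i/dt = -y_i + sigmoid(W y + x(t))_i."""
    y = np.zeros(W.shape[0])
    trajectory = []
    for t in range(steps):
        s = 1.0 / (1.0 + np.exp(-(W @ y + x(t))))  # sigmoid activation
        y = y + dt * (-y + s) / tau                # leaky integration, per-unit tau
        trajectory.append(y.copy())
    return np.array(trajectory)

For the identification task described above, x(t) would carry the EMG signals and the output units would be read out as arm kinematics.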


IEEE Transactions on Systems, Man, and Cybernetics | 1996

Dynamic recurrent neural networks: a dynamical analysis

Jean-Philippe Draye; Davor Pavisic; Guy Cheron; Gaetan Libert

In this paper, we explore the dynamical features of a neural network model that presents two types of adaptive parameters: the classical weights between the units and the time constants associated with each artificial neuron. The purpose of this study is to provide a strong theoretical basis for modeling and simulating dynamic recurrent neural networks. To achieve this, we study the effect of the statistical distribution of the weights and of the time constants on the network dynamics, and we make a statistical analysis of the neural transformation. We examine the network power spectra (to draw some conclusions about the frequency behaviour of the network) and we compute the stability regions to explore the stability of the model. We show that the network is sensitive to variations in the mean values of the weights and the time constants (because of the temporal aspects of the learned tasks). Nevertheless, our results highlight the improvements in the network dynamics due to the introduction of adaptive time constants and indicate that dynamic recurrent neural networks can bring powerful new features to the field of neural computing.
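For reference, the model class analyzed here is commonly written (symbols illustrative, consistent with the abstract) as

    tau_i · dy_i/dt = −y_i + σ(Σ_j w_ij · y_j + I_i),

where both the weights w_ij and the time constants tau_i are adapted. A fixed point y* satisfies y*_i = σ(Σ_j w_ij · y*_j + I_i), and local stability follows from the Jacobian J_ik = (−δ_ik + σ′(·) · w_ik)/tau_i, which is how the mean values of the weights and time constants enter the stability regions.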


Conference on Decision and Control | 1992

Kalman filter algorithm based on singular value decomposition

L. Wang; Gaetan Libert; Pierre Manneback

An algorithm for the discrete-time linear filtering problem is developed. The crucial component of this algorithm involves the computation of the singular value decomposition (SVD) of an unsymmetric matrix without explicitly forming its left factor, which has a high dimension. The algorithm has good numerical stability and can handle correlated measurement noise without any additional transformation. Since the algorithm is formulated in terms of vector-matrix and matrix-matrix operations, it is also well suited to parallel computers. A numerical example is given.
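A hedged sketch of the core idea (variable names illustrative, not the paper's notation): if P = U diag(D)² Uᵀ, the predicted covariance P⁻ = F P Fᵀ + Q can be obtained from the SVD of a stacked 2n×n pre-array whose large left factor is simply discarded.

import numpy as np

def svd_time_update(U, D, F, Q_sqrt):
    """Factors of P- = F P F^T + Q, given P = U @ diag(D**2) @ U.T."""
    A = np.vstack([np.diag(D) @ U.T @ F.T,  # contributes F P F^T
                   Q_sqrt.T])               # contributes Q = Q_sqrt @ Q_sqrt.T
    # Only the right factor is needed: A^T A = V diag(s**2) V^T = P-
    _, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt.T, s

The measurement update can be organized the same way, which is what gives the filter its square-root-like numerical stability.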


Biological Cybernetics | 1997

Emergence of clusters in the hidden layer of a dynamic recurrent neural network

Jean-Philippe Draye; Guy Chéron; Gaetan Libert; Emile Godaux

The neural integrator of the oculomotor system is a privileged field for artificial neural network simulation. In this paper, we were interested in improving the biologically plausible features of the Arnold-Robinson network. This improvement was achieved by fixing the sign of the connection weights in the network (in order to respect the biological Dale's law). We also introduced a notion of distance in the network, in the form of transmission delays between its units. These modifications necessitated the introduction of a general supervisor in order to train the network to act as a leaky integrator. When examining the lateral connection weights of the hidden layer, the distribution of the weight values was found to exhibit a conspicuous structure: the high-value weights were grouped in what we call clusters, while other zones are quite flat and characterized by low-value weights. Clusters are defined as particular groups of adjoining neurons which have strong and privileged connections with another neighborhood of neurons. The clusters of the trained network are reminiscent of the small clusters or patches that have been found experimentally in the nucleus prepositus hypoglossi, where the neural integrator is located. A study was conducted to determine the conditions of emergence of these clusters in our network: they include the fixing of the weight signs, the introduction of a distance, and a convergence of the information from the hidden layer to the motoneurons. We conclude that this spontaneous emergence of clusters in artificial neural networks performing a temporal integration is due to computational constraints with a restricted space of solutions. Thus, information processing could induce the emergence of iterated patterns in biological neural networks.
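A minimal sketch (an assumption about the mechanics, not the paper's exact procedure) of fixing weight signs to respect Dale's law: tag each presynaptic unit excitatory (+1) or inhibitory (−1), and clamp wrong-signed weights to zero after every gradient step.

import numpy as np

def enforce_dale(W, sign):
    """sign[j] = +1 (excitatory) or -1 (inhibitory) for presynaptic unit j."""
    # Column j holds unit j's outgoing weights; zero out any entry whose
    # sign disagrees with the unit's fixed type.
    return np.where(W * sign[np.newaxis, :] >= 0.0, W, 0.0)

# usage after a hypothetical gradient step:
# W = enforce_dale(W - lr * grad, sign)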


Proceedings IWISP '96, 4–7 November 1996, Manchester, United Kingdom | 1996

Predicting a Chaotic Time Series Using a Dynamical Recurrent Neural Network

Roberto Teran; Jean-Philippe Draye; Davor D. Pavisic; Gustavo Calderon; Gaetan Libert

Recurrent neural networks have shown better performance than traditional feedforward networks: they are able to learn attractor dynamics and can store information for later use. This chapter presents two kinds of recurrent neural networks for time series forecasting, both of which associate a time constant with each neuron. Dynamic recurrent neural networks (DRNN) enhance the capabilities of static recurrent neural networks (SRNN), especially in handling time-dependent problems or temporal tasks. The main difference between DRNN and SRNN is that DRNN use an adaptive time constant associated with each neuron. These time constants act as a linear filter, so a DRNN can be regarded as a FIR-like network with recurrent connections. SRNN have limited storage capabilities and may be ill-suited to difficult time series. DRNN have more parameters than SRNN, so implementing dynamical systems with chaotic behavior requires a suitable training algorithm; here the network is trained using a modified version of the standard backpropagation algorithm called time-dependent recurrent backpropagation.
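To make the "time constants act as a linear filter" remark concrete: discretizing tau_i · dy_i/dt = −y_i + s_i with step dt gives y_i[t+1] = (1 − a_i) · y_i[t] + a_i · s_i[t] with a_i = dt/tau_i, i.e. each adaptive time constant is a per-neuron low-pass filter coefficient. A one-step sketch (names illustrative):

import numpy as np

def drnn_step(y, W, tau, u, dt=0.1):
    s = np.tanh(W @ y + u)        # instantaneous recurrent activation
    a = dt / tau                  # per-neuron filter coefficient
    return (1.0 - a) * y + a * s  # filtered (leaky) state update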


Artificial Intelligence in Medicine in Europe | 1997

Improved Identification of the Human Shoulder Kinematics with Muscle Biological Filters

Jean-Philippe Draye; Guy Cheron; Davor D. Pavisic; Gaetan Libert

In this paper, we introduce new refinements to the approach based on dynamic recurrent neural networks (DRNN) for identifying, in humans, the relationship between muscle electromyographic (EMG) activity and arm kinematics during the drawing of a figure eight with an extended arm. This method of identification allows the role of each muscle in any particular movement to be clearly interpreted.


Neurocomputing | 1997

An inhibitory weight initialization improves the speed and quality of recurrent neural networks learning

Jean-Philippe Draye; Davor Pavisic; Guy Cheron; Gaetan Libert

We consider here the impact of the initial weight distribution on network behavior. The convention in the neural network field is to choose initial weights uniformly distributed between −α and +α, with α usually set to 0.5 or less. In this paper we explore the effect that a negative-mean initial weight distribution has on the learning time and learning quality of recurrent neural networks. First, we briefly introduce the recurrent models and the learning algorithms used in this research. Experimental results are then presented to highlight the dependence of the learning phase on the mean initial weight value. We also use a frequency analysis to quantify the influence of the mean initial weight value on the dynamical characteristics of recurrent networks. Finally, we offer a statistical analysis of the neural transformation to show mathematically that a negative-mean initial weight distribution has a strong positive impact on network behavior.
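A sketch of the two initializations being compared; the −0.2 offset is an arbitrary illustration of "negative mean", not a value from the paper:

import numpy as np

rng = np.random.default_rng(0)
n, alpha = 50, 0.5

W_classic = rng.uniform(-alpha, alpha, size=(n, n))         # zero-mean convention
W_inhib = rng.uniform(-alpha, alpha, size=(n, n)) - 0.2     # negative (inhibitory) mean

print(W_classic.mean(), W_inhib.mean())  # ~0.0 vs. ~-0.2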


Neural Processing Letters | 1995

Adaptative time constants improve the prediction capability of recurrent neural networks

Jean-Philippe Draye; Davor Pavisic; Guy Cheron; Gaetan Libert

Classical statistical techniques for prediction reach their limitations in applications with nonlinearities in the data set; neural models can overcome these limitations. In this paper, we present a recurrent neural model in which an adaptive time constant is associated with each neuron-like unit, together with a learning algorithm to train these dynamic recurrent networks. We test the network by training it to predict the Mackey-Glass chaotic signal. To evaluate the quality of the prediction, we computed the power spectra of the two signals and the associated fractional error. Results show that introducing adaptive time constants associated with each neuron of a recurrent network improves both the quality of the prediction and the dynamical features of the neural model. Such dynamic recurrent neural networks outperform time-delay neural networks.
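A hedged sketch of the evaluation pipeline: generate the Mackey-Glass series (standard parameters shown) and compare true and predicted signals through their power spectra. The fractional error below is one plausible reading (relative L2 distance between spectra), not necessarily the paper's exact definition.

import numpy as np

def mackey_glass(n, tau=17, beta=0.2, gamma=0.1, x0=1.2):
    x = np.full(n + tau, x0)                 # constant initial history
    for t in range(tau, n + tau - 1):
        x[t + 1] = x[t] + beta * x[t - tau] / (1 + x[t - tau]**10) - gamma * x[t]
    return x[tau:]

def spectral_fractional_error(true, pred):
    P_true = np.abs(np.fft.rfft(true))**2    # power spectrum of the target
    P_pred = np.abs(np.fft.rfft(pred))**2    # power spectrum of the prediction
    return np.linalg.norm(P_true - P_pred) / np.linalg.norm(P_true)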


European Journal of Operational Research | 1984

An automatic procedure for Box-Jenkins model building

Gaetan Libert

CAPRI is a fully automatic procedure which quickly builds a Box-Jenkins model and produces satisfactory forecasts. Its main peculiarity lies in the selection of the most recent values of the series for which all the statistical assumptions are verified. The fitted model can also be monitored and updated when new data become available.
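CAPRI itself is not publicly available; the sketch below shows generic automated Box-Jenkins (ARIMA) model building in the same spirit, searching a small (p, d, q) grid and keeping the best fit by AIC. The statsmodels API is real; the grid bounds are arbitrary choices.

import itertools
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def auto_box_jenkins(series, max_p=3, max_d=2, max_q=3):
    best_aic, best_order, best_fit = np.inf, None, None
    for p, d, q in itertools.product(range(max_p + 1),
                                     range(max_d + 1),
                                     range(max_q + 1)):
        try:
            fit = ARIMA(series, order=(p, d, q)).fit()
        except Exception:
            continue  # some orders fail to converge; skip them
        if fit.aic < best_aic:
            best_aic, best_order, best_fit = fit.aic, (p, d, q), fit
    return best_order, best_fit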


Annals of Operations Research | 1994

Multicriteria approach for intelligent decision support in supervisory control

Xavier Gandibleux; Camille Rosenthal-Sabroux; Gaetan Libert

In the framework of integrated automation, this work concerns the top level of the management and supervision of complex automated systems. When a process is disturbed, the supervisory function modifies the established production planning in accordance with different norms and constraints. The operator remains beside the regulated process controls to perform manual operations. The number of potential actions and the conflicting nature of some objectives make his task complex: he must reach quantitative and qualitative objectives with imperfect and time-dependent information. To assist the operator, we study a decision support model that takes a multicriteria approach to the supervision problem. Artificial intelligence techniques and decision support systems are used to develop the aid tool. The spinning reserve problem encountered by Electricité de France is studied and used as a case study. To test our concepts, we developed the CASTART experimental support tool, based on a synergy between the user, the problem, and the resolution models.

Collaboration


Dive into Gaetan Libert's collaborations.

Top Co-Authors


Jean-Philippe Draye

Faculté polytechnique de Mons


Davor Pavisic

Faculté polytechnique de Mons


Guy Cheron

Université libre de Bruxelles


Marc Bourgeois

Université libre de Bruxelles


Guy Chéron

University of Mons-Hainaut


Liang Wang

Faculté polytechnique de Mons
