Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Hugues Bersini is active.

Publication


Featured research published by Hugues Bersini.


International Journal of Control | 1999

Lazy Learning for Local Modelling and Control Design

Gianluca Bontempi; Mauro Birattari; Hugues Bersini

This paper presents local methods for modelling and control of discrete-time unknown non-linear dynamical systems, when only input-output data are available. We propose the adoption of lazy learning, a memory-based technique for local modelling. The modelling procedure uses a query-based approach to select the best model configuration by assessing and comparing different alternatives. A new recursive technique for local model identification and validation is presented, together with an enhanced statistical method for model selection. Also, three methods to design controllers based on the local linearization provided by the lazy learning algorithm are described. In the first method the lazy technique returns the forward and inverse models of the system which are used to compute the control action to take. The second is an indirect method inspired by self-tuning regulators where recursive least squares estimation is replaced by a local approximator. The third method combines the linearization provided by t...
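As a rough illustration of the memory-based idea, the sketch below (Python with NumPy; the function and parameter names are illustrative, not taken from the paper) fits a local linear model around a query point and selects the neighbourhood size with a leave-one-out (PRESS) criterion, a simplified stand-in for the recursive identification and statistical model selection described above.

```python
# Minimal sketch of memory-based (lazy) local linear modelling, assuming the
# input-output data are already collected; model selection is done by
# leave-one-out (PRESS) over a few neighbourhood sizes.
import numpy as np

def lazy_predict(X, y, query, k_candidates=(10, 20, 40)):
    """Predict y at `query` with the best local linear model among k_candidates."""
    d = np.linalg.norm(X - query, axis=1)          # distance of each stored example
    order = np.argsort(d)
    best_press, best_pred = np.inf, None
    for k in k_candidates:
        idx = order[:k]
        A = np.hstack([np.ones((k, 1)), X[idx]])   # local affine model y = b0 + b.x
        beta, *_ = np.linalg.lstsq(A, y[idx], rcond=None)
        H = A @ np.linalg.pinv(A)                  # hat matrix for leave-one-out residuals
        resid = y[idx] - A @ beta
        press = np.mean((resid / (1.0 - np.clip(np.diag(H), 0, 0.999))) ** 2)
        if press < best_press:
            best_press = press
            best_pred = np.r_[1.0, query] @ beta
    return best_pred

# toy usage on a noisy nonlinear map
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(500, 1))
y = np.sin(3 * X[:, 0]) + 0.05 * rng.standard_normal(500)
print(lazy_predict(X, y, np.array([0.5])))
```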


World Congress on Computational Intelligence | 1994

Hybridizing genetic algorithms with hill-climbing methods for global optimization: two possible ways

Jean-Michel Renders; Hugues Bersini

Two methods of hybridizing genetic algorithms (GA) with hill-climbing for global optimization are investigated. The first one involves two interwoven levels of optimization, evolution (GA) and individual learning (hill-climbing), which cooperate in the global optimization process. The second one consists of modifying a GA by the introduction of new genetic operators or by the alteration of traditional ones in such a way that these new operators capture the basic mechanisms of hill-climbing. The simplex-GA is one of the possibilities explained and tested. These two methods are applied and compared for the maximization of complex functions defined in high-dimensional real space.
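A minimal sketch of the first hybridization scheme, assuming a simple real-coded GA with a stochastic hill-climbing step applied to every individual; the fitness function, operators and parameters are illustrative and not those of the paper.

```python
# GA outer loop (evolution) interwoven with a hill-climbing inner loop
# (individual learning) on a maximisation problem.
import numpy as np

rng = np.random.default_rng(1)

def fitness(x):                      # multimodal test function (negated Rastrigin)
    return -(10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x)))

def hill_climb(x, steps=20, sigma=0.1):
    """Individual learning: accept random perturbations that improve fitness."""
    fx = fitness(x)
    for _ in range(steps):
        cand = x + rng.normal(0, sigma, size=x.shape)
        fc = fitness(cand)
        if fc > fx:
            x, fx = cand, fc
    return x

def hybrid_ga(dim=5, pop_size=30, generations=50):
    pop = rng.uniform(-5.12, 5.12, size=(pop_size, dim))
    for _ in range(generations):
        pop = np.array([hill_climb(ind) for ind in pop])            # learning level
        fit = np.array([fitness(ind) for ind in pop])
        probs = np.exp(fit - fit.max())                              # softmax selection weights
        parents = pop[rng.choice(pop_size, size=pop_size, p=probs / probs.sum())]
        alpha = rng.random((pop_size, 1))
        children = alpha * parents + (1 - alpha) * parents[::-1]     # arithmetic crossover
        children += rng.normal(0, 0.05, size=children.shape)         # mutation
        pop = children
    best = max(pop, key=fitness)
    return best, fitness(best)

print(hybrid_ga())
```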


World Congress on Computational Intelligence | 1994

Recurrent fuzzy systems

Vittorio Gorrini; Hugues Bersini

Beyond their linguistic interface, we believe fuzzy controllers to be not only universal approximators but also more general and efficient than their closest neural counterparts, radial basis functions. Consequently, in the spirit of recurrent neural networks, this paper aims at extending the approximation capacity of fuzzy controllers to dynamic processes of unknown order. We propose a new type of architecture, called a recurrent fuzzy system, together with a learning algorithm for adapting the membership functions.
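The sketch below illustrates the recurrent idea under simple assumptions: a zero-order Takagi-Sugeno model with Gaussian membership functions whose previous output is fed back as an extra input, so the rule base can capture dynamics of unknown order. Centres, widths and rule consequents are placeholders; in the paper they are adapted by the proposed learning algorithm.

```python
# Minimal recurrent fuzzy system: the previous output is fed back as an input.
import numpy as np

class RecurrentFuzzySystem:
    def __init__(self, centres, widths, consequents):
        self.c = centres        # (n_rules, 2): centres over [u(t), y(t-1)]
        self.s = widths         # (n_rules, 2): Gaussian membership widths
        self.w = consequents    # (n_rules,)  : crisp rule outputs
        self.y_prev = 0.0       # fed-back state

    def step(self, u):
        x = np.array([u, self.y_prev])
        act = np.exp(-0.5 * np.sum(((x - self.c) / self.s) ** 2, axis=1))  # rule firing strengths
        y = np.sum(act * self.w) / (np.sum(act) + 1e-12)                   # weighted defuzzification
        self.y_prev = y
        return y

# run the recurrent system on an input sequence
rng = np.random.default_rng(2)
rfs = RecurrentFuzzySystem(centres=rng.uniform(-1, 1, (5, 2)),
                           widths=np.full((5, 2), 0.5),
                           consequents=rng.uniform(-1, 1, 5))
print([round(rfs.step(np.sin(0.1 * t)), 3) for t in range(5)])
```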


formal methods | 2001

The local paradigm for modeling and control: from neuro-fuzzy to lazy learning

Gianluca Bontempi; Hugues Bersini; Mauro Birattari

The composition of simple local models for approximating complex nonlinear mappings is a common practice in recent modeling and control literature. This paper presents a comparative analysis of two different local approaches: the neuro-fuzzy inference system and the lazy learning approach. Neuro-fuzzy is a hybrid representation which combines the linguistic description typical of fuzzy inference systems, with learning procedures inspired by neural networks. Lazy learning is a memory-based technique that uses a query-based approach to select the best local model configuration by assessing and comparing different alternatives in cross-validation. In this paper, the two approaches are compared both as learning algorithms, and as identification modules of an adaptive control system. We show that lazy learning is able to provide better modeling accuracy and higher control performance at the cost of a reduced readability of the resulting approximator. Illustrative examples of identification and control of a nonlinear system starting from simulated data are given.


Fuzzy Sets and Systems | 1997

Now comes the time to defuzzify neuro-fuzzy models

Hugues Bersini; Gianluca Bontempi

Fuzzy models present a singular Janus-face: On the one hand, they are knowledge-based software environments constructed from a collection of linguistic IF-THEN rules, and on the other hand, they realize nonlinear mappings which have interesting mathematical properties like “low-order interpolation” and “universal function approximation”. Neuro-fuzzy basically provides fuzzy models with the capacity, based on the available data, to compensate for the missing human knowledge by an automatic self-tuning of the structure and the parameters. A first consequence of this hybridization between the architectural and representational aspect of fuzzy models and the learning mechanisms of neural networks has been to progressively increase and fuzzify the contrast between the two Janus faces: readability or performance.


Neural Networks | 2002

The connections between the frustrated chaos and the intermittency chaos in small Hopfield networks

Hugues Bersini; Pierre Sener

In a previous paper we introduced the notion of frustrated chaos occurring in Hopfield networks [Neural Networks 11 (1998) 1017]. It is a dynamical regime which appears in a network when the global structure is such that local connectivity patterns responsible for stable oscillatory behaviors are intertwined, leading to mutually competing attractors and unpredictable itinerancy among brief appearances of these attractors. Frustration destabilizes the network and provokes an erratic wavering among the orbits that characterize the same network when it is connected in a non-frustrated way. In this paper, through a detailed study of the bifurcation diagram given for some connection weights, we will show that this frustrated chaos belongs to the family of intermittency chaos, first described by Berge et al. [Order within chaos (1984)] and Pomeau and Manneville [Commun. Math. Phys. 74 (1980) 189]. Indeed, the transition to chaos is a critical one, and all along the bifurcation diagram, in any chaotic window, the duration of the intermittent cycles between two chaotic bursts grows in inverse ratio to the connection weight. Specific to this regime, the intermittent cycles are easily identifiable as the non-frustrated regimes obtained by altering the values of these same connection weights. We will more specifically show that anywhere in the bifurcation diagram a chaotic window always lies between two oscillatory regimes, and that the resulting chaos is a merging of, among others, the cycles at both ends. The strength (i.e. the duration of its oscillatory phase before the chaotic burst) of the first cycle decreases while the regime tends to stabilize into the second cycle (with the strength of this second cycle increasing), which finally takes control. Since, in our study, the bifurcation diagram concerns the same connection weights that are responsible for the learning mechanism of the Hopfield network, we will discuss the relations between bifurcation, learning and control of chaos. We will show that, in some cases, the addition of a slower Hebbian learning mechanism onto the Hopfield network makes the resulting global dynamics drive the network into a stable oscillatory regime, through a succession of intermittent and quasiperiodic regimes. Finally, we will present a series of possible logical steps to manually construct a frustrated network.
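For readers who want to experiment, the following sketch integrates a small continuous-time Hopfield network of the form dx_i/dt = -x_i + sum_j w_ij tanh(x_j) with a basic Euler scheme while sweeping a gain on one connection; the 3-neuron weight matrix is an arbitrary frustrated-looking example, not the network analysed in the paper.

```python
# Euler integration of a small continuous-time Hopfield network while one
# connection weight is swept, as a crude way to probe different regimes.
import numpy as np

def simulate_hopfield(W, x0, dt=0.01, steps=20000):
    traj = np.empty((steps, len(x0)))
    x = np.array(x0, dtype=float)
    for t in range(steps):
        x = x + dt * (-x + W @ np.tanh(x))   # Euler step of the Hopfield ODE
        traj[t] = x
    return traj

W = np.array([[ 0.0,  5.0, -5.0],
              [-5.0,  0.0,  5.0],
              [ 5.0, -5.0,  0.0]])           # cyclically competing (frustration-like) couplings

for g in (0.8, 1.0, 1.2):                    # sweep a gain on one connection
    Wg = W.copy()
    Wg[0, 1] *= g
    traj = simulate_hopfield(Wg, x0=[0.1, 0.2, -0.1])
    print(g, traj[-1].round(3))              # late-time state as a rough regime indicator
```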


Neural Networks | 1998

The frustrated and compositional nature of chaos in small Hopfield networks

Hugues Bersini

Frustration in a network described by a set of ordinary differential equations induces chaos when the global structure is such that local connectivity patterns responsible for stable oscillatory behaviours are intertwined, leading to mutually competing attractors and unpredictable itinerancy among brief appearances of these attractors. Frustration destabilizes the network and provokes an erratic wavering among the periodic saddle orbits which characterize the same network when it is connected in a non-frustrated way. The characterization of chaos as some form of unpredictable wavering among repelling oscillators is rather classical, but the originality here lies in the identification of these oscillators as the stable regimes of the non-frustrated network. In this paper, a simple and small 6-neuron Hopfield network is treated, observed and analyzed in its chaotic regime. Given a certain choice of the network parameters, chaos occurs when the network is connected in a specific way (said to be frustrated) and gives way to oscillatory regimes when any connection between two neurons is suppressed. The compositional nature of the chaotic attractor, as a succession of brief appearances of orbits (or parts of orbits) associated with the non-frustrated networks, is evidenced by relying on symbolic dynamics, through the computation of Lyapunov exponents, and by computing the autocorrelation coefficients and the spectrum.


European Conference on Machine Learning | 1998

Recursive Lazy Learning for Modeling and Control

Gianluca Bontempi; Mauro Birattari; Hugues Bersini

This paper presents a local method for modeling and control of non-linear dynamical systems from input-output data. The proposed methodology couples a local model identification inspired by the lazy learning technique, with a control strategy based on linear optimal control theory. The local modeling procedure uses a query-based approach to select the best model configuration by assessing and comparing different alternatives. A new recursive technique for local model identification and validation is presented, together with an enhanced statistical method for model selection. The control method combines the linearization provided by the local learning techniques with optimal linear control theory, to control non-linear systems in configurations which are far from equilibrium. Simulations of the identification of a non-linear benchmark model and of the control of a complex non-linear system (the bioreactor) are presented. The experimental results show that the approach can obtain better performance than neural networks in identification and control, even using smaller training data sets.
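As a hedged illustration of coupling local identification with control, the sketch below fits a first-order local ARX model y(t+1) ≈ a·y(t) + b·u(t) + c around the current operating point and inverts it to track a constant reference on a toy plant. This is a deliberately simplified stand-in for the recursive identification and optimal-control machinery of the paper; the plant and all parameters are invented.

```python
# Local ARX identification inverted into a one-step control law (toy example).
import numpy as np

def plant(y, u):                                   # "unknown" nonlinear plant
    return 0.8 * np.sin(y) + 0.5 * u

def local_arx_control(Y, U, y_now, reference, k=30):
    """Fit y(t+1) ~ a*y(t) + b*u(t) + c near y_now, then solve for u(t).
    Y has length N+1 (outputs), U has length N (inputs)."""
    N = len(U)
    d = np.abs(Y[:N] - y_now)                      # neighbours in output space
    idx = np.argsort(d)[:k]
    A = np.column_stack([Y[:N][idx], U[idx], np.ones(len(idx))])
    a, b, c = np.linalg.lstsq(A, Y[1:N + 1][idx], rcond=None)[0]
    return (reference - a * y_now - c) / (b + 1e-9)

# collect open-loop data with random excitation
rng = np.random.default_rng(3)
U = rng.uniform(-2, 2, 500)
Y = np.zeros(501)
for t in range(500):
    Y[t + 1] = plant(Y[t], U[t])

# closed loop: track a constant reference using only the logged data
y, ref = 0.0, 0.7
for _ in range(10):
    u = local_arx_control(Y, U, y, ref)
    y = plant(y, u)
print(round(y, 3))                                 # should settle near the reference
```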


FEBS Letters | 2003

Integration and cross-validation of high-throughput gene expression data: comparing heterogeneous data sets

Vincent Detours; Jacques Emile Dumont; Hugues Bersini; Carine Maenhaut

Data analysis – not data production – is becoming the bottleneck in gene expression research. Data integration is necessary to cope with an ever increasing amount of data, to cross‐validate noisy data sets, and to gain broad interdisciplinary views of large biological data sets. New Internet resources may help researchers to combine data sets across different gene expression platforms. However, noise and disparities in experimental protocols strongly limit data integration. A detailed review of four selected studies reveals how some of these limitations may be circumvented and illustrates what can be achieved through data integration.


Computer Methods in Applied Mechanics and Engineering | 2003

Parametrical mechanical design with constraints and preferences: application to a purge valve

R. Filomeno Coelho; Hugues Bersini; Ph. Bouillard

In the design of mechanical structures, evolutionary algorithms have taken an increasingly important place, mostly because of their ability to explore the design space widely. Furthermore, as several objectives are often pursued simultaneously in industrial applications, multiobjective optimization has become a broad area of research in recent years. However, only a few methods integrate a multicriteria decision aid approach to reflect the user's preferences from the beginning of the search process. In this paper, PROMETHEE II, an outranking method developed in the operational research field, is implemented in an evolutionary algorithm. Furthermore, as the handling of the constraints is very critical, an original technique called PAMUC (Preferences Applied to MUltiobjectivity and Constraints) is proposed to tackle the constrained and multiobjective aspects simultaneously. It has been validated on standard test cases and applied to the design optimization of two valves of the Vinci engine (from the Ariane 5 launcher). Results analyzed with the R1 norm introduced by Hansen and Jaszkiewicz show that PAMUC outperforms the classical weighted-sum method (combined with a dynamic penalty-based technique to handle the constraints), and therefore seems more appropriate to reflect the user's preferences.
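A minimal sketch of the PROMETHEE II net-flow ranking that PAMUC embeds in the evolutionary loop, using a linear preference function per criterion; the alternatives, weights and thresholds below are invented for illustration, and every criterion is assumed to be maximized.

```python
# PROMETHEE II net outranking flows with a linear preference function.
import numpy as np

def promethee2_net_flows(F, weights, p_thresholds):
    """F: (n_alternatives, n_criteria) performance table. Returns net flows."""
    n, m = F.shape
    phi = np.zeros(n)
    for a in range(n):
        for b in range(n):
            if a == b:
                continue
            d = F[a] - F[b]                                     # pairwise differences
            pref = np.clip(d / p_thresholds, 0.0, 1.0)          # linear preference in [0, 1]
            pref[d <= 0] = 0.0
            pi_ab = np.dot(weights, pref)                       # aggregated preference of a over b
            phi[a] += pi_ab / (n - 1)                           # positive flow contribution
            phi[b] -= pi_ab / (n - 1)                           # negative flow contribution
    return phi

F = np.array([[0.9, 0.2, 0.5],
              [0.6, 0.8, 0.4],
              [0.7, 0.5, 0.9]])
weights = np.array([0.5, 0.3, 0.2])
phi = promethee2_net_flows(F, weights, p_thresholds=np.array([0.5, 0.5, 0.5]))
print(np.argsort(-phi))    # alternatives ranked from best to worst net flow
```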

Collaboration


Dive into Hugues Bersini's collaboration.

Top Co-Authors

Gianluca Bontempi
Université libre de Bruxelles

Mauro Birattari
Université libre de Bruxelles

Tom Lenaerts
Université libre de Bruxelles

Francisco J. Varela
Centre national de la recherche scientifique

Vittorio Gorrini
Université libre de Bruxelles

Marco Saerens
Université catholique de Louvain

Emma Hart
Edinburgh Napier University

Antoine Duchateau
Université libre de Bruxelles