Publication


Featured research published by Amaury Lendasse.


IEEE Transactions on Neural Networks | 2010

OP-ELM: Optimally Pruned Extreme Learning Machine

Yoan Miche; Antti Sorjamaa; Patrick Bas; Olli Simula; Christian Jutten; Amaury Lendasse

In this brief, the optimally pruned extreme learning machine (OP-ELM) methodology is presented. It is based on the original extreme learning machine (ELM) algorithm with additional steps to make it more robust and generic. The whole methodology is presented in detail and then applied to several regression and classification problems. Results for both computational time and accuracy (mean square error) are compared to the original ELM and to three other widely used methodologies: multilayer perceptron (MLP), support vector machine (SVM), and Gaussian process (GP). As the experiments for both regression and classification illustrate, the proposed OP-ELM methodology performs several orders of magnitude faster than the other algorithms used in this brief, except the original ELM. Despite its simplicity and speed, the OP-ELM still maintains accuracy comparable to that of the SVM. A toolbox for the OP-ELM is publicly available online.
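
The base ELM step that OP-ELM builds on is a single random hidden layer whose output weights are solved by linear least squares. A minimal NumPy sketch of that step follows (an illustration, not the authors' toolbox; the tanh activation, layer size, and pseudoinverse solve are assumptions for the example):

```python
import numpy as np

def elm_train(X, y, n_hidden=50, seed=None):
    """Train a basic ELM: random hidden layer, least-squares output weights."""
    rng = np.random.default_rng(seed)
    # Random input weights and biases are drawn once and never updated.
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # output weights by least squares
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy usage: fit a noisy 1-D regression problem.
X = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(X).ravel() + 0.1 * np.random.default_rng(0).standard_normal(200)
W, b, beta = elm_train(X, y, n_hidden=50, seed=0)
print(np.mean((elm_predict(X, W, b, beta) - y) ** 2))  # training MSE
```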


Neurocomputing | 2007

Methodology for long-term prediction of time series

Antti Sorjamaa; Jin Hao; Nima Reyhani; Yongnan Ji; Amaury Lendasse

In this paper, a global methodology for the long-term prediction of time series is proposed. This methodology combines a direct prediction strategy with sophisticated input selection criteria: the k-nearest neighbors approximation method (k-NN), mutual information (MI) and nonparametric noise estimation (NNE). A global input selection strategy that combines forward selection, backward elimination (or pruning) and forward-backward selection is introduced. This methodology is used to optimize the three input selection criteria (k-NN, MI and NNE). The methodology is successfully applied to a real-life benchmark: the Poland Electricity Load dataset.
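
The direct prediction strategy mentioned above trains a separate model for each horizon h, each mapping the same lag vector to the value h steps ahead, rather than feeding one-step predictions back recursively. A hedged sketch using k-NN regression (one of the paper's three criteria) follows; the lag count, horizon, and neighbor count are illustrative, and the paper's input selection step is omitted:

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

def direct_forecast(series, n_lags=12, horizon=6, k=5):
    """Direct strategy: fit one k-NN model per horizon h, each mapping
    the same lag vector to the value h steps ahead (no recursion)."""
    X = np.array([series[t - n_lags:t] for t in range(n_lags, len(series) - horizon)])
    models = []
    for h in range(1, horizon + 1):
        y_h = np.array([series[t + h - 1] for t in range(n_lags, len(series) - horizon)])
        models.append(KNeighborsRegressor(n_neighbors=k).fit(X, y_h))
    last = series[-n_lags:].reshape(1, -1)
    return np.array([m.predict(last)[0] for m in models])

series = np.sin(np.arange(300) * 0.1) + 0.05 * np.random.default_rng(1).standard_normal(300)
print(direct_forecast(series))  # 6-step-ahead forecasts, one model per step
```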


Chemometrics and Intelligent Laboratory Systems | 2006

Mutual information for the selection of relevant variables in spectrometric nonlinear modelling

Fabrice Rossi; Amaury Lendasse; Damien François; Vincent Wertz; Michel Verleysen

Data from spectrophotometers form vectors of a large number of exploitable variables. Building quantitative models using these variables most often requires using a smaller set of variables than the initial one. Indeed, too many input variables to a model result in too many parameters, leading to overfitting and poor generalization ability. In this paper, we suggest the use of the mutual information measure to select variables from the initial set. Mutual information measures the information content of input variables with respect to the model output, without making any assumption on the model that will be used; it is thus suitable for nonlinear modelling. In addition, it leads to the selection of variables among the initial set, and not to linear or nonlinear combinations of them. Without decreasing model performance compared to other variable projection methods, it therefore allows greater interpretability of the results.
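
As an illustration of the idea, the sketch below scores each candidate variable by its estimated mutual information with the output and keeps the highest-scoring ones. It uses scikit-learn's k-NN-based MI estimator as a stand-in; the paper's exact estimator, search procedure, and stopping criterion are not reproduced:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 20))           # 20 candidate spectral variables
y = np.sin(X[:, 3]) + X[:, 7] ** 2 + 0.1 * rng.standard_normal(300)

# Estimate MI between each input variable and the output (model-free score).
mi = mutual_info_regression(X, y, random_state=0)
selected = np.argsort(mi)[::-1][:5]          # keep the 5 most informative variables
print("selected variables:", sorted(selected.tolist()))
```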


Neurocomputing | 2004

Nonlinear projection with curvilinear distances: Isomap versus curvilinear distance analysis

John Aldo Lee; Amaury Lendasse; Michel Verleysen

Dimension reduction techniques are widely used for the analysis and visualization of complex sets of data. This paper compares two recently published methods for nonlinear projection: Isomap and Curvilinear Distance Analysis (CDA). In contrast to traditional linear PCA, these methods work like multidimensional scaling (MDS), by reproducing in the projection space the pairwise distances measured in the data space. However, they differ from classical linear MDS in the metrics they use and in the way they build the mapping (algebraic or neural). While Isomap relies directly on traditional MDS, CDA is based on a nonlinear variant of MDS, called Curvilinear Component Analysis (CCA). Although Isomap and CDA share the same metric, the comparison highlights their respective strengths and weaknesses.
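
Both methods replace Euclidean distances with curvilinear (geodesic) distances measured along a neighborhood graph. Scikit-learn ships Isomap but not CDA, so the sketch below shows only the Isomap side on a synthetic Swiss roll; the neighborhood size is an arbitrary choice for the example:

```python
import numpy as np
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap

X, color = make_swiss_roll(n_samples=1000, random_state=0)

# Isomap: k-NN graph -> shortest-path (geodesic) distances -> classical MDS.
embedding = Isomap(n_neighbors=10, n_components=2).fit_transform(X)
print(embedding.shape)  # (1000, 2): the unrolled 2-D coordinates
```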


Neurocomputing | 2011

TROP-ELM: A double-regularized ELM using LARS and Tikhonov regularization

Yoan Miche; Mark van Heeswijk; Patrick Bas; Olli Simula; Amaury Lendasse

In this paper an improvement of the optimally pruned extreme learning machine (OP-ELM) in the form of an L2 regularization penalty applied within the OP-ELM is proposed. The OP-ELM originally proposes a wrapper methodology around the extreme learning machine (ELM) meant to reduce the sensitivity of the ELM to irrelevant variables and to obtain more parsimonious models thanks to neuron pruning. The proposed modification of the OP-ELM uses a cascade of two regularization penalties: first an L1 penalty to rank the neurons of the hidden layer, followed by an L2 penalty on the regression weights (regression between hidden layer and output layer) for numerical stability and efficient pruning of the neurons. The new methodology is tested against state-of-the-art methods such as support vector machines or Gaussian processes and the original ELM and OP-ELM, on 11 different data sets; it systematically outperforms the OP-ELM (on average 27% better mean square error) and provides more reliable results – in terms of standard deviation of the results – while always remaining less than one order of magnitude slower than the OP-ELM.
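
A hedged sketch of the cascade described above: build an ELM hidden layer, rank its neurons by their order of entry on a LARS (L1) path, then fit a Tikhonov-regularized (L2) solve on the top-ranked neurons. The paper chooses the pruning point and the ridge parameter by leave-one-out criteria; here both are fixed constants for brevity:

```python
import numpy as np
from sklearn.linear_model import Lars, Ridge

rng = np.random.default_rng(0)
X = rng.standard_normal((400, 8))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + 0.1 * rng.standard_normal(400)

# ELM hidden layer: random projection + nonlinearity.
W = rng.standard_normal((8, 100))
b = rng.standard_normal(100)
H = np.tanh(X @ W + b)

# Step 1 (L1): LARS ranks hidden neurons by their order of entry on the path.
lars = Lars(n_nonzero_coefs=100).fit(H, y)
ranking = list(lars.active_)

# Step 2 (L2): Tikhonov-regularized solve on the top-ranked neurons.
top = ranking[:30]                 # pruning point fixed here; the paper uses LOO
model = Ridge(alpha=1e-2).fit(H[:, top], y)
print("train MSE:", np.mean((model.predict(H[:, top]) - y) ** 2))
```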


Neurocomputing | 2011

GPU-Accelerated and Parallelized ELM Ensembles for Large-scale Regression

Mark van Heeswijk; Yoan Miche; Erkki Oja; Amaury Lendasse

The paper presents an approach for performing regression on large data sets in reasonable time, using an ensemble of extreme learning machines (ELMs). The main purpose and contribution of this paper are to explore how the evaluation of this ensemble of ELMs can be accelerated in three distinct ways: (1) training and model structure selection of the individual ELMs are accelerated by performing these steps on the graphics processing unit (GPU), instead of the processor (CPU); (2) the training of ELM is performed in such a way that computed results can be reused in the model structure selection, making training plus model structure selection more efficient; (3) the modularity of the ensemble model is exploited and the process of model training and model structure selection is parallelized across multiple GPU and CPU cores, such that multiple models can be built at the same time. The experiments show that competitive performance is obtained on the regression tasks, and that the GPU-accelerated and parallelized ELM ensemble achieves attractive speedups over using a single CPU. Furthermore, the proposed approach is not limited to a specific type of ELM and can be employed for a large variety of ELMs.
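
Point (3) above rests on the fact that ensemble members are fully independent, so their training parallelizes trivially. The sketch below distributes members across CPU cores with Python's ProcessPoolExecutor; the GPU-accelerated solve of points (1)-(2) is not shown, and all sizes and seeds are illustrative:

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def train_member(args):
    """Train one independent ELM ensemble member; members share no state,
    so they can be built concurrently on separate cores (or GPUs)."""
    X, y, n_hidden, seed = args
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return W, b, beta

def ensemble_predict(models, X):
    # Ensemble output: plain average of the members' predictions.
    return np.mean([np.tanh(X @ W + b) @ beta for W, b, beta in models], axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((2000, 10))
    y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(2000)
    jobs = [(X, y, 200, s) for s in range(8)]   # 8 members, different seeds
    with ProcessPoolExecutor() as pool:
        models = list(pool.map(train_member, jobs))
    print("ensemble MSE:", np.mean((ensemble_predict(models, X) - y) ** 2))
```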


IEEE Access | 2015

High-Performance Extreme Learning Machines: A Complete Toolbox for Big Data Applications

Anton Akusok; Kaj-Mikael Björk; Yoan Miche; Amaury Lendasse

This paper presents a complete approach to the successful utilization of a high-performance Extreme Learning Machine (ELM) toolbox for Big Data. It summarizes recent advances in algorithmic performance; gives a fresh view of the ELM solution in relation to traditional linear algebraic performance; and leverages the latest software and hardware performance achievements. The results are applicable to a wide range of machine learning problems and thus provide a solid ground for tackling numerous Big Data challenges. The included toolbox is targeted at enabling the full potential of ELMs for the widest range of users.
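
One linear-algebra pattern that makes ELMs practical at Big Data scale is accumulating the small normal-equation matrices over data batches, so the full hidden-layer matrix never has to fit in memory. A sketch of that pattern, assuming a streaming iterator of (X, y) batches (an illustration of the general idea, not the toolbox's API):

```python
import numpy as np

def elm_train_batched(batches, n_inputs, n_hidden=256, alpha=1e-3, seed=0):
    """Big-data-friendly ELM solve: accumulate the small n_hidden x n_hidden
    normal equations over data batches, so the full H never sits in memory."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((n_inputs, n_hidden))
    b = rng.standard_normal(n_hidden)
    HtH = np.zeros((n_hidden, n_hidden))
    Hty = np.zeros(n_hidden)
    for Xb, yb in batches:                     # batches can stream from disk
        H = np.tanh(Xb @ W + b)
        HtH += H.T @ H
        Hty += H.T @ yb
    # Tikhonov-regularized solve of the accumulated normal equations.
    beta = np.linalg.solve(HtH + alpha * np.eye(n_hidden), Hty)
    return W, b, beta
```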


International Conference on Artificial Neural Networks | 2008

OP-ELM: Theory, Experiments and a Toolbox

Yoan Miche; Antti Sorjamaa; Amaury Lendasse

This paper presents the Optimally Pruned Extreme Learning Machine (OP-ELM) toolbox. This novel, fast and accurate methodology is applied to several regression and classification problems. The results are compared with the widely known Multilayer Perceptron (MLP) and Least-Squares Support Vector Machine (LS-SVM) methods. As the experiments (regression and classification) demonstrate, the OP-ELM methodology is considerably faster than the MLP and the LS-SVM, while maintaining accuracy at a comparable level. Finally, a toolbox implementing the OP-ELM is introduced and usage instructions are provided.
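
The "optimal pruning" decision in OP-ELM comes down to a closed-form leave-one-out (PRESS) error: once neurons are ranked (with MRSR/LARS in the papers above), the kept count is the one minimizing the LOO error, which can be computed without any refitting. A sketch, with H_ranked assumed to hold hidden-layer columns already in ranked order:

```python
import numpy as np

def press_loo_mse(H, y):
    """Closed-form leave-one-out MSE (PRESS) for a linear solve y ~ H beta:
    loo residual_i = residual_i / (1 - hat_ii), no refitting needed."""
    G = np.linalg.pinv(H.T @ H)
    beta = G @ H.T @ y
    residuals = y - H @ beta
    hat_diag = np.einsum("ij,jk,ik->i", H, G, H)   # diag of H (H'H)^-1 H'
    return np.mean((residuals / (1.0 - hat_diag)) ** 2)

def best_neuron_count(H_ranked, y):
    # Evaluate PRESS as ranked neurons are added; keep the count with the
    # smallest leave-one-out error.
    scores = [press_loo_mse(H_ranked[:, : m + 1], y) for m in range(H_ranked.shape[1])]
    return int(np.argmin(scores)) + 1
```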


International Conference on Artificial Neural Networks | 2009

Adaptive Ensemble Models of Extreme Learning Machines for Time Series Prediction

Mark van Heeswijk; Yoan Miche; Tiina Lindh-Knuutila; Peter A. J. Hilbers; Timo Honkela; Erkki Oja; Amaury Lendasse

In this paper, we investigate the application of adaptive ensemble models of Extreme Learning Machines (ELMs) to the problem of one-step-ahead prediction in (non)stationary time series. We verify that the method works on stationary time series and test the adaptivity of the ensemble model on a nonstationary time series. In the experiments, we show that the adaptive ensemble model achieves a test error comparable to the best methods, while remaining adaptive. Moreover, it has low computational cost.
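
A minimal sketch of the adaptivity idea: each ensemble member keeps a weight that is nudged toward members with smaller recent squared error, so the mixture can track a drifting series. The exponential scoring and update rate below are assumptions for the example, not the paper's exact rule:

```python
import numpy as np

def adaptive_ensemble_step(preds, y_true, weights, rate=0.1):
    """One online update: combine member predictions with current weights,
    then shift weight toward members with smaller squared error."""
    y_hat = np.dot(weights, preds)             # weighted ensemble forecast
    errors = (preds - y_true) ** 2
    scores = np.exp(-errors / (errors.mean() + 1e-12))
    weights = (1 - rate) * weights + rate * scores / scores.sum()
    return y_hat, weights / weights.sum()

# Usage: 5 members, uniform initial weights, one time step.
w = np.full(5, 0.2)
y_hat, w = adaptive_ensemble_step(np.array([1.1, 0.9, 1.4, 1.0, 0.8]), 1.0, w)
print(y_hat, w)
```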


Neurocomputing | 2014

Bankruptcy prediction using Extreme Learning Machine and financial expertise

Qi Yu; Yoan Miche; Eric Séverin; Amaury Lendasse

Bankruptcy prediction has been widely studied as a binary classification problem using financial-ratio methodologies. In this paper, the Leave-One-Out-Incremental Extreme Learning Machine (LOO-IELM) is explored for this task. LOO-IELM operates in an incremental way to avoid inefficient and unnecessary calculations, and stops automatically once a suitable number of neurons, which is not known in advance, has been reached. Moreover, a Combo method and a further Ensemble model are investigated, built from different LOO-IELM models and specific financial indicators. These indicators are chosen using different strategies according to financial expertise. The entire process shows good performance at very high speed, and also helps to interpret the model and the selected ratios.
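
The incremental-with-automatic-stopping behavior can be sketched as follows: neurons are added one at a time, the closed-form leave-one-out (PRESS) error is tracked, and growth stops once it no longer improves. The patience rule and activation below are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def loo_ielm(X, y, max_neurons=200, patience=10, seed=0):
    """Grow the hidden layer one neuron at a time and stop automatically
    when the closed-form leave-one-out (PRESS) error stops improving."""
    rng = np.random.default_rng(seed)
    H = np.empty((X.shape[0], 0))
    best, best_m, since = np.inf, 0, 0
    for m in range(1, max_neurons + 1):
        w = rng.standard_normal(X.shape[1])
        b = rng.standard_normal()
        H = np.hstack([H, np.tanh(X @ w + b).reshape(-1, 1)])
        G = np.linalg.pinv(H.T @ H)
        res = y - H @ (G @ H.T @ y)
        hat = np.einsum("ij,jk,ik->i", H, G, H)
        loo = np.mean((res / (1 - hat)) ** 2)   # PRESS leave-one-out MSE
        if loo < best:
            best, best_m, since = loo, m, 0
        else:
            since += 1
            if since >= patience:               # no improvement: stop growing
                break
    return best_m, best

X = np.random.default_rng(1).standard_normal((300, 5))
y = np.sin(X[:, 0]) + 0.1 * np.random.default_rng(2).standard_normal(300)
print(loo_ielm(X, y))  # (selected neuron count, its LOO error)
```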

Collaboration


Dive into Amaury Lendasse's collaborations.

Top Co-Authors

Michel Verleysen
Université catholique de Louvain

Yoan Miche
Helsinki University of Technology

Rui Nian
Ocean University of China

Bo He
Ocean University of China

Vincent Wertz
Université catholique de Louvain

Antti Sorjamaa
Helsinki University of Technology

Kaj-Mikael Björk
Arcada University of Applied Sciences