
Publications


Featured research published by Yoan Miche.


IEEE Transactions on Neural Networks | 2010

OP-ELM: Optimally Pruned Extreme Learning Machine

Yoan Miche; Antti Sorjamaa; Patrick Bas; Olli Simula; Christian Jutten; Amaury Lendasse

In this brief, the optimally pruned extreme learning machine (OP-ELM) methodology is presented. It is based on the original extreme learning machine (ELM) algorithm with additional steps to make it more robust and generic. The whole methodology is presented in detail and then applied to several regression and classification problems. Results for both computational time and accuracy (mean square error) are compared to the original ELM and to three other widely used methodologies: multilayer perceptron (MLP), support vector machine (SVM), and Gaussian process (GP). As the experiments for both regression and classification illustrate, the proposed OP-ELM methodology performs several orders of magnitude faster than the other algorithms used in this brief, except the original ELM. Despite the simplicity and fast performance, the OP-ELM is still able to maintain an accuracy that is comparable to the performance of the SVM. A toolbox for the OP-ELM is publicly available online.
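As a rough illustration of the idea underlying all ELM variants, a basic ELM for regression can be sketched in a few lines: the hidden layer is drawn at random and never trained, and only the linear output weights are fitted by least squares. This is a minimal sketch with illustrative names, not code from the OP-ELM toolbox:

```python
import numpy as np

def elm_train(X, y, n_hidden=50, seed=0):
    """Fit a basic ELM: random fixed hidden layer, least-squares output weights."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights (never trained)
    b = rng.normal(size=n_hidden)                 # random biases
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # linear output weights
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

# Toy regression problem: learn y = sin(x)
rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0])
model = elm_train(X, y)
mse = float(np.mean((elm_predict(model, X) - y) ** 2))
```

Training reduces to a single linear solve, which is why ELM-based methods can be orders of magnitude faster to train than MLPs or SVMs.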


International Conference on Artificial Neural Networks | 2008

OP-ELM: Theory, Experiments and a Toolbox

Yoan Miche; Antti Sorjamaa; Amaury Lendasse

This paper presents the Optimally-Pruned Extreme Learning Machine (OP-ELM) toolbox. This novel, fast and accurate methodology is applied to several regression and classification problems. The results are compared with the widely known Multilayer Perceptron (MLP) and Least-Squares Support Vector Machine (LS-SVM) methods. As the experiments (regression and classification) demonstrate, the OP-ELM methodology is considerably faster than the MLP and the LS-SVM, while maintaining accuracy at the same level. Finally, a toolbox implementing the OP-ELM is introduced and usage instructions are presented.


International Conference on Artificial Neural Networks | 2009

Adaptive Ensemble Models of Extreme Learning Machines for Time Series Prediction

Mark van Heeswijk; Yoan Miche; Tiina Lindh-Knuutila; Peter A. J. Hilbers; Timo Honkela; Erkki Oja; Amaury Lendasse

In this paper, we investigate the application of adaptive ensemble models of Extreme Learning Machines (ELMs) to the problem of one-step ahead prediction in (non)stationary time series. We verify that the method works on stationary time series and test the adaptivity of the ensemble model on a nonstationary time series. In the experiments, we show that the adaptive ensemble model achieves a test error comparable to the best methods, while keeping adaptivity. Moreover, it has low computational cost.
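The adaptivity described above can be illustrated with a minimal sketch: each member model's recent one-step-ahead squared error is tracked with a running average, and the ensemble weights are renormalized to favour the currently accurate models. The simple stand-in models and the exponential weighting rule below are illustrative assumptions, not the paper's exact scheme:

```python
import numpy as np

def adaptive_ensemble(y, models, lr=0.1):
    """One-step-ahead prediction with an error-adaptive convex combination."""
    w = np.ones(len(models)) / len(models)        # start with uniform weights
    preds = []
    errs = np.zeros(len(models))
    for t in range(len(y) - 1):
        p = np.array([m(y[t]) for m in models])   # each member's prediction
        preds.append(float(w @ p))                # ensemble prediction of y[t+1]
        errs = (1 - lr) * errs + lr * (p - y[t + 1]) ** 2  # running squared error
        w = np.exp(-errs)                         # downweight poorly performing models
        w /= w.sum()
    return np.array(preds)

# Toy nonstationary series: the mean jumps halfway through
y = np.concatenate([np.ones(100), 3.0 * np.ones(100)])
models = [lambda x: 1.0, lambda x: 3.0, lambda x: x]  # two constants + persistence
p = adaptive_ensemble(y, models)
```

After the regime change, the weights shift toward the models that track the new level, so the ensemble adapts without retraining any member.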


ACM Multimedia | 2006

A feature selection methodology for steganalysis

Yoan Miche; Benoit Roue; Amaury Lendasse; Patrick Bas

This paper presents a methodology to select features before training a classifier based on Support Vector Machines (SVM). In this study, 23 features presented in [1] are analysed. A feature ranking is performed using a fast K-Nearest-Neighbours classifier combined with a forward selection. The result of the feature selection is afterwards tested with an SVM to select the optimal number of features. The method is tested with the Outguess steganographic software, and 14 features are selected while keeping the same classification performance. Results confirm that the selected features are efficient for a wide variety of embedding rates. The same methodology is also applied to Steghide and F5 to see whether feature selection is possible for these schemes.
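The ranking step can be sketched as follows: a 1-nearest-neighbour classifier scored by leave-one-out accuracy drives a greedy forward selection. This is an illustrative sketch on toy data; the paper's exact KNN configuration and the 23 steganalysis features are not reproduced here:

```python
import numpy as np

def knn_loo_accuracy(X, y):
    """Leave-one-out accuracy of a 1-nearest-neighbour classifier."""
    D = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2)
    np.fill_diagonal(D, np.inf)                 # a point may not be its own neighbour
    return float(np.mean(y[np.argmin(D, axis=1)] == y))

def forward_select(X, y, k):
    """Greedily add the feature that most improves LOO 1-NN accuracy."""
    chosen = []
    for _ in range(k):
        remaining = [j for j in range(X.shape[1]) if j not in chosen]
        scores = [knn_loo_accuracy(X[:, chosen + [j]], y) for j in remaining]
        chosen.append(remaining[int(np.argmax(scores))])
    return chosen

# Toy data: only feature 0 separates the classes; features 1-2 are noise
rng = np.random.default_rng(0)
y = np.repeat([0, 1], 40)
X = rng.normal(size=(80, 3))
X[:, 0] += 3 * y                                # make feature 0 informative
ranking = forward_select(X, y, k=3)
```

Because KNN needs no training phase, each candidate subset is scored cheaply, which is what makes it practical as a ranking engine ahead of the (much slower) SVM.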


International Symposium on Neural Networks | 2008

Long-term prediction of time series using NNE-based projection and OP-ELM

Antti Sorjamaa; Yoan Miche; Robert Weiss; Amaury Lendasse

This paper proposes a combination of methodologies based on a recent development, the Extreme Learning Machine (ELM), which drastically decreases the training time of nonlinear models. Variable selection is first performed on the original dataset, using Partial Least Squares (PLS) and a projection based on Nonparametric Noise Estimation (NNE), to ensure proper results from the ELM method. Then, after the network is first created using the original ELM, the most relevant nodes are selected using a Least Angle Regression (LARS) ranking of the nodes and a Leave-One-Out estimation of the performance, leading to an Optimally-Pruned ELM (OP-ELM). Finally, the prediction accuracy of the global methodology is demonstrated using the ESTSP 2008 Competition and Poland Electricity Load datasets.
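The pruning step can be sketched abstractly: hidden neurons are ranked, and the closed-form PRESS leave-one-out error decides how many top-ranked neurons to keep. For simplicity, a correlation-based ranking stands in below for the LARS ranking used in OP-ELM proper (an assumption of this sketch):

```python
import numpy as np

def press_loo_mse(H, y):
    """Closed-form leave-one-out MSE (PRESS) of the linear fit y ~ H."""
    Hp = np.linalg.pinv(H)
    resid = y - H @ (Hp @ y)
    hat = np.einsum('ij,ji->i', H, Hp)           # diagonal of the hat matrix H @ pinv(H)
    return float(np.mean((resid / (1.0 - hat)) ** 2))

def prune_elm(H, y):
    """Rank hidden neurons, then keep the prefix with the best LOO error."""
    corr = np.corrcoef(H.T, y)[-1, :-1]          # correlation of each neuron with y
    rank = np.argsort(-np.abs(corr))             # stand-in for the LARS ranking
    best_err, best_k = np.inf, 1
    for k in range(1, H.shape[1] + 1):
        err = press_loo_mse(H[:, rank[:k]], y)
        if err < best_err:
            best_err, best_k = err, k
    return rank[:best_k], best_err

# Demo: 20 "hidden neurons", of which only the first two drive the target
rng = np.random.default_rng(0)
H = rng.normal(size=(100, 20))
y = 2.0 * H[:, 0] - H[:, 1] + 0.1 * rng.normal(size=100)
kept, loo = prune_elm(H, y)
```

The PRESS formula gives the exact leave-one-out error of a linear model from a single fit, which is why OP-ELM can afford to evaluate every pruning level.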


Neurocomputing | 2010

OP-ELM and OP-KNN in long-term prediction of time series using projected input data

Dušan Sovilj; Antti Sorjamaa; Qi Yu; Yoan Miche; Eric Séverin

Long-term time series prediction is a difficult task, due to the accumulation of errors and the inherent uncertainty of a long-term prediction, which lead to deteriorated estimates of future instances. In order to make accurate predictions, this paper presents a methodology that applies input processing before building the model. Input processing is a necessary step due to the curse of dimensionality, where the aim is to reduce the number of input variables or features. In the paper, we consider the combination of the delta test and a genetic algorithm to obtain two forms of reduction: scaling and projection. After input processing, two fast models are used to make the predictions: the optimally pruned extreme learning machine and the optimally pruned k-nearest neighbors. Both models have fast training times, which makes them a suitable choice for the direct strategy for long-term prediction. The methodology is tested on three different data sets: two time series competition data sets and one financial data set.
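The delta test at the heart of this input processing can be sketched directly: it estimates the noise variance of the output from nearest-neighbour differences, so a scaling that suppresses an irrelevant input lowers its value. This is an illustrative sketch; the genetic-algorithm search over scalings is omitted:

```python
import numpy as np

def delta_test(X, y):
    """Nearest-neighbour estimate of the noise variance of y given inputs X."""
    D = np.sum((X[:, None] - X[None, :]) ** 2, axis=2)
    np.fill_diagonal(D, np.inf)                 # exclude each point itself
    nn = np.argmin(D, axis=1)                   # nearest neighbour in input space
    return float(0.5 * np.mean((y[nn] - y) ** 2))

# Two inputs, but only the first one matters for the output
rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(300, 2))
y = np.sin(3 * X[:, 0]) + 0.05 * rng.normal(size=300)

dt_all = delta_test(X, y)                           # both inputs kept
dt_scaled = delta_test(X * np.array([1.0, 0.0]), y) # irrelevant input scaled away
```

A search (here, the paper uses a genetic algorithm) over such scalings can therefore discover which inputs to keep, shrink, or drop before any model is trained.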


EURASIP Journal on Information Security | 2009

Reliable Steganalysis Using a Minimum Set of Samples and Features

Yoan Miche; Patrick Bas; Amaury Lendasse; Christian Jutten; Olli Simula

This paper proposes to determine a sufficient number of images for reliable classification and to use feature selection to select the most relevant features for achieving reliable steganalysis. First, dimensionality issues in the context of classification are outlined, and the impact of the different parameters of a steganalysis scheme (the number of samples, the number of features, the steganography method, and the embedding rate) is studied. On the one hand, it is shown, using bootstrap simulations, that the standard deviation of the classification results can be very large if the training set is too small; moreover, a minimum of 5000 images is needed in order to perform reliable steganalysis. On the other hand, we show how the feature selection process using the OP-ELM classifier enables both reducing the dimensionality of the data and highlighting the weaknesses and advantages of the six most popular steganographic algorithms.
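The bootstrap argument can be illustrated with a sketch: measure the standard deviation of a classifier's test accuracy across bootstrap training samples of different sizes. A 1-NN classifier on synthetic data stands in here for the paper's steganalysis setup (all names and data are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def make_data(n):
    """Two-class synthetic data: class mean shifted by 1 in each of 5 dimensions."""
    labels = rng.integers(0, 2, size=n)
    return rng.normal(size=(n, 5)) + labels[:, None], labels

X_test, y_test = make_data(200)    # fixed held-out evaluation set
X_pool, y_pool = make_data(1000)   # pool from which bootstrap training sets are drawn

def bootstrap_accuracy_std(n_train, n_boot=100):
    """Std of 1-NN test accuracy across bootstrap training sets of a given size."""
    accs = []
    for _ in range(n_boot):
        idx = rng.choice(len(y_pool), size=n_train, replace=True)
        Xt, yt = X_pool[idx], y_pool[idx]
        D = np.sum((X_test[:, None] - Xt[None]) ** 2, axis=2)  # test-vs-train distances
        accs.append(np.mean(yt[np.argmin(D, axis=1)] == y_test))
    return float(np.std(accs))

std_small = bootstrap_accuracy_std(n_train=20)
std_large = bootstrap_accuracy_std(n_train=400)
```

The spread of accuracies shrinks as the training set grows, which is the paper's basis for recommending a minimum corpus size before any steganalysis result is trusted.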


International Conference on Artificial Neural Networks | 2013

Extreme learning machine: a robust modeling technique? yes!

Amaury Lendasse; Anton Akusok; Olli Simula; Francesco Corona; Mark van Heeswijk; Emil Eirola; Yoan Miche

This paper describes the original (basic) Extreme Learning Machine (ELM). Properties such as robustness and sensitivity to variable selection are studied. Several extensions of the original ELM are then presented and compared. Firstly, the Tikhonov-Regularized Optimally-Pruned Extreme Learning Machine (TROP-ELM) is summarized as an improvement of the Optimally-Pruned Extreme Learning Machine (OP-ELM), in the form of an L2 regularization penalty applied within the OP-ELM. Secondly, a Methodology to Linearly Ensemble ELM (ELM-ELM) is presented in order to improve the performance of the original ELM. These methodologies (TROP-ELM and ELM-ELM) are tested against state-of-the-art methods such as Support Vector Machines and Gaussian Processes, as well as the original ELM and OP-ELM, on ten different data sets. A specific experiment testing the sensitivity of these methodologies to variable selection is also presented.
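The L2 penalty at the core of TROP-ELM amounts to replacing the plain least-squares output weights with a ridge (Tikhonov) solution. A minimal sketch, with illustrative names rather than the paper's implementation:

```python
import numpy as np

def ridge_output_weights(H, y, lam):
    """Tikhonov-regularised output weights: solve (H'H + lam*I) beta = H'y."""
    n = H.shape[1]
    return np.linalg.solve(H.T @ H + lam * np.eye(n), H.T @ y)

rng = np.random.default_rng(0)
H = rng.normal(size=(50, 5))       # stand-in for hidden-layer outputs
beta_true = np.arange(1.0, 6.0)
y = H @ beta_true                  # noise-free targets for the demo

beta_small = ridge_output_weights(H, y, lam=1e-8)   # ~ plain least squares
beta_big = ridge_output_weights(H, y, lam=100.0)    # heavily shrunk weights
```

Larger `lam` shrinks the weights toward zero, stabilising the solve when hidden neurons are nearly collinear, which is exactly the failure mode the regularized variant targets.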


Workshop on Self-Organizing Maps | 2009

Sparse Linear Combination of SOMs for Data Imputation: Application to Financial Database

Antti Sorjamaa; Francesco Corona; Yoan Miche; Paul Merlin; Bertrand Maillet; Eric Séverin; Amaury Lendasse

This paper presents a new methodology for missing value imputation in a database. The methodology combines the outputs of several Self-Organizing Maps in order to obtain an accurate filling for the missing values. The maps are combined using MultiResponse Sparse Regression and the Hannan-Quinn Information Criterion. The new combination methodology removes the need for any lengthy cross-validation procedure, thus speeding up the computation significantly. Furthermore, the accuracy of the filling is improved, as demonstrated in the experiments.


Ambient Intelligence | 2009

Efficient Parallel Feature Selection for Steganography Problems

Alberto Guillén; Antti Sorjamaa; Yoan Miche; Amaury Lendasse; Ignacio Rojas

The steganography problem consists of identifying images that hide a secret message which cannot be seen by visual inspection. This problem is becoming more and more important nowadays, since the World Wide Web contains a large number of images, any of which may be carrying a secret message. The task is therefore to design a classifier able to separate the genuine images from the non-genuine ones. The main obstacle, however, is that a large number of variables are extracted from each image, and the high dimensionality makes feature selection mandatory in order to design an accurate classifier. This paper presents a new efficient parallel feature selection algorithm based on the Forward-Backward Selection algorithm. The results show how the parallel implementation makes it possible to obtain better feature subsets, allowing the classifiers to be more accurate.

Collaboration


Dive into Yoan Miche's collaborations.

Top Co-Authors

Antti Sorjamaa, Helsinki University of Technology
Olli Simula, Helsinki University of Technology
Christian Jutten, Centre national de la recherche scientifique