Publication


Featured research published by Eric Séverin.


Neurocomputing | 2014

Bankruptcy prediction using Extreme Learning Machine and financial expertise

Qi Yu; Yoan Miche; Eric Séverin; Amaury Lendasse

Bankruptcy prediction has been widely studied as a binary classification problem based on financial ratios. In this paper, the Leave-One-Out-Incremental Extreme Learning Machine (LOO-IELM) is explored for this task. LOO-IELM operates incrementally, avoiding inefficient and unnecessary calculations, and stops automatically once the appropriate number of neurons, which is unknown a priori, is reached. Moreover, a Combo method and a further Ensemble model are investigated, built from different LOO-IELM models and specific financial indicators chosen, using different strategies, according to financial expertise. The entire process performs well at very high speed, and also helps to interpret the model and the selected ratios.
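The incremental-growth-with-LOO idea can be sketched with a plain ELM hidden layer and the closed-form PRESS statistic. This is an illustrative reconstruction, not the authors' LOO-IELM code; the helper names (`press_loo_mse`, `loo_ielm`) and the patience-based stopping rule are assumptions:

```python
import numpy as np

def press_loo_mse(H, y):
    """Closed-form PRESS leave-one-out MSE for the linear readout y ~ H @ beta."""
    P = H @ np.linalg.pinv(H)          # hat (projection) matrix
    resid = y - P @ y
    loo_resid = resid / (1.0 - np.diag(P))
    return float(np.mean(loo_resid ** 2))

def loo_ielm(X, y, max_neurons=100, patience=5, seed=0):
    """Grow a sigmoid hidden layer one random neuron at a time, keep the model
    with the lowest LOO error, and stop after `patience` steps without
    improvement (an illustrative stopping rule)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    H = np.ones((n, 1))                # bias column
    best_err, best_H, stall = np.inf, H, 0
    for _ in range(max_neurons):
        w, b = rng.normal(size=d), rng.normal()
        neuron = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        H = np.hstack([H, neuron[:, None]])
        err = press_loo_mse(H, y)
        if err < best_err:
            best_err, best_H, stall = err, H, 0
        else:
            stall += 1
            if stall >= patience:
                break
    beta = np.linalg.pinv(best_H) @ y  # output weights of the selected model
    return best_H.shape[1] - 1, beta, best_err
```

Because PRESS gives the leave-one-out error in closed form, no retraining is needed when a neuron is added, which is what makes the incremental search cheap.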


Neural Networks | 2014

Long-term time series prediction using OP-ELM

Alexander Grigorievskiy; Yoan Miche; Anne-Mari Ventelä; Eric Séverin; Amaury Lendasse

In this paper, an Optimally Pruned Extreme Learning Machine (OP-ELM) is applied to the problem of long-term time series prediction. Three known strategies for long-term time series prediction, i.e. Recursive, Direct and DirRec, are considered in combination with OP-ELM and compared with a baseline linear least-squares model and Least-Squares Support Vector Machines (LS-SVM). Among these three strategies, DirRec is the most time-consuming, and its usage with nonlinear models like LS-SVM, where several hyperparameters need to be adjusted, leads to relatively heavy computations. It is shown that OP-ELM, although also a nonlinear model, allows reasonable computational time for the DirRec strategy. In all our experiments except one, OP-ELM with the DirRec strategy outperforms the linear model with any strategy. In contrast to the proposed algorithm, LS-SVM behaves unstably without variable selection. It is also shown that there is no superior strategy for OP-ELM: any of the three can be the best. In addition, the prediction accuracy of an ensemble of OP-ELMs is studied, and it is shown that averaging the predictions of the ensemble can improve the accuracy (Mean Square Error) dramatically.
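The Recursive and Direct strategies the comparison rests on can be illustrated with a simple linear autoregressive learner (the paper uses OP-ELM and LS-SVM as the underlying models, and the DirRec hybrid is omitted here); all function names are illustrative:

```python
import numpy as np

def fit_linear(X, y):
    """Least-squares fit with an intercept column."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return np.linalg.lstsq(Xb, y, rcond=None)[0]

def predict(w, x):
    return float(np.append(x, 1.0) @ w)

def recursive_forecast(series, lag, horizon):
    """Recursive strategy: one one-step-ahead model, fed its own predictions."""
    X = np.array([series[i:i + lag] for i in range(len(series) - lag)])
    w = fit_linear(X, series[lag:])
    window = list(series[-lag:])
    out = []
    for _ in range(horizon):
        yhat = predict(w, np.array(window))
        out.append(yhat)
        window = window[1:] + [yhat]   # slide the window over the prediction
    return np.array(out)

def direct_forecast(series, lag, horizon):
    """Direct strategy: a separate model per horizon step h, trained on y[t+h]."""
    out = []
    for h in range(horizon):
        X = np.array([series[i:i + lag] for i in range(len(series) - lag - h)])
        w = fit_linear(X, series[lag + h:])
        out.append(predict(w, series[-lag:]))
    return np.array(out)
```

Recursive reuses one model but accumulates its own errors; Direct avoids error feedback at the cost of training `horizon` models, which is why training speed of the base learner matters.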


Decision Support Systems | 2011

Predicting corporate bankruptcy using a self-organizing map: An empirical study to improve the forecasting horizon of a financial failure model

Philippe du Jardin; Eric Séverin

The aim of this study is to show how a Kohonen map can be used to increase the forecasting horizon of a financial failure model. Indeed, most prediction models fail to forecast the occurrence of failure accurately beyond one year, and their accuracy tends to fall as the prediction horizon recedes. We therefore propose a new way of using a Kohonen map to improve model reliability. Our results demonstrate that the generalization error achieved with a Kohonen map remains stable over the period studied, unlike that of the other methods traditionally used for this kind of task, such as discriminant analysis, logistic regression, neural networks and survival analysis.
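A Kohonen map itself fits in a few lines of NumPy. The sketch below is a generic sequential SOM trainer, not the failure-model construction of the paper; the grid size, learning-rate and neighborhood schedules are illustrative choices:

```python
import numpy as np

def train_som(data, grid=(4, 4), epochs=20, lr0=0.5, sigma0=2.0, seed=0):
    """Train a minimal Kohonen self-organizing map with exponentially
    decaying learning rate and neighborhood radius."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    W = rng.normal(size=(rows * cols, data.shape[1]))          # prototypes
    coords = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
    n_steps = epochs * len(data)
    t = 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            lr = lr0 * np.exp(-t / n_steps)
            sigma = sigma0 * np.exp(-t / n_steps)
            bmu = np.argmin(((W - x) ** 2).sum(axis=1))        # best-matching unit
            d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)     # squared grid distance
            h = np.exp(-d2 / (2 * sigma ** 2))                 # neighborhood kernel
            W += lr * h[:, None] * (x - W)                     # pull units toward x
            t += 1
    return W, coords

def bmu_index(W, x):
    """Index of the prototype closest to x."""
    return int(np.argmin(((W - x) ** 2).sum(axis=1)))
```

The neighborhood kernel is what makes nearby grid units specialize on nearby regions of input space, giving the topology-preserving maps used throughout these papers.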


Neurocomputing | 2014

Ensemble delta test-extreme learning machine (DT-ELM) for regression

Qi Yu; Mark van Heeswijk; Yoan Miche; Rui Nian; Bo He; Eric Séverin; Amaury Lendasse

The extreme learning machine (ELM) has shown good performance in regression applications at very high speed, but it remains difficult to trade off generalization performance against the complexity of the ELM (the number of hidden nodes). This paper proposes a method called Delta Test-ELM (DT-ELM), which operates incrementally to create less complex ELM structures and determines the number of hidden nodes automatically. It uses the Bayesian Information Criterion (BIC) as well as the Delta Test (DT) to restrict the search, taking the size of the network into account and preventing overfitting. Moreover, ensemble modeling is applied over different DT-ELM models and shows good results in the experiments.


Neurocomputing | 2010

OPELM and OPKNN in long-term prediction of time series using projected input data

Dušan Sovilj; Antti Sorjamaa; Qi Yu; Yoan Miche; Eric Séverin

Long-term time series prediction is a difficult task due to the accumulation of errors and the inherent uncertainty of long-term prediction, which lead to deteriorating estimates of future instances. In order to make accurate predictions, this paper presents a methodology that processes the inputs before building the model. Input processing is a necessary step due to the curse of dimensionality, where the aim is to reduce the number of input variables or features. In the paper, we consider the combination of the delta test and a genetic algorithm to obtain two kinds of reduction: scaling and projection. After input processing, two fast models are used to make the predictions: the optimally pruned extreme learning machine and optimally pruned k-nearest neighbors. Both models have fast training times, which makes them a suitable choice for the direct strategy of long-term prediction. The methodology is tested on three data sets: two time series competition data sets and one financial data set.
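The delta test that drives the variable-selection step has a simple nearest-neighbour form: it estimates the output-noise variance from how much the output changes between each point and its closest neighbour in input space. A minimal sketch (the genetic-algorithm search over scalings and projections from the paper is omitted):

```python
import numpy as np

def delta_test(X, y):
    """Delta test: half the mean squared output difference between each
    sample and its nearest neighbour in the input space X."""
    D = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)  # pairwise distances
    np.fill_diagonal(D, np.inf)                              # exclude self-matches
    nn = D.argmin(axis=1)                                    # nearest-neighbour index
    return 0.5 * np.mean((y - y[nn]) ** 2)
```

A variable subset that keeps the relevant inputs yields a delta-test value close to the noise variance, while a subset of irrelevant inputs yields a value close to the output variance, so comparing subsets by this score selects informative variables.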


Neurocomputing | 2015

SOM-ELM: Self-Organized Clustering using ELM

Yoan Miche; Anton Akusok; David Veganzones; Kaj-Mikael Björk; Eric Séverin; Philippe du Jardin; Maite Termenon; Amaury Lendasse

This paper presents two new clustering techniques based on Extreme Learning Machine (ELM). These clustering techniques can incorporate a priori knowledge (of an expert) to define the optimal structure for the clusters, i.e. the number of points in each cluster. Using ELM, the first proposed clustering problem formulation can be rewritten as a Traveling Salesman Problem and solved by a heuristic optimization method. The second proposed clustering problem formulation includes both a priori knowledge and a self-organization based on a predefined map (or string). The clustering methods are successfully tested on 5 toy examples and 2 real datasets.


Neurocomputing | 2010

Self organizing maps in corporate finance: Quantitative and qualitative analysis of debt and leasing

Eric Séverin

This article deals with the usefulness of self-organizing maps in corporate finance. The application of neural networks improves bankruptcy forecasting by revealing a relationship between capital structure and corporate performance. Our results suggest the pertinence of the Kohonen algorithm applied to qualitative variables, and allow us to question scoring models. In a larger framework, the Kohonen methodology gives a better perception of the factors able to explain the financing of leasing; the objective of our research here is to explain the factors behind the choice between leasing and bank loans.


Workshop on Self-Organizing Maps | 2009

Sparse Linear Combination of SOMs for Data Imputation: Application to Financial Database

Antti Sorjamaa; Francesco Corona; Yoan Miche; Paul Merlin; Bertrand Maillet; Eric Séverin; Amaury Lendasse

This paper presents a new methodology for missing value imputation in a database. The methodology combines the outputs of several Self-Organizing Maps in order to obtain an accurate filling for the missing values. The maps are combined using MultiResponse Sparse Regression and the Hannan-Quinn Information Criterion. The new combination methodology removes the need for any lengthy cross-validation procedure, thus speeding up the computation significantly. Furthermore, the accuracy of the filling is improved, as demonstrated in the experiments.
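The single-map core of SOM-based imputation, matching a partial record against the prototypes on its observed components only, can be sketched as follows. The paper's actual contribution, combining several maps via sparse regression, is not shown, and `som_impute` is an illustrative name:

```python
import numpy as np

def som_impute(W, x):
    """Fill the NaN entries of x with the corresponding components of the
    best-matching SOM prototype, where matching uses only the observed
    (non-NaN) dimensions of x."""
    obs = ~np.isnan(x)
    bmu = np.argmin(((W[:, obs] - x[obs]) ** 2).sum(axis=1))  # match on observed dims
    filled = x.copy()
    filled[~obs] = W[bmu, ~obs]                               # copy missing dims from BMU
    return filled
```

Because each prototype is a complete vector, any trained map can propose values for the missing entries; the combination step then weights several such proposals.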


Neurocomputing | 2015

MD-ELM

Anton Akusok; David Veganzones; Yoan Miche; Kaj-Mikael Björk; Philippe du Jardin; Eric Séverin; Amaury Lendasse

This paper proposes a methodology for identifying data samples that are likely to be mislabeled in a c-class classification problem (dataset). The methodology relies on the assumption that the generalization error of a model learned from the data decreases if the label of a mislabeled sample is changed to its correct class. The general classification model used in the paper is OP-ELM, which also provides a fast way to estimate the generalization error via PRESS Leave-One-Out. The method is tested on two toy datasets, as well as on real-life datasets, for one of which expert knowledge was sought about the identified potential mislabels.
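The core heuristic, flip a label and check whether the PRESS leave-one-out error drops, can be sketched with an ordinary least-squares readout on one-hot targets. The paper uses OP-ELM features and a randomized search, so this exhaustive single-flip loop is a deliberate simplification with illustrative names:

```python
import numpy as np

def press_loo_sse(H, Y):
    """PRESS leave-one-out squared error for the least-squares readout Y ~ H @ B."""
    P = H @ np.linalg.pinv(H)                 # hat matrix
    R = Y - P @ Y                             # training residuals
    loo = (R.T / (1.0 - np.diag(P))).T        # per-sample LOO residuals
    return float((loo ** 2).sum())

def suspect_mislabels(H, y, n_classes):
    """Flag samples (index, proposed class) whose single label flip lowers
    the LOO error relative to the original labeling."""
    Y = np.eye(n_classes)[y]
    base = press_loo_sse(H, Y)
    suspects = []
    for i in range(len(y)):
        for c in range(n_classes):
            if c == y[i]:
                continue
            Y2 = Y.copy()
            Y2[i] = np.eye(n_classes)[c]      # try relabeling sample i as class c
            if press_loo_sse(H, Y2) < base:
                suspects.append((i, c))
                break
    return suspects
```

Since PRESS is closed-form, each candidate flip costs one matrix evaluation rather than a full retraining, which is what makes the search over flips practical.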


Advances in Artificial Neural Systems | 2010

OP-KNN: method and applications

Qi Yu; Yoan Miche; Antti Sorjamaa; Alberto Guillén; Amaury Lendasse; Eric Séverin

This paper presents a methodology named Optimally Pruned K-Nearest Neighbors (OP-KNN) which has the advantage of competing with state-of-the-art methods while remaining fast. It builds a single-hidden-layer feedforward neural network using K-Nearest Neighbors as kernels to perform regression. Multiresponse Sparse Regression (MRSR) is used to rank each kth nearest neighbor, and finally Leave-One-Out estimation is used to select the optimal number of neighbors and to estimate the generalization performance. Since the computational time of this method is small, the paper presents a strategy that uses OP-KNN to perform variable selection, tested successfully on eight real-life data sets from different application fields. In summary, the most significant characteristic of this method is that it provides good performance and a comparatively simple model at extremely high learning speed.
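The leave-one-out selection step that OP-KNN builds on can be sketched for plain KNN regression, where excluding each point from its own neighbour list gives the LOO prediction for free (the MRSR neighbour-ranking step is omitted; `knn_loo_select` is an illustrative name):

```python
import numpy as np

def knn_loo_select(X, y, k_max=15):
    """Choose the number of neighbours k for KNN regression by minimizing
    the leave-one-out mean squared error."""
    D = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)  # pairwise distances
    np.fill_diagonal(D, np.inf)            # excluding the point itself = LOO
    order = np.argsort(D, axis=1)          # neighbours sorted by distance
    errs = []
    for k in range(1, k_max + 1):
        pred = y[order[:, :k]].mean(axis=1)  # LOO prediction with k neighbours
        errs.append(np.mean((pred - y) ** 2))
    best_k = int(np.argmin(errs)) + 1
    return best_k, errs[best_k - 1]
```

For KNN the LOO estimate requires no model refitting at all, which is why the whole selection loop stays fast enough to be embedded inside a variable-selection search.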

Collaboration


Dive into Eric Séverin's collaborations.

Top Co-Authors

Yoan Miche (Helsinki University of Technology)
Dominique Dufour (University of Nice Sophia Antipolis)
Philippe Luu (University of Nice Sophia Antipolis)
Antti Sorjamaa (Helsinki University of Technology)
Yves Mard (University of Auvergne)