Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Sven Serneels is active.

Publication


Featured research published by Sven Serneels.


Computational Statistics & Data Analysis | 2008

Principal component regression for data containing outliers and missing elements

Sven Serneels; Tim Verdonck

Two approaches are presented to perform principal component analysis (PCA) on data which contain both outlying cases and missing elements. First, an eigendecomposition of a covariance matrix that can deal with such data is proposed, but this approach is not suited to data where the number of variables exceeds the number of cases. Alternatively, an expectation robust (ER) algorithm is proposed to adapt the existing methodology for robust PCA to data containing missing elements. According to an extensive simulation study, the ER approach performs well for all data sizes considered. Using simulations and an example, it is shown that by virtue of the ER algorithm, the properties of the existing methods for robust PCA carry over to data with missing elements.
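
The ER idea described above can be sketched in a few lines of NumPy. This is an illustrative simplification, not the authors' algorithm: case weights are computed once from robust distances to the columnwise medians (the cutoff constant `c` is an arbitrary choice here), and only the imputation of the missing elements is iterated against the current PCA fit.

```python
import numpy as np

def er_pca_sketch(X, n_components=2, n_iter=25, c=2.5):
    """Toy ER-style PCA for data with outliers and missing elements.
    E-step: refill missing entries from the current low-rank fit.
    Robustness: fixed case weights from distances to column medians."""
    X = np.asarray(X, dtype=float)
    miss = np.isnan(X)
    med = np.nanmedian(X, axis=0)
    Xf = np.where(miss, med, X)                      # robust initial fill
    # downweight cases lying far from the columnwise medians
    d = np.linalg.norm(Xf - med, axis=1)
    w = np.minimum(1.0, c * np.median(d) / (d + 1e-12))
    for _ in range(n_iter):
        mu = np.average(Xf, axis=0, weights=w)
        Xc = Xf - mu
        # weighted PCA via SVD of the weighted, centered data
        _, _, Vt = np.linalg.svd(np.sqrt(w)[:, None] * Xc,
                                 full_matrices=False)
        V = Vt[:n_components].T
        fit = mu + (Xc @ V) @ V.T
        Xf = np.where(miss, fit, X)                  # E-step: refill missing
    return mu, V, w

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 5))
X[::7, 1] = np.nan        # scatter some missing elements
X[0] += 15.0              # one clearly outlying case
mu, V, w = er_pca_sketch(X)
```

The outlying case receives a small weight, so it barely influences the fitted subspace, while its missing entry is still imputed from that fit.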


Reference Module in Chemistry, Molecular Sciences and Chemical Engineering: Comprehensive Chemometrics: Chemical and Biochemical Data Analysis | 2009

Robust Multivariate Methods in Chemometrics

Peter Filzmoser; Sven Serneels; Ricardo A. Maronna; P.J. Van Espen

This chapter presents an introduction to robust statistics with applications of a chemometric nature. Following a description of the basic ideas and concepts behind robust statistics, including how robust estimators can be conceived, the chapter builds up to the construction (and use) of robust alternatives to some multivariate analysis methods frequently used in chemometrics, such as principal component analysis and partial least squares. The chapter then provides insight into how these robust methods can be used or extended for classification. To conclude, the validation of the results is addressed: it is shown how uncertainty statements associated with robust estimates can be obtained.


Chemometrics and Intelligent Laboratory Systems | 2015

Sparse partial robust M regression

Irene Hoffmann; Sven Serneels; Peter Filzmoser; Christophe Croux

Sparse partial robust M regression is introduced as a new regression method. It is the first dimension reduction and regression algorithm that yields estimates with a partial least squares-like interpretability that are sparse and robust with respect to both vertical outliers and leverage points. A simulation study underpins these claims. Real data examples illustrate the validity of the approach.


GfKl | 2006

Robust Multivariate Methods: The Projection Pursuit Approach

Peter Filzmoser; Sven Serneels; Christophe Croux; Pierre J. Van Espen

Projection pursuit was originally introduced to identify structures in multivariate data clouds (Huber, 1985). The idea of projecting data onto a low-dimensional subspace can also be applied to multivariate statistical methods. The robustness of the methods is achieved by applying robust estimators in the lower-dimensional space. Robust estimation in high dimensions can thus be avoided, which usually results in faster computation. Moreover, flat data sets, where the number of variables is much higher than the number of observations, can more easily be analyzed in a robust way.
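
A minimal NumPy sketch of the projection pursuit idea for a first robust principal component, under the common heuristic (an assumption here, not necessarily the authors' exact choice) of using the centered, normed data points as candidate directions and the median absolute deviation (MAD) as the robust scale to maximize:

```python
import numpy as np

def pp_first_component(X):
    """Projection-pursuit sketch: among candidate directions (the
    centered, normed data points), pick the one that maximizes a
    robust scale (the MAD) of the projected data. Illustrative only."""
    X = np.asarray(X, dtype=float)
    Xc = X - np.median(X, axis=0)          # robust centering
    norms = np.linalg.norm(Xc, axis=1)
    nz = norms > 0
    cand = Xc[nz] / norms[nz][:, None]     # candidate unit directions
    proj = Xc @ cand.T                     # projections onto each candidate
    mad = np.median(np.abs(proj - np.median(proj, axis=0)), axis=0)
    return cand[np.argmax(mad)]

rng = np.random.default_rng(1)
# bulk of the data varies mostly along the first axis ...
X = rng.normal(size=(100, 4)) * np.array([5.0, 1.0, 1.0, 1.0])
X[:3] += np.array([0.0, 0.0, 0.0, 40.0])   # ... plus a few outliers off it
a = pp_first_component(X)
```

Because the MAD ignores the few extreme points, the selected direction follows the bulk's main axis rather than the outliers, which is exactly the robustness the projection pursuit approach buys.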


Journal of Chemometrics | 2016

Sparse and robust PLS for binary classification

Irene Hoffmann; Peter Filzmoser; Sven Serneels; Kurt Varmuza

Partial robust M regression (PRM), as well as its sparse counterpart sparse PRM, have been reported to be regression methods that offer a partial least squares-like interpretation while having good robustness and efficiency properties, as well as a low computational cost. In this paper, the partial robust M discriminant analysis classifier is introduced, which consists of dimension reduction through an algorithm closely related to PRM and a consecutive robust discriminant analysis in the latent variable space. The method is further generalized to sparse partial robust M discriminant analysis by introducing a sparsity penalty on the estimated direction vectors. Thereby, an intrinsic variable selection is achieved, which yields a better graphical interpretation of the results, as well as more precise coefficient estimates, in case the data contain uninformative variables. Both methods are robust against leverage points within each class, as well as against adherence outliers (points that have been assigned a wrong class label). A simulation study investigates the effect of outliers, wrong class labels, and uninformative variables on the proposed methods and their classical PLS counterparts and corroborates the robustness and sparsity claims. The utility of the methods is demonstrated on data from mass spectrometry analysis (time-of-flight secondary ion mass spectrometry) of meteorite samples.


29th Annual Conference of the German Classification Society, March 9-11, 2005, Otto von Guericke University Magdeburg, Magdeburg, Germany | 2006

The partial robust M-approach

Sven Serneels; Christophe Croux; Peter Filzmoser; Pierre J. Van Espen

The PLS approach is a widely used technique to estimate path models relating various blocks of variables measured on the same population. It is frequently applied in the social sciences and in economics. In these applications, deviations from normality and outliers may occur, leading to a loss of efficiency or even biased results. In the current paper, a robust path model estimation technique is proposed: the partial robust M (PRM) approach. Its benefits are illustrated in an example.
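
The core building block of the PRM approach, robustifying a PLS fit by iteratively reweighting cases according to their residuals, can be sketched for a one-component PLS1 regression. The Fair-type weight function and the tuning constant `c` below are illustrative assumptions; this is not the published estimator:

```python
import numpy as np

def prm1_sketch(X, y, n_iter=20, c=4.0):
    """Toy one-component partial robust M-style regression:
    PLS1 on case-weighted data, with weights recomputed from
    the residuals at each iteration."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    w = np.ones(len(y))
    for _ in range(n_iter):
        mx = np.average(X, axis=0, weights=w)
        my = np.average(y, weights=w)
        Xc, yc = X - mx, y - my
        sw = np.sqrt(w)
        # PLS1 direction: weighted cross-covariance of X and y
        a = (sw[:, None] * Xc).T @ (sw * yc)
        a /= np.linalg.norm(a)
        t = Xc @ a                                   # latent scores
        slope = (w * t) @ yc / ((w * t) @ t)         # weighted LS on the score
        beta = a * slope
        intercept = my - mx @ beta
        r = y - (X @ beta + intercept)
        s = np.median(np.abs(r)) / 0.6745 + 1e-12    # robust residual scale
        w = 1.0 / (1.0 + np.abs(r / (c * s)))        # Fair-type weights
    return beta, intercept, w

rng = np.random.default_rng(2)
X = rng.normal(size=(80, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + 0.1 * rng.normal(size=80)
y[:5] += 30.0                                        # vertical outliers
beta, b0, w = prm1_sketch(X, y)
```

After a few iterations the vertical outliers end up with small weights, so the coefficient estimates are driven by the clean majority of the cases.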


Statistics and Computing | 2018

Outlyingness: Which variables contribute most?

Michiel Debruyne; Sebastiaan Höppner; Sven Serneels; Tim Verdonck

Outlier detection is an inevitable step in most statistical data analyses. However, the mere detection of an outlying case does not always answer all scientific questions associated with that data point. Outlier detection techniques, classical and robust alike, will typically flag the entire case as outlying, or attribute a specific case weight to the entire case. In practice, particularly in high-dimensional data, the outlier will most likely not be outlying along all of its variables, but only along a subset of them. If so, the scientific question of why the case has been flagged as an outlier becomes of interest. In this article, a fast and efficient method is proposed to detect the variables that contribute most to an outlier's outlyingness. Thereby, it helps the analyst understand why an outlier lies out. The approach pursued in this work is to estimate the univariate direction of maximal outlyingness. It is shown that the problem of estimating that direction can be rewritten as the normed solution of a classical least squares regression problem. Identifying the subset of variables contributing most to outlyingness can thus be achieved by estimating the associated least squares problem in a sparse manner. From a practical perspective, sparse partial least squares (SPLS) regression, preferably via the fast sparse NIPALS (SNIPLS) algorithm, is suggested to tackle that problem. The proposed methodology is shown to perform well both on simulated data and in real-life examples.
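
To make the quantity in question concrete: the Stahel-Donoho-style outlyingness of a case along a unit direction is its robustly standardized projection. The paper's contribution is to find the maximizing direction via a least squares reformulation; the brute-force random-direction scan below only illustrates what is being maximized, and is not the proposed method:

```python
import numpy as np

def max_outlyingness_direction(X, i, n_dir=2000, seed=0):
    """Approximate the direction along which case i is most outlying
    by scanning random unit directions and scoring
    |a'x_i - med(a'X)| / mad(a'X). Illustrative brute force only."""
    X = np.asarray(X, dtype=float)
    rng = np.random.default_rng(seed)
    A = rng.normal(size=(n_dir, X.shape[1]))
    A /= np.linalg.norm(A, axis=1, keepdims=True)    # random unit directions
    P = X @ A.T                                      # (n, n_dir) projections
    med = np.median(P, axis=0)
    mad = np.median(np.abs(P - med), axis=0) + 1e-12
    out = np.abs(P[i] - med) / mad                   # outlyingness of case i
    best = np.argmax(out)
    return A[best], out[best]

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 5))
X[0, 2] += 12.0           # case 0 lies out mainly along variable 2
a, o = max_outlyingness_direction(X, 0)
```

The recovered direction loads almost entirely on the variable responsible for the outlyingness, which is exactly the diagnostic information a sparse estimate of this direction provides.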


Chemometrics and Intelligent Laboratory Systems | 2007

TOMCAT: A MATLAB toolbox for multivariate calibration techniques

M. Daszykowski; Sven Serneels; Krzysztof Kaczmarek; Piet Van Espen; Christophe Croux; B. Walczak


Chemometrics and Intelligent Laboratory Systems | 2005

Partial robust M-regression

Sven Serneels; Christophe Croux; Peter Filzmoser; Pierre J. Van Espen


Chemometrics and Intelligent Laboratory Systems | 2005

Robust continuum regression

Sven Serneels; Peter Filzmoser; Christophe Croux; Pierre J. Van Espen

Collaboration


Dive into Sven Serneels's collaborations.

Top Co-Authors

Christophe Croux (Katholieke Universiteit Leuven)

Peter Filzmoser (Vienna University of Technology)

Tim Verdonck (Katholieke Universiteit Leuven)

Irene Hoffmann (Vienna University of Technology)

Peter Leoni (Katholieke Universiteit Leuven)

B. Walczak (University of Silesia in Katowice)