Elia Liitiäinen
Helsinki University of Technology
Publications
Featured research published by Elia Liitiäinen.
International Work-Conference on Artificial and Natural Neural Networks | 2007
Elia Liitiäinen; Amaury Lendasse; Francesco Corona
The residual variance estimation problem is well known in statistics and machine learning, with many applications, for example in nonlinear modelling. In this paper, we show that the problem can be formulated in a general supervised learning context. Emphasis is on two widely used non-parametric techniques known as the Delta test and the Gamma test. Under some regularity assumptions, a novel proof of convergence of the two estimators is given, and the estimators are subsequently verified and compared on two meaningful case studies.
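The Delta test mentioned above admits a very short implementation: it estimates the noise variance from the squared output difference between each point and its nearest neighbour in input space. A minimal numpy sketch on illustrative toy data (not the paper's experiments):

```python
import numpy as np

def delta_test(X, y):
    """Delta test: estimate the residual (noise) variance from (X, y) pairs.

    Averages 0.5 * (y_{NN(i)} - y_i)^2 over all points, where NN(i) is
    the nearest neighbour of x_i in input space; this converges to the
    noise variance as the sample size grows.
    """
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    # Pairwise squared distances in input space.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    np.fill_diagonal(d2, np.inf)      # a point is not its own neighbour
    nn = d2.argmin(axis=1)            # index of the first nearest neighbour
    return 0.5 * np.mean((y[nn] - y) ** 2)

rng = np.random.default_rng(0)
X = rng.uniform(size=(2000, 1))
sigma = 0.1
y = np.sin(2 * np.pi * X[:, 0]) + sigma * rng.normal(size=2000)
print(delta_test(X, y))  # close to sigma**2 = 0.01
```

The O(n^2) distance matrix keeps the sketch short; for large samples one would use a spatial index (e.g. a k-d tree) instead.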
Neural Processing Letters | 2008
Elia Liitiäinen; Francesco Corona; Amaury Lendasse
In this paper, the problem of residual variance estimation is examined. The problem is analyzed in a general setting that covers non-additive heteroscedastic noise under non-iid sampling. To address the estimation problem, we suggest a method based on nearest neighbor graphs and discuss its convergence properties under the assumption of a Hölder continuous regression function. The universality of the estimator makes it an ideal tool in problems with little prior knowledge available.
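One simple member of this family of nearest-neighbour graph estimators multiplies the output differences to the first and second nearest neighbours; because the noise at three distinct sample points is uncorrelated, the product form remains consistent even when the noise level depends on the input. A sketch under these assumptions (the paper analyses the exact construction and its convergence rates):

```python
import numpy as np

def nn_residual_variance(X, y):
    """Residual-variance estimate from a nearest-neighbour graph.

    Averages (y_i - y_{1NN(i)}) * (y_i - y_{2NN(i)}) over all points,
    which tolerates heteroscedastic (input-dependent) noise.
    """
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    np.fill_diagonal(d2, np.inf)
    order = d2.argsort(axis=1)
    nn1, nn2 = order[:, 0], order[:, 1]   # first and second nearest neighbours
    return float(np.mean((y - y[nn1]) * (y - y[nn2])))

rng = np.random.default_rng(1)
X = rng.uniform(size=(2500, 1))
noise_std = 0.1 * (1.0 + X[:, 0])   # noise level varies with the input
y = np.sin(2 * np.pi * X[:, 0]) + noise_std * rng.normal(size=2500)
print(nn_residual_variance(X, y))   # close to E[noise_std**2] ~= 0.0233
```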
Neurocomputing | 2009
Elia Liitiäinen; Michel Verleysen; Francesco Corona; Amaury Lendasse
The problem of residual variance estimation consists of estimating the best possible generalization error obtainable by any model based on a finite sample of data. Even though it is a natural generalization of linear correlation, residual variance estimation in its general form has attracted relatively little attention in machine learning. In this paper, we examine four different residual variance estimators and analyze their properties both theoretically and experimentally to better understand their applicability in machine learning problems. The theoretical treatment differs from previous work by being based on a general formulation of the problem that also covers heteroscedastic noise, in contrast to earlier analyses, which concentrate on homoscedastic and additive noise. In the second part of the paper, we demonstrate practical applications in input and model structure selection. The experimental results show that using residual variance estimators in these tasks gives good results, often with reduced computational complexity, and that the nearest neighbor estimators are simple and easy to implement.
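The input-selection application works by scoring candidate feature subsets with a residual variance estimate and keeping the subset with the lowest estimated noise floor. A toy sketch using the Delta test as the score (the dataset and the exhaustive search are illustrative, not the paper's experiments):

```python
import itertools
import numpy as np

def delta_test(X, y):
    # Delta test: residual-variance estimate from first-nearest-neighbour
    # output differences.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    np.fill_diagonal(d2, np.inf)
    nn = d2.argmin(axis=1)
    return 0.5 * np.mean((y[nn] - y) ** 2)

rng = np.random.default_rng(2)
n = 1000
X = rng.uniform(size=(n, 3))
# Only the first input actually drives the output; inputs 1 and 2 are noise.
y = np.sin(2 * np.pi * X[:, 0]) + 0.05 * rng.normal(size=n)

# Score every non-empty subset of the three inputs and keep the best.
scores = {}
for k in range(1, 4):
    for subset in itertools.combinations(range(3), k):
        scores[subset] = delta_test(X[:, subset], y)

best = min(scores, key=scores.get)
print(best)  # expected: (0,), the only relevant input
```

Irrelevant inputs inflate nearest-neighbour distances along the relevant coordinate, which raises the Delta test value; the minimum is therefore attained at the informative subset.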
Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences | 2008
Elia Liitiäinen; Amaury Lendasse; Francesco Corona
In this paper, bounds on the mean power-weighted nearest neighbour distance are derived. Previous work concentrates mainly on the infinite sample limit, whereas our bounds hold for any sample size. The results are expected to be of importance, for example in statistical physics, non-parametric statistics and computational geometry, where they are related to the structure of matter as well as properties of statistical estimators and random graphs.
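The quantity bounded in the paper, the mean power-weighted nearest-neighbour distance, is easy to examine empirically. The sketch below (a Monte Carlo illustration, not the paper's bounds) computes it for uniform samples in the unit square, where it is known to decay roughly like n**(-p/d):

```python
import numpy as np

def mean_nn_distance_pow(X, p):
    """Mean power-weighted nearest-neighbour distance: (1/n) * sum_i d_i^p,
    where d_i is the distance from x_i to its nearest neighbour."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    np.fill_diagonal(d2, np.inf)
    return float(np.mean(d2.min(axis=1) ** (p / 2.0)))

rng = np.random.default_rng(3)
d, p = 2, 1.0
means = {}
for n in (200, 800, 3200):
    X = rng.uniform(size=(n, d))
    means[n] = mean_nn_distance_pow(X, p)
    print(n, means[n])  # decays as the sample grows, roughly like n**(-p/d)
```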
International Workshop on Machine Learning for Signal Processing | 2009
Elia Liitiäinen; Amaury Lendasse; Francesco Corona
Estimating entropies is important in many fields including statistical physics, machine learning and statistics. While the Shannon logarithmic entropy is the most fundamental, other Rényi entropies are also of importance. In this paper, we derive a bias corrected estimator for a subset of Rényi entropies. The advantage of the estimator is demonstrated via theoretical and experimental considerations.
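The paper's bias-corrected estimator is not reproduced here; to illustrate the quantity being estimated, and the bias that such corrections target, the sketch below uses a naive histogram plug-in for the differential Rényi entropy H_q = (1/(1-q)) * log(∫ f(x)^q dx):

```python
import numpy as np

def renyi_entropy_histogram(x, q, bins=20):
    """Naive histogram plug-in for the differential Renyi entropy of order q.

    Approximates the density by p_i / h on each bin (h = bin width), so
    integral f^q dx ~= h**(1-q) * sum p_i**q.  Simple but biased, which
    is exactly what bias-corrected estimators aim to fix.
    """
    counts, edges = np.histogram(x, bins=bins)
    p = counts / counts.sum()
    h = edges[1] - edges[0]
    p = p[p > 0]
    return float(np.log(h) + np.log((p ** q).sum()) / (1.0 - q))

rng = np.random.default_rng(4)
x = rng.uniform(size=5000)   # uniform on [0, 1]: true H_q = 0 for every q
print(renyi_entropy_histogram(x, q=2.0))  # close to 0
```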
Neural Processing Letters | 2011
Elia Liitiäinen; Francesco Corona; Amaury Lendasse
In this paper, the effect of dimensionality on the supervised learning of infinitely differentiable regression functions is analyzed. By invoking the Van Trees lower bound, we prove lower bounds on the generalization error with respect to the number of samples and the dimensionality of the input space, both in a linear and a non-linear context. It is shown that in non-linear problems without prior knowledge, the curse of dimensionality is a serious problem. At the same time, we speculate, counter-intuitively, that supervised learning sometimes becomes plausible in the asymptotic limit of infinite dimensionality.
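The curse of dimensionality referred to above is easy to observe with a simple nearest-neighbour regressor: at a fixed sample size, nearest-neighbour distances grow with the input dimension, and the prediction error grows with them. A Monte Carlo sketch (an illustration of the phenomenon, not of the paper's lower bounds):

```python
import numpy as np

def one_nn_mse(d, n_train=500, n_test=200):
    """Test MSE of 1-nearest-neighbour regression on f(x) = sum(x) over [0,1]^d."""
    rng = np.random.default_rng(d)       # deterministic draw per dimension
    Xtr = rng.uniform(size=(n_train, d))
    Xte = rng.uniform(size=(n_test, d))
    ytr, yte = Xtr.sum(axis=1), Xte.sum(axis=1)
    d2 = ((Xte[:, None, :] - Xtr[None, :, :]) ** 2).sum(axis=-1)
    pred = ytr[d2.argmin(axis=1)]        # predict with the nearest training output
    return float(np.mean((pred - yte) ** 2))

errs = {d: one_nn_mse(d) for d in (1, 2, 4, 8)}
print(errs)  # the error grows rapidly with dimension at a fixed sample size
```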
IFAC Proceedings Volumes | 2010
Francesco Corona; Elia Liitiäinen; Amaury Lendasse; Roberto Baratti; Lorenzo Sassu
The Delaunay tessellation and topological regression is a local simplex method for multivariate calibration. The method, developed within computational geometry, has potential for applications in online analytical chemistry and process monitoring. This study proposes a novel approach to performing prediction and extrapolation with the Delaunay calibration method. The main property of the proposed extension is that the estimated regression function remains continuous also outside the calibration domain. To support the presentation, an application to estimating the aromatic composition in Light Cycle Oil by Near Infrared spectroscopy is discussed.
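The baseline Delaunay calibration idea, triangulate the calibration inputs and interpolate linearly (barycentrically) inside each simplex, is available off the shelf in scipy as `LinearNDInterpolator`. The sketch below (illustrative data, not the LCO application) also shows the limitation the paper's extension addresses: the plain method returns NaN outside the convex hull of the calibration domain, i.e. it cannot extrapolate.

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator

rng = np.random.default_rng(5)
X = rng.uniform(size=(400, 2))   # calibration inputs in [0, 1]^2
y = X[:, 0] ** 2 + X[:, 1]       # calibration responses

# LinearNDInterpolator builds a Delaunay tessellation of X internally and
# interpolates linearly within each simplex.
model = LinearNDInterpolator(X, y)

inside = model(0.5, 0.5)    # inside the calibration domain: ~0.75
outside = model(1.5, 1.5)   # outside the convex hull: NaN (no extrapolation)
print(inside, outside)
```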
Computer-aided chemical engineering | 2009
Francesco Corona; Elia Liitiäinen; Amaury Lendasse; Roberto Baratti; Lorenzo Sassu
The Delaunay tessellation and topological regression is a local simplex method for multivariate calibration. The method, developed within computational geometry, has potential for applications in analytical chemistry and process monitoring. This study investigates the applicability of the method for estimating the aromatic composition in Light Cycle Oil (LCO) by Near Infrared (NIR) spectroscopy.
International Conference on Artificial Neural Networks | 2006
Elia Liitiäinen; Amaury Lendasse
State-space models offer a powerful modelling tool for time series prediction. However, as most algorithms are not optimized for long-term prediction, it may be hard to achieve good prediction results. In this paper, we investigate Gaussian linear regression filters for parameter estimation in state-space models and we propose new long-term prediction strategies. Experiments using the EM-algorithm for training of nonlinear state-space models show that significant improvements are possible with no additional computational cost.
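The long-term prediction setting can be illustrated with the simplest state-space model, a scalar linear-Gaussian one, where the filter and the iterated multi-step prediction are exact (a stand-in for the paper's nonlinear models and EM training; the model and its parameters below are illustrative):

```python
import numpy as np

# Scalar linear-Gaussian state-space model:
#   x_t = a * x_{t-1} + w_t,  w_t ~ N(0, q)   (state transition)
#   y_t = x_t + v_t,          v_t ~ N(0, r)   (observation)
a, q, r = 0.9, 0.1, 0.2

rng = np.random.default_rng(6)
T = 200
x = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    x[t] = a * x[t - 1] + np.sqrt(q) * rng.normal()
    y[t] = x[t] + np.sqrt(r) * rng.normal()

# Kalman filter over the observed series: mean m and variance P.
m, P = 0.0, 1.0
for t in range(T):
    m, P = a * m, a * a * P + q          # predict
    K = P / (P + r)                      # Kalman gain
    m = m + K * (y[t] - m)               # update with the new observation
    P = (1.0 - K) * P

# Long-term prediction: iterate the prediction step with no further updates.
horizon_var = []
for h in range(50):
    m, P = a * m, a * a * P + q
    horizon_var.append(P)

# The predictive variance grows with the horizon and converges to the
# stationary value q / (1 - a**2) of the state process.
print(horizon_var[0], horizon_var[-1], q / (1 - a * a))
```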
European Journal of Engineering Education | 2017
Belle Selene Xia; Elia Liitiäinen
The benefits of using online exercises have been analysed in terms of distance learning, automatic assessment and self-regulated learning. In this study, we did not find a directly proportional relationship between student performance in the course exercises that use online technologies and the exam grades. We see that the average submission rate to these online exercises is not positively correlated with the exercise points. Yet, our results confirm that doing the exercises alone supports student learning and skill accumulation, equipping students with programming knowledge. While student performance in programming courses is affected by factors such as prior background in programming, cognitive skills and the quality of teaching, completing the course exercises via learning-by-doing is an indispensable part of teaching. Based on the student feedback from the course survey, the students are highly satisfied with using online technologies as part of learning.