Publication


Featured research published by Álvaro Barbero.


Journal of Neuroengineering and Rehabilitation | 2010

Biased feedback in brain-computer interfaces

Álvaro Barbero; Moritz Grosse-Wentrup

Even though feedback is considered to play an important role in learning how to operate a brain-computer interface (BCI), to date no significant influence of feedback design on BCI performance has been reported in the literature. In this work, we adapt a standard motor-imagery BCI paradigm to study how BCI performance is affected by biasing the belief subjects have in their level of control over the BCI system. Our findings indicate that subjects already capable of operating a BCI are impeded by inaccurate feedback, while subjects normally performing at or close to chance level may actually benefit from an incorrect belief about their performance level. Our results imply that optimal feedback design in BCIs should take into account a subject's current skill level.


European Conference on Machine Learning | 2008

On the Equivalence of the SMO and MDM Algorithms for SVM Training

Jorge López; Álvaro Barbero; José R. Dorronsoro

SVM training is usually discussed from two different algorithmic points of view. The first is provided by decomposition methods such as SMO and SVMLight, while the second encompasses geometric methods that try to solve a Nearest Point Problem (NPP), the Gilbert-Schlesinger-Kozinec (GSK) and Mitchell-Demyanov-Malozemov (MDM) algorithms being the most representative ones. In this work we show that both approaches are essentially coincident. More precisely, we show that a slight modification of SMO, in which at each iteration both updating multipliers correspond to patterns in the same class, solves NPP and, moreover, that this modification coincides with an extended MDM algorithm. Besides this, we also propose a new way to apply the MDM algorithm to NPP problems over reduced convex hulls.


International Conference on Artificial Neural Networks | 2013

Group Fused Lasso

Carlos M. Alaíz; Álvaro Barbero; José R. Dorronsoro

We introduce the Group Total Variation (GTV) regularizer, a modification of Total Variation that uses the l2,1 norm instead of the l1 norm to deal with multidimensional features. When used as the only regularizer, GTV can be applied jointly with iterative convex optimization algorithms such as FISTA. This requires computing its proximal operator, which we derive using a dual formulation. GTV can also be combined with a Group Lasso (GL) regularizer, leading to what we call the Group Fused Lasso (GFL), whose proximal operator can be computed by combining the GTV and GL proximals through Dykstra's algorithm. We illustrate how to apply GFL to strongly structured but ill-posed regression problems, as well as the use of GTV to denoise colour images.
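As a concrete piece of this machinery, the GL proximal operator is the standard block soft-thresholding map, one half of the Dykstra combination described above. The sketch below is a minimal illustration (the grouping and the value of lam are made up), not the paper's GFL solver:

```python
import numpy as np

def prox_group_lasso(x, groups, lam):
    """Proximal operator of lam * sum_g ||x_g||_2 (block soft-thresholding).

    Each group's sub-vector is shrunk toward zero by lam, and is set
    entirely to zero when its Euclidean norm falls below lam.
    """
    out = np.zeros_like(x, dtype=float)
    for g in groups:
        v = x[g]
        norm = np.linalg.norm(v)
        if norm > lam:
            out[g] = (1.0 - lam / norm) * v
    return out

# The second group's norm is below lam, so it is zeroed; the first is shrunk.
x = np.array([3.0, 4.0, 0.1, 0.1])
res = prox_group_lasso(x, [[0, 1], [2, 3]], lam=1.0)
```

The group with norm 5 is scaled by (1 - 1/5), while the small group is annihilated, which is exactly the feature-group selection effect GL-type penalties are used for.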


International Symposium on Neural Networks | 2011

Momentum Sequential Minimal Optimization: An accelerated method for Support Vector Machine training

Álvaro Barbero; José R. Dorronsoro

Sequential Minimal Optimization (SMO) can be regarded as the state-of-the-art approach in non-linear Support Vector Machine training, being the method of choice in the successful LIBSVM software. Its optimization procedure is based on updating only a couple of the problem coefficients per iteration, until convergence. In this paper we observe that this strategy can be interpreted as finding the sparsest yet most useful updating direction per iteration. We present a modification of SMO that includes a new approximate momentum term in the updating direction, capturing information from previous updates, and show that this term presents a trade-off between sparsity and suitability of the chosen direction. We show how this novelty provides substantial savings in practice in SMO's number of iterations to convergence, without noticeably increasing its cost per iteration. We study when this saving in iterates can result in reduced SVM training times, and the behavior of this new technique when combined with caching and shrinking strategies.
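The momentum idea itself can be illustrated on a plain quadratic with the classic heavy-ball update. This is only a generic sketch of why a momentum term cuts iteration counts on ill-conditioned problems, not the approximate momentum term of the paper:

```python
import numpy as np

def heavy_ball(A, b, beta, tol=1e-8, max_iter=10000):
    """Gradient descent with a classic momentum (heavy-ball) term on
    f(x) = 0.5 x^T A x - b^T x; beta = 0 recovers plain gradient descent.

    Returns the solution and the number of iterations to reach tol.
    """
    x = np.zeros_like(b)
    prev = x.copy()
    step = 1.0 / np.linalg.norm(A, 2)   # safe step size: 1 / largest eigenvalue
    for k in range(max_iter):
        grad = A @ x - b
        if np.linalg.norm(grad) < tol:
            return x, k
        # momentum term beta * (x - prev) reuses the previous update direction
        x, prev = x - step * grad + beta * (x - prev), x
    return x, max_iter

# An ill-conditioned quadratic: momentum converges in far fewer iterations.
A = np.diag([1.0, 100.0])
b = np.array([1.0, 1.0])
_, iters_plain = heavy_ball(A, b, beta=0.0)
_, iters_mom = heavy_ball(A, b, beta=0.8)
```

On this example plain gradient descent needs on the order of a thousand iterations while the momentum variant needs a few hundred, mirroring the kind of iteration savings the paper pursues inside SMO.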


Neurocomputing | 2009

Cycle-breaking acceleration of SVM training

Álvaro Barbero; Jorge López; José R. Dorronsoro

Fast SVM training is an important goal for which many proposals have been made in the literature. In this work we study, from a geometrical point of view, the presence in both the Mitchell-Demyanov-Malozemov (MDM) algorithm and Platt's Sequential Minimal Optimization of training cycles, that is, the repeated selection of some concrete updating patterns. We show how to take advantage of these cycles by partially collapsing them into a single updating vector that gives better minimizing directions. We numerically illustrate the resulting procedure, showing that it can lead to substantial savings in the number of iterations and kernel operations for both algorithms.


International Conference on Artificial Neural Networks | 2010

Faster directions for second order SMO

Álvaro Barbero; José R. Dorronsoro

Second order SMO represents the state of the art in SVM training for moderate-size problems. In it, the solution is attained by solving a series of subproblems that are optimized with respect to just a pair of multipliers. In this paper we illustrate how SMO works in a two-stage fashion, first setting the values of the bounded multipliers to the penalty factor C and then proceeding to adjust the non-bounded multipliers. Furthermore, during this second stage the selected pairs for update often appear repeatedly. Taking advantage of this, we propose a procedure to combine previously used descent directions that results in many fewer iterations in this second stage and that may also lead to noticeable savings in kernel operations.


International Conference on Artificial Neural Networks | 2011

Momentum acceleration of least-squares support vector machines

Jorge López; Álvaro Barbero; José R. Dorronsoro

Least-Squares Support Vector Machines (LS-SVMs) have been a successful alternative to classification and regression Support Vector Machines (SVMs), and are used in a wide range of applications. In spite of this, only limited effort has been devoted to designing efficient training algorithms for this class of models, in clear contrast to the vast number of such contributions in the field of classic SVMs. In this work we propose to combine the popular Sequential Minimal Optimization (SMO) method with a momentum strategy that reduces the number of iterations required for convergence while requiring little additional computational effort per iteration, especially in those situations where the standard SMO algorithm for LS-SVMs fails to obtain fast solutions.
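For context, LS-SVM training reduces to a single linear system in Suykens' formulation, which is what direct solvers exploit and what iterative SMO-type methods avoid forming for large data sets. A minimal sketch with an RBF kernel (the data and hyperparameters are illustrative):

```python
import numpy as np

def lssvm_train(X, y, gamma=10.0, sigma=1.0):
    """Train an LS-SVM classifier by solving its KKT linear system directly:
    [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y] (Suykens' formulation)."""
    n = len(y)
    sq = np.sum(X**2, axis=1)
    # RBF (Gaussian) kernel matrix
    K = np.exp(-(sq[:, None] + sq[None, :] - 2 * X @ X.T) / (2 * sigma**2))
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    b, alpha = sol[0], sol[1:]

    def predict(Z):
        sqz = np.sum(Z**2, axis=1)
        Kz = np.exp(-(sqz[:, None] + sq[None, :] - 2 * Z @ X.T) / (2 * sigma**2))
        return np.sign(Kz @ alpha + b)

    return predict

# Toy separable problem: two well-separated clusters.
X = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
y = np.array([-1.0, -1.0, 1.0, 1.0])
pred = lssvm_train(X, y)(X)
```

The dense solve costs O(n^3), which is precisely why SMO-style iterative training, as studied in the paper, matters for large n.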


International Conference of the IEEE Engineering in Medicine and Biology Society | 2009

Implicit Wiener series analysis of epileptic seizure recordings

Álvaro Barbero; Matthias O. Franz; Wim van Drongelen; José R. Dorronsoro; Bernhard Schölkopf; Moritz Grosse-Wentrup

Implicit Wiener series are a powerful tool to build Volterra representations of time series with any degree of non-linearity. A natural question is then whether higher order representations yield more useful models. In this work we shall study this question for ECoG data channel relationships in epileptic seizure recordings, considering whether quadratic representations yield more accurate classifiers than linear ones. To do so we first show how to derive statistical information on the Volterra coefficient distribution and how to construct seizure classification patterns over that information. As our results illustrate, a quadratic model seems to provide no advantages over a linear one. Nevertheless, we shall also show that the interpretability of the implicit Wiener series provides insights into the inter-channel relationships of the recordings.
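A second-order Volterra model can be read as linear regression on linear plus quadratic features of a lagged signal window. The sketch below only illustrates that explicit representation (signal, lags and target are made up); it is not the implicit Wiener series estimator used in the paper:

```python
import numpy as np

def volterra_features(x, lags):
    """Build linear + quadratic (second-order Volterra) features from
    sliding lagged windows of the signal x."""
    n = len(x) - lags + 1
    lin = np.stack([x[i:i + lags] for i in range(n)])   # first-order terms
    idx = np.triu_indices(lags)
    quad = lin[:, idx[0]] * lin[:, idx[1]]              # second-order terms
    return np.hstack([lin, quad])

# Fit a purely quadratic target y_t = x_t * x_{t+1} by least squares:
# it lies exactly in the span of the quadratic features.
rng = np.random.default_rng(0)
x = rng.standard_normal(200)
Phi = volterra_features(x, lags=2)
y = x[1:] * x[:-1]
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
resid = np.linalg.norm(Phi @ w - y)
```

The explicit feature map grows quadratically with the window length, which is why the implicit (kernelized) Wiener series representation discussed in the paper is preferred in practice.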


International Symposium on Neural Networks | 2012

Sparse methods for wind energy prediction

Carlos M. Alaíz; Álvaro Barbero; José R. Dorronsoro

In this work we analyze and apply to the prediction of wind energy some of the best-known regularized linear regression algorithms: Ordinary Least Squares, Ridge Regression and, particularly, Lasso, Group Lasso and Elastic Net, which also seek to impose a certain degree of sparseness on the final models. To achieve this goal, some of them introduce a non-differentiable regularization term that requires special techniques to solve the corresponding optimization problem. Proximal algorithms have recently been introduced precisely to handle this kind of optimization problem, so we briefly review how to apply them in regularized linear regression. Moreover, the proximal method FISTA is used when applying the non-differentiable models to the problem of predicting the global wind energy production in Spain, using as inputs numerical weather forecasts for the entire Iberian Peninsula. Our results show how some of the studied sparsity-inducing models are able to produce a coherent selection of features, attaining performance similar to a baseline model that uses expert information while making use of fewer data features.
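Since FISTA is the workhorse behind the non-differentiable models above, a minimal sketch of FISTA for the plain Lasso may help (the data and the value of lam are illustrative; the wind-energy setup itself is not reproduced):

```python
import numpy as np

def fista_lasso(A, b, lam, n_iter=500):
    """FISTA for min_w 0.5 * ||A w - b||^2 + lam * ||w||_1.

    Each iteration takes a gradient step at an extrapolated point, applies
    the l1 proximal operator (soft-thresholding), then updates Nesterov's
    extrapolation sequence.
    """
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    w = z = np.zeros(A.shape[1])
    t = 1.0
    for _ in range(n_iter):
        g = z - A.T @ (A @ z - b) / L                              # gradient step
        w_new = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # prox step
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        z = w_new + (t - 1) / t_new * (w_new - w)                  # extrapolation
        w, t = w_new, t_new
    return w

# Sparse recovery toy problem: only two of ten features carry signal.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))
w_true = np.zeros(10)
w_true[0], w_true[3] = 2.0, -1.5
b = A @ w_true
w_hat = fista_lasso(A, b, lam=0.1)
```

The recovered `w_hat` concentrates its weight on the two informative features, the same feature-selection behavior the paper reports for the sparsity-inducing wind-energy models.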


Ambient Intelligence | 2009

A Simple Maximum Gain Algorithm for Support Vector Regression

Álvaro Barbero; José R. Dorronsoro

Shevade et al.'s Modification 2 is one of the most widely used algorithms to build Support Vector Regression (SVR) models. It selects as a size-2 working set the index pair giving the maximum KKT violation and combines it with the updating heuristics of Smola and Schölkopf, enforcing at each training iteration a…

Collaboration


Dive into Álvaro Barbero's collaboration.

Top Co-Authors

José R. Dorronsoro | Autonomous University of Madrid

Jorge López | Autonomous University of Madrid

Carlos M. Alaíz | Autonomous University of Madrid