Carlos M. Alaíz
Autonomous University of Madrid
Publications
Featured research published by Carlos M. Alaíz.
international conference on artificial neural networks | 2013
Carlos M. Alaíz; Álvaro Barbero; José R. Dorronsoro
We introduce the Group Total Variation (GTV) regularizer, a modification of Total Variation that uses the ℓ2,1 norm instead of the ℓ1 norm to deal with multidimensional features. When used as the only regularizer, GTV can be applied jointly with iterative convex optimization algorithms such as FISTA. This requires computing its proximal operator, which we derive using a dual formulation. GTV can also be combined with a Group Lasso (GL) regularizer, leading to what we call the Group Fused Lasso (GFL), whose proximal operator can then be computed by combining the GTV and GL proximals through Dykstra's algorithm. We illustrate how to apply GFL to strongly structured but ill-posed regression problems, as well as the use of GTV to denoise colour images.
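The idea of combining two proximal operators through Dykstra's algorithm can be sketched generically. The example below is an illustrative stand-in, not the paper's method: it combines two simple proximals (ℓ1 soft-thresholding and projection onto the nonnegative orthant) whose combined proximal operator has a known closed form, so the result can be checked; the paper instead plugs in the GTV and GL proximals.

```python
import numpy as np

def prox_l1(v, lam):
    """Soft-thresholding: proximal operator of lam * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def prox_nonneg(v):
    """Projection onto the nonnegative orthant (prox of its indicator)."""
    return np.maximum(v, 0.0)

def proximal_dykstra(z, prox_f, prox_g, n_iter=50):
    """Compute prox_{f+g}(z) from the individual proximals of f and g."""
    x = z.copy()
    p = np.zeros_like(z)
    q = np.zeros_like(z)
    for _ in range(n_iter):
        y = prox_f(x + p)
        p = x + p - y
        x = prox_g(y + q)
        q = y + q - x
    return x

z = np.array([2.0, -1.0, 0.5])
lam = 1.0
x = proximal_dykstra(z, lambda v: prox_l1(v, lam), prox_nonneg)
# For f = lam*||.||_1 and g = indicator(x >= 0), the combined proximal
# has the closed form max(z - lam, 0), which the iteration recovers.
```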
Journal of Biomolecular Screening | 2013
Hind Azegrouz; Gopal Karemore; Alberto Torres; Carlos M. Alaíz; Ana M. González; Pedro Nevado; Alvaro Salmerón; Teijo Pellinen; Miguel A. del Pozo; José R. Dorronsoro; María C. Montoya
High-content screening (HCS) allows the exploration of complex cellular phenotypes by automated microscopy and is increasingly being adopted for small interfering RNA genomic screening and phenotypic drug discovery. We introduce a series of cell-based evaluation metrics that have been implemented and validated in a monoparametric HCS for regulators of the membrane trafficking protein caveolin 1 (CAV1) and have also proved useful for the development of a multiparametric phenotypic HCS for regulators of cytoskeletal reorganization. Imaging metrics evaluate imaging quality such as staining and focus, whereas cell biology metrics are fuzzy logic–based evaluators describing complex biological parameters such as sparseness, confluency, and spreading. The evaluation metrics were implemented in a data-mining pipeline, which first filters out cells that do not pass a quality criterion based on the imaging metrics and then uses the cell biology metrics to stratify cell samples, allowing further analysis of homogeneous cell populations. Use of these metrics significantly improved the robustness of the monoparametric assay tested, as revealed by an increase in Z′ factor, Kolmogorov-Smirnov distance, and strict standard mean difference. Cell biology evaluation metrics were also implemented in a novel supervised learning classification method that combines them with phenotypic features in a statistical model that outperformed conventional classification methods, thus improving multiparametric phenotypic assay sensitivity.
international symposium on neural networks | 2012
Carlos M. Alaíz; Álvaro Barbero; José R. Dorronsoro
In this work we analyze, and apply to the prediction of wind energy, some of the best-known regularized linear regression algorithms: Ordinary Least Squares, Ridge Regression and, in particular, Lasso, Group Lasso and Elastic Net, which also seek to impose a certain degree of sparseness on the final models. To achieve this, some of them introduce a non-differentiable regularization term, which requires special techniques to solve the resulting optimization problem. Proximal algorithms have recently been introduced precisely to handle this kind of optimization problem, so we briefly review how to apply them to regularized linear regression. Moreover, the proximal method FISTA is used when applying the non-differentiable models to the problem of predicting the global wind energy production in Spain, using numerical weather forecasts for the entire Iberian Peninsula as inputs. Our results show how some of the studied sparsity-inducing models are able to produce a coherent selection of features, attaining performance similar to that of a baseline model built on expert information while making use of fewer data features.
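The FISTA scheme mentioned above can be sketched for the plain Lasso. This is a minimal illustration; the orthonormal design matrix and data below are toys chosen so that the answer has a closed form, not the wind-energy features from the paper.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista_lasso(A, b, lam, n_iter=200):
    """Minimize 0.5*||A x - b||^2 + lam*||x||_1 with FISTA."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)
        x_new = soft_threshold(y - grad / L, lam / L)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum step
        x, t = x_new, t_new
    return x

# With an orthonormal design the Lasso solution is soft_threshold(b, lam),
# which makes the result easy to verify.
A = np.eye(2)
b = np.array([3.0, 0.5])
x = fista_lasso(A, b, lam=1.0)
```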
international conference on artificial neural networks | 2012
Carlos M. Alaíz; Alberto Torres; José R. Dorronsoro
In this work we will apply sparse linear regression methods to forecast wind farm energy production using numerical weather prediction (NWP) features over several pressure levels, a problem where pattern dimension can become very large. We shall place sparse regression in the context of proximal optimization, which we shall briefly review, and we shall show how sparse methods outperform other models while at the same time shedding light on the most relevant NWP features and on their predictive structure.
international symposium on neural networks | 2015
Carlos M. Alaíz; José R. Dorronsoro
In this paper the Generalized Lasso model of R. Tibshirani is extended to handle multidimensional features (or groups of features) à la Group Lasso, by replacing the ℓ1 norm in the regularizer with the ℓ2,1 norm. The resulting model, called the Generalized Group Lasso (GenGL), contains as particular cases the already known Group Lasso and Group Fused Lasso (GFL), but also new models such as the Graph-Guided Group Fused Lasso and trend filtering for multidimensional features. We show how to solve them efficiently by combining FISTA iterations with the proximal operator of the corresponding regularizer, which we compute using a dual formulation. Moreover, GenGL makes it possible to introduce a new approach to Group Total Variation, the regularizer of GFL, that results in much faster training than previous methods.
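The ℓ2,1 regularizer that these models share has a proximal operator with a simple closed form, blockwise soft-thresholding. A minimal sketch, with illustrative two-dimensional groups:

```python
import numpy as np

def prox_l21(v, groups, lam):
    """Proximal operator of lam * sum_g ||v_g||_2 (blockwise soft-thresholding)."""
    out = np.zeros_like(v)
    for g in groups:
        norm = np.linalg.norm(v[g])
        if norm > lam:                     # groups with norm <= lam are zeroed out
            out[g] = (1.0 - lam / norm) * v[g]
    return out

v = np.array([3.0, 4.0, 0.3, 0.4])
groups = [[0, 1], [2, 3]]
x = prox_l21(v, groups, lam=1.0)
# The first group has norm 5 and is shrunk by the factor 1 - 1/5;
# the second has norm 0.5 < lam and is set entirely to zero.
```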
Archive | 2015
Carlos M. Alaíz; Álvaro Barbero; José R. Dorronsoro
We introduce the Group Total Variation (GTV) regularizer, a modification of Total Variation that uses the ℓ2,1 norm instead of the ℓ1 norm to deal with multidimensional features. When used as the only regularizer, GTV can be applied jointly with iterative convex optimization algorithms such as FISTA. This requires computing its proximal operator, which we derive using a dual formulation. GTV can also be combined with a Group Lasso (GL) regularizer, leading to what we call the Group Fused Lasso (GFL), whose proximal operator can then be computed by combining the GTV and GL proximals through the proximal Dykstra algorithm. We illustrate how to apply GFL to strongly structured but ill-posed regression problems, as well as the use of GTV to denoise colour images.
international conference on artificial neural networks | 2013
Ángela Fernández; Carlos M. Alaíz; Ana M. González; Julia Díaz; José R. Dorronsoro
The prediction and management of wind power ramps is currently receiving considerable attention, as it is a crucial issue for both system operators and wind farm managers. However, the problem is still far from solved, and in this work we address it as a classification problem, working with delay vectors of the wind power time series and applying local Mahalanobis K-NN search with metrics derived from Anisotropic Diffusion methods. The resulting procedures clearly outperform a random baseline method and yield good sensitivity, but more work is needed to improve specificity and, hence, precision.
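The classification setup described (delay vectors of the series, K-NN under a Mahalanobis metric) can be sketched roughly as follows. The toy series, labels and parameters are illustrative, and the metric here is estimated from the plain sample covariance rather than the Anisotropic Diffusion construction used in the paper.

```python
import numpy as np

def delay_embed(series, d):
    """Stack length-d delay vectors of a time series, one per row."""
    return np.array([series[i:i + d] for i in range(len(series) - d + 1)])

def mahalanobis_knn_predict(X_train, y_train, X_test, k=3):
    """K-NN classification with a Mahalanobis metric fit on the training data."""
    cov = np.cov(X_train, rowvar=False) + 1e-6 * np.eye(X_train.shape[1])
    VI = np.linalg.inv(cov)                # inverse covariance defines the metric
    preds = []
    for x in X_test:
        diff = X_train - x
        dists = np.einsum("ij,jk,ik->i", diff, VI, diff)   # squared distances
        nearest = np.argsort(dists)[:k]
        preds.append(np.bincount(y_train[nearest]).argmax())   # majority vote
    return np.array(preds)

X = delay_embed(np.arange(10.0), 3)        # 8 delay vectors of dimension 3
# Toy labels: class 1 ("ramp") for the later, larger delay vectors.
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])
pred = mahalanobis_knn_predict(X, y, X[[0, 7]])
```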
CAEPIA'11 Proceedings of the 14th international conference on Advances in artificial intelligence: spanish association for artificial intelligence | 2011
Carlos M. Alaíz; José R. Dorronsoro
In Echo State Networks (ESN) and, more generally, the Reservoir Computing paradigm (a recent approach to recurrent neural networks), the linear readout weights, i.e., the linear output weights, are the only ones actually learned during training. The standard approach for this is SVD-based pseudo-inverse linear regression. Here it is compared with two well-known on-line filters, Least Mean Squares (LMS) and Recursive Least Squares (RLS). As we illustrate, while LMS performance is not satisfactory, RLS can be a good on-line alternative that deserves further attention.
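The RLS filter favored by these results can be sketched for a generic linear readout. The synthetic regression target below is illustrative; in an ESN the inputs would be the reservoir states.

```python
import numpy as np

def rls_fit(U, d, lam=1.0, delta=1e-4):
    """Recursive Least Squares: update readout weights one sample at a time."""
    n = U.shape[1]
    w = np.zeros(n)
    P = np.eye(n) / delta                  # large initial inverse-correlation matrix
    for u, target in zip(U, d):
        k = P @ u / (lam + u @ P @ u)      # gain vector
        w = w + k * (target - w @ u)       # correct with the a-priori error
        P = (P - np.outer(k, u @ P)) / lam # update inverse correlation; lam = forgetting
    return w

rng = np.random.default_rng(0)
U = rng.standard_normal((200, 2))
d = 2.0 * U[:, 0] - U[:, 1]                # exact linear target: optimal w = (2, -1)
w = rls_fit(U, d)
```

With no forgetting (lam = 1) this converges to the ordinary least-squares readout, up to the small regularization implied by the initial P.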
Neurocomputing | 2018
Carlos M. Alaíz; Johan A. K. Suykens
This work proposes a new algorithm for training a re-weighted L2 Support Vector Machine (SVM), inspired by the re-weighted Lasso algorithm of Candès et al. and by the equivalence between Lasso and SVM recently shown by Jaggi. In particular, the margin required for each training vector is set independently, defining a new weighted SVM model. These weights are chosen to be binary and are automatically adapted during the training of the model, resulting in a variation of the Frank-Wolfe optimization algorithm with essentially the same computational complexity as the original. As shown experimentally, this algorithm is computationally cheaper to apply since it requires fewer iterations to converge, and it produces models with a sparser representation in terms of support vectors that are more stable with respect to the selection of the regularization hyper-parameter.
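The Frank-Wolfe scheme that the algorithm builds on can be sketched on a generic simplex-constrained quadratic (the SVM dual also lives on a simplex). The objective below is an illustrative toy with a known optimum, not the weighted-SVM dual from the paper.

```python
import numpy as np

def frank_wolfe_simplex(grad_f, x0, n_iter=100, tol=1e-10):
    """Frank-Wolfe over the unit simplex with exact line search (for f with identity Hessian)."""
    x = x0.copy()
    for _ in range(n_iter):
        g = grad_f(x)
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0              # linear minimization oracle: best simplex vertex
        d = s - x
        gap = -g @ d                       # Frank-Wolfe duality gap
        if gap < tol:
            break
        # Exact line search, valid here since f(x) = 0.5*||x - c||^2.
        gamma = min(1.0, max(0.0, -(g @ d) / (d @ d)))
        x = x + gamma * d
    return x

c = np.array([0.2, 0.8])                   # the optimum lies inside the simplex
x = frank_wolfe_simplex(lambda x: x - c, np.array([1.0, 0.0]))
```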
Neurocomputing | 2018
Alberto Torres-Barrán; Carlos M. Alaíz; José R. Dorronsoro
Many important linear sparse models have at their core the Lasso problem, for which the GLMNet algorithm is often considered the current state of the art. Recently, M. Jaggi observed that the Constrained Lasso (CL) can be reduced to an SVM-like problem, for which the LIBSVM library provides very efficient algorithms; this suggests that LIBSVM could also be used advantageously to solve CL. In this work we refine Jaggi's arguments to reduce CL, as well as the constrained Elastic Net, to a Nearest Point Problem, which in turn can be rewritten as an appropriate ν-SVM problem solvable by LIBSVM. We also show experimentally that the well-known LIBSVM library converges faster than GLMNet for small problems and, if properly adapted, for larger ones as well. Screening is another ingredient used to speed up solving the Lasso; shrinking can be seen as SVM's simpler counterpart to screening, and we discuss how it too may in some cases reduce the cost of an SVM-based CL solution.