
Publication


Featured research published by José R. Dorronsoro.


European Conference on Machine Learning | 2008

On the Equivalence of the SMO and MDM Algorithms for SVM Training

Jorge López; Álvaro Barbero; José R. Dorronsoro

SVM training is usually discussed from two different algorithmic points of view. The first is provided by decomposition methods such as SMO and SVMLight, while the second encompasses geometric methods that try to solve a Nearest Point Problem (NPP), the Gilbert-Schlesinger-Kozinec (GSK) and Mitchell-Demyanov-Malozemov (MDM) algorithms being the most representative ones. In this work we show that both approaches are essentially coincident. More precisely, we show that a slight modification of SMO in which, at each iteration, both updating multipliers correspond to patterns in the same class solves NPP and, moreover, that this modification coincides with an extended MDM algorithm. We also propose a new way to apply the MDM algorithm to NPP problems over reduced convex hulls.
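As a rough illustration of the geometric side of this family of methods (not the paper's own algorithm), a Gilbert-style iteration for the minimum-norm point of a convex hull, the basic problem that GSK/MDM-type SVM training addresses, can be sketched as follows; the function name and tolerance are our own choices:

```python
# Sketch of a Gilbert-style iteration for the nearest-point (minimum-norm)
# problem over a convex hull, the geometric formulation behind GSK/MDM-type
# SVM training. Illustrative only; not the modified SMO/MDM of the paper.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gilbert_min_norm(points, iters=100):
    """Approximate argmin ||w|| over the convex hull of `points`."""
    w = list(points[0])
    for _ in range(iters):
        # Support point: the vertex most opposed to the current iterate.
        v = min(points, key=lambda p: dot(w, p))
        # Optimality check: w cannot be improved over the hull.
        if dot(w, w) - dot(w, v) <= 1e-12:
            break
        # Exact line search on the segment [w, v], clipped to [0, 1].
        d = [wi - vi for wi, vi in zip(w, v)]
        lam = min(1.0, max(0.0, dot(w, d) / dot(d, d)))
        w = [(1 - lam) * wi + lam * vi for wi, vi in zip(w, v)]
    return w

# Hull of (1, 0) and (0, 1): the minimum-norm point is (0.5, 0.5).
w = gilbert_min_norm([(1.0, 0.0), (0.0, 1.0)])
```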


Neurocomputing | 2009

Finding optimal model parameters by deterministic and annealed focused grid search

Álvaro Barbero Jiménez; Jorge López Lázaro; José R. Dorronsoro

Finding optimal model parameters is usually a crucial task in engineering applications of classification and modelling. The exponential cost of linear search on a parameter grid of a given precision rules it out in all but the simplest problems, so random algorithms such as uniform design or the covariance matrix adaptation evolution strategy (CMA-ES) are usually applied. In this work we present two focused grid search (FGS) alternatives in which one repeatedly zooms into more concentrated sets of discrete grid points in the parameter search space. The first, deterministic FGS (DFGS), is much faster than standard search, although still too costly in problems with a large number of parameters. The second, annealed FGS (AFGS), is a random version of DFGS in which a fixed fraction of grid points is randomly selected and examined. As we show numerically over several classification problems for multilayer perceptrons and support vector machines, DFGS and AFGS are competitive with CMA-ES, one of the most successful evolutionary black-box optimizers. The choice of a concrete technique may thus rest on other considerations, and the simplicity and essentially parameter-free nature of both DFGS and AFGS may make them worthwhile alternatives to the thorough theoretical and experimental background of CMA-ES.
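The zooming idea can be sketched in a few lines; this is a minimal DFGS-like loop in one dimension, with a toy objective standing in for a cross-validation error surface, and the grid size and zoom schedule being our own illustrative choices:

```python
# Minimal sketch of a deterministic focused grid search: evaluate a coarse
# grid, then repeatedly zoom a finer grid around the best point found.
# Illustrative only; grid size and zoom schedule are not the paper's.

def focused_grid_search(f, lo, hi, n_points=9, n_zooms=6):
    best_x = None
    for _ in range(n_zooms):
        step = (hi - lo) / (n_points - 1)
        grid = [lo + i * step for i in range(n_points)]
        best_x = min(grid, key=f)
        # Zoom: the next, finer grid is centered on the best point so far.
        lo, hi = best_x - step, best_x + step
    return best_x

# Toy objective with its minimum at x = 3.3.
x_star = focused_grid_search(lambda x: (x - 3.3) ** 2, 0.0, 10.0)
```

Each zoom shrinks the search interval by a constant factor, so the cost grows only linearly in the number of zooms rather than exponentially in the target precision.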


IEEE Transactions on Neural Networks | 2012

Simple Proof of Convergence of the SMO Algorithm for Different SVM Variants

Jorge López; José R. Dorronsoro

In this brief, we give a new proof of the asymptotic convergence of the sequential minimal optimization (SMO) algorithm for both the most violating pair and second order rules for selecting the pair of coefficients to be updated. The proof is more self-contained, shorter, and simpler than previous ones and has a different flavor, partially building upon Gilbert's original convergence proof of his algorithm for solving the minimum norm problem for convex hulls. It is valid for both support vector classification (SVC) and support vector regression, which are formulated under a general problem that encompasses them. Moreover, this general problem can be further extended to also cover other support vector machine (SVM) related problems such as ν-SVC or one-class SVMs, while the convergence proof of the slight variant of SMO needed for them remains basically unchanged.
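For readers unfamiliar with the most violating pair rule the proof covers, it can be sketched in the standard SVC dual notation: among the coefficients still free to move up or down within the box [0, C], pick the pair with the largest gap in y_i times the gradient. Kernel and update details are omitted here; this is only the selection step:

```python
# Sketch of most-violating-pair (MVP) working-set selection in the SVC dual.
# alpha: dual coefficients, y: labels in {-1, +1}, grad: dual gradient,
# C: box constraint. Only the selection rule; the coefficient update and
# kernel computations are omitted for brevity.

def most_violating_pair(alpha, y, grad, C):
    # Indices whose coefficient can still move "up" / "down" in the dual.
    up = [i for i in range(len(y))
          if (y[i] == 1 and alpha[i] < C) or (y[i] == -1 and alpha[i] > 0)]
    low = [i for i in range(len(y))
           if (y[i] == 1 and alpha[i] > 0) or (y[i] == -1 and alpha[i] < C)]
    i = max(up, key=lambda k: -y[k] * grad[k])   # most "over" the optimum
    j = min(low, key=lambda k: -y[k] * grad[k])  # most "under" the optimum
    return i, j

# Toy state: the first and second coefficients form the violating pair.
pair = most_violating_pair([0.0, 0.5, 0.5], [1, 1, -1], [-1.0, -0.2, 0.3], C=1.0)
```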


Neurocomputing | 2016

Hybrid machine learning forecasting of solar radiation values

Yvonne Gala; Ángela Fernández; Julia Díaz; José R. Dorronsoro

The constant expansion of solar energy has made the accurate forecasting of radiation an important issue. In this work we apply Support Vector Regression (SVR), Gradient Boosted Regression (GBR) and Random Forest Regression (RFR), as well as a hybrid method that combines them, to downscale and improve the 3-h accumulated radiation forecasts provided by Numerical Weather Prediction (NWP) systems for seven locations in Spain. We either use direct 3-h aggregated radiation forecasts or first build global accumulated daily predictions and disaggregate them into 3-h values, with both approaches outperforming the base NWP forecasts. We also show how to disaggregate the 3-h forecasts into hourly values using interpolation based on clear sky (CS) theoretical and experimental radiation models; the disaggregated forecasts are again better than the base NWP ones, with empirical CS interpolation yielding the best results. Besides providing ample background on a problem that offers many opportunities for the Machine Learning (ML) community, our study shows that ML methods or, more generally, hybrid artificial intelligence systems are quite effective and, hence, relevant for solar radiation prediction.
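The clear-sky disaggregation idea is easy to sketch: each hour of a 3-h block receives a share of the accumulated forecast proportional to its clear-sky radiation. The numbers and function name below are our own toy illustration, not the paper's CS models:

```python
# Toy sketch of clear-sky-based disaggregation: split a 3-h accumulated
# radiation forecast into hourly values proportionally to a clear-sky
# profile. Values are illustrative, not the paper's CS models.

def disaggregate(block_total, clear_sky_hours):
    total_cs = sum(clear_sky_hours)
    return [block_total * cs / total_cs for cs in clear_sky_hours]

# A 3-h block of 90 units under a rising clear-sky profile.
hourly = disaggregate(90.0, [100.0, 200.0, 300.0])  # -> [15.0, 30.0, 45.0]
```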


IEEE Transactions on Neural Networks | 1998

A nonlinear discriminant algorithm for feature extraction and data classification

C. Santa Cruz; José R. Dorronsoro

This paper presents a nonlinear supervised feature extraction algorithm that combines Fisher's criterion function with a preliminary perceptron-like nonlinear projection of vectors in pattern space. Its main motivation is to combine the approximation properties of multilayer perceptrons (MLPs) with the target-free nature of Fisher's classical discriminant analysis. Although MLPs provide good classifiers for many problems, some situations, such as unequal class sizes with a high degree of pattern mixing, may make the construction of good MLP classifiers difficult. In these instances, the features extracted by our procedure can be more effective. After describing its construction and analyzing its complexity, we illustrate its use on a synthetic problem with the above characteristics.
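The Fisher criterion the algorithm builds on can be stated for a one-dimensional projection z = w · x of two classes: the ratio of between-class to within-class scatter, larger values meaning better separation. A minimal sketch of the criterion itself (not the paper's nonlinear projection):

```python
# Minimal sketch of the Fisher criterion for 1-D projected samples of two
# classes: squared distance between class means over the sum of within-class
# scatters. Illustrative only; the paper adds a nonlinear projection stage.

def fisher_ratio(z_class1, z_class2):
    m1 = sum(z_class1) / len(z_class1)
    m2 = sum(z_class2) / len(z_class2)
    s1 = sum((z - m1) ** 2 for z in z_class1)
    s2 = sum((z - m2) ** 2 for z in z_class2)
    return (m1 - m2) ** 2 / (s1 + s2)

# Well-separated projections score much higher than overlapping ones.
good = fisher_ratio([0.0, 1.0], [10.0, 11.0])
bad = fisher_ratio([0.0, 10.0], [1.0, 11.0])
```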


International Conference on Artificial Neural Networks | 2013

Group Fused Lasso

Carlos M. Alaíz; Álvaro Barbero; José R. Dorronsoro

We introduce the Group Total Variation (GTV) regularizer, a modification of Total Variation that uses the l2,1 norm instead of the l1 norm to deal with multidimensional features. When used as the only regularizer, GTV can be applied jointly with iterative convex optimization algorithms such as FISTA. This requires computing its proximal operator, which we derive using a dual formulation. GTV can also be combined with a Group Lasso (GL) regularizer, leading to what we call the Group Fused Lasso (GFL), whose proximal operator can be computed by combining the GTV and GL proximals through Dykstra's algorithm. We illustrate how to apply GFL in strongly structured but ill-posed regression problems, as well as the use of GTV to denoise colour images.
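Of the two proximals combined through Dykstra's algorithm, the GL one has a well-known closed form: blockwise soft-thresholding, which shrinks each feature group toward zero as a whole. A minimal sketch (the GTV proximal, derived in the paper via a dual formulation, is not reproduced here):

```python
# Sketch of the Group Lasso proximal operator: blockwise soft-thresholding.
# For each group g, prox of lam * ||v_g||_2 scales v_g by (1 - lam/||v_g||)+.
# One of the two building blocks combined via Dykstra's algorithm for GFL.

import math

def prox_group_lasso(groups, lam):
    out = []
    for g in groups:
        norm = math.sqrt(sum(x * x for x in g))
        scale = max(0.0, 1.0 - lam / norm) if norm > 0 else 0.0
        out.append([scale * x for x in g])
    return out

# A large group is shrunk; a small group is zeroed out entirely.
shrunk = prox_group_lasso([[3.0, 4.0], [0.1, 0.0]], lam=2.5)
```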


Hybrid Intelligent Systems | 2007

Finding Optimal Model Parameters by Discrete Grid Search

Álvaro Barbero Jiménez; Jorge López Lázaro; José R. Dorronsoro

Finding optimal parameters for a model is usually a crucial task in engineering approaches to classification and modeling. An automated approach is particularly desirable when a hybrid method combining several distinct techniques is to be used. In this work we present an algorithm for finding optimal parameters that needs no specific information about the underlying model and only requires the discretization of the parameter ranges to be considered. We illustrate the procedure's performance for multilayer perceptrons and support vector machines, obtaining results competitive with state-of-the-art procedures whose parameters have been tuned by experts. Our procedure is much more efficient than straight parameter search (and probably than other procedures in the literature), but it may nevertheless require extensive computation to arrive at the best parameter values, a potential drawback that can be overcome in practice because of its highly parallelizable nature.
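The parallelizability comes from the fact that every point of the discretized grid can be scored independently. A toy sketch, with made-up parameter names and a stand-in scoring function (the plain `map` below can be swapped for a parallel map such as `multiprocessing.Pool.map`):

```python
# Sketch of the embarrassingly parallel step of discrete grid search: every
# grid point is scored independently. Parameter names and the score function
# are toy stand-ins, not the paper's experimental setup.

from itertools import product

def score(params):
    c, gamma = params                             # stand-ins for SVM parameters
    return (c - 1.0) ** 2 + (gamma - 0.1) ** 2    # pretend validation error

grid = list(product([0.1, 1.0, 10.0], [0.01, 0.1, 1.0]))
# Independent evaluations: this map parallelizes trivially.
errors = list(map(score, grid))
best = grid[errors.index(min(errors))]
```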


IEEE Transactions on Signal Processing | 2003

Autoassociative neural networks and noise filtering

José R. Dorronsoro; Vicente López; Carlos Santa Cruz; Juan A. Sigüenza

We introduce linear autoassociative neural (AN) network filters for the removal of additive noise from one-dimensional (1-D) time series. The AN network has a (2M+1) × L × (2M+1) architecture and, for fixed M, we show how to choose the optimal L value and output coordinate from square error estimates between the AN filter outputs and the clean series. The frequency response of AN filters is also studied, and they are shown to act as matched band filters. A noise variance estimate is also derived from this analysis. We numerically illustrate their behavior on two examples and also compare their theoretical performance with that of optimal Wiener filters.


International Work-Conference on the Interplay Between Natural and Artificial Computation | 2005

Boosting parallel perceptrons for label noise reduction in classification problems

Iván Cantador; José R. Dorronsoro

Boosting combines an ensemble of weak learners to construct a new weighted classifier that is often more accurate than any of its components. The construction of such learners, whose training sets depend on the performance of the previous members of the ensemble, is carried out by successively focusing on those patterns that are harder to classify. This fact deteriorates boosting's results when dealing with malicious noise such as mislabeled training examples. In order to detect and avoid those noisy examples during the learning process, we propose the use of Parallel Perceptrons. Among other things, these novel machines allow one to naturally define margins for hidden unit activations. We use these margins to detect which patterns may have an incorrect label and which are safe, in the sense of being well represented in the training sample by many other similar patterns. We reduce the weights of the former, as candidate noisy examples, and augment the weights of the latter to support the overall detection procedure.
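The margin idea can be sketched with a single fixed linear unit standing in for a trained perceptron: the activation margin y · (w · x) separates confidently correct patterns, borderline ones, and likely mislabeled ones. Thresholds and names below are toy choices, not the paper's:

```python
# Sketch of margin-based label-noise flagging with a fixed linear unit.
# The activation margin y * (w . x) classifies patterns as "safe"
# (confidently correct), "noisy" (confidently wrong, likely mislabeled),
# or "borderline". Threshold and unit are illustrative toy choices.

def flag_patterns(X, y, w, safe_margin=1.0):
    flags = []
    for x, label in zip(X, y):
        margin = label * sum(wi * xi for wi, xi in zip(w, x))
        if margin >= safe_margin:
            flags.append("safe")        # well represented: augment its weight
        elif margin <= -safe_margin:
            flags.append("noisy")       # candidate bad label: reduce its weight
        else:
            flags.append("borderline")
    return flags

flags = flag_patterns([(2.0,), (-2.0,), (0.3,)], [1, 1, -1], w=(1.0,))
```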


Iberian Conference on Pattern Recognition and Image Analysis | 2005

Parallel perceptrons, activation margins and imbalanced training set pruning

Iván Cantador; José R. Dorronsoro

A natural way to deal with training samples in imbalanced class problems is to prune them, removing both redundant patterns, which are easy to classify and probably over-represented, and label-noisy patterns, which belong to one class but are labelled as members of another. This allows classifier construction to focus on borderline patterns, likely to be the most informative ones. To appropriately define the above subsets, in this work we use as base classifiers the so-called parallel perceptrons, a novel approach to committee machine training that allows one, among other things, to naturally define margins for hidden unit activations. We use these margins to define the above pattern types and to iteratively perform subsample selections on an initial training set that enhance classification accuracy and allow a balanced classifier performance even when class sizes are greatly different.
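A toy sketch of the pruning rule, again with a fixed linear unit standing in for a trained parallel perceptron: drop patterns whose activation margin marks them as redundant (confidently correct) or label noisy (confidently wrong), keeping only the borderline ones; the threshold is an illustrative choice:

```python
# Toy sketch of margin-based training-set pruning with a fixed linear unit:
# keep only borderline patterns, dropping confidently correct (redundant)
# and confidently wrong (label-noisy) ones. Threshold is illustrative.

def prune(X, y, w, margin_threshold=1.0):
    kept = []
    for x, label in zip(X, y):
        margin = label * sum(wi * xi for wi, xi in zip(w, x))
        if -margin_threshold < margin < margin_threshold:
            kept.append((x, label))     # borderline: the informative patterns
    return kept

# Margins 3 (redundant) and -3 (noisy) are dropped; the two borderline
# patterns survive.
kept = prune([(3.0,), (-3.0,), (0.5,), (-0.4,)], [1, 1, -1, 1], w=(1.0,))
```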

Collaboration


Dive into José R. Dorronsoro's collaborations.

Top Co-Authors

Ana M. González (Autonomous University of Madrid)
Álvaro Barbero (Autonomous University of Madrid)
Jorge López (Autonomous University of Madrid)
Vicente López (Autonomous University of Madrid)
Carlos M. Alaíz (Autonomous University of Madrid)
Carlos Santa Cruz (Autonomous University of Madrid)
Jorge López Lázaro (Autonomous University of Madrid)
Alberto Torres-Barrán (Autonomous University of Madrid)
Iván Cantador (Autonomous University of Madrid)
Juan A. Sigüenza (Autonomous University of Madrid)