Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Ricardo Ñanculef is active.

Publication


Featured research published by Ricardo Ñanculef.


Information Sciences | 2014

A novel Frank-Wolfe algorithm. Analysis and applications to large-scale SVM training

Ricardo Ñanculef; Emanuele Frandi; Claudio Sartori; Héctor Allende

Recently, there has been a renewed interest in the machine learning community in variants of a sparse greedy approximation procedure for concave optimization known as the Frank-Wolfe (FW) method. In particular, this procedure has been successfully applied to train large-scale instances of non-linear Support Vector Machines (SVMs). Specializing FW to SVM training has yielded efficient algorithms as well as important theoretical results, including convergence analyses of training algorithms and new characterizations of model sparsity. In this paper, we present and analyze a novel variant of the FW method based on a new way to perform away steps, a classic strategy used to accelerate the convergence of the basic FW procedure. Our formulation and analysis are focused on a general concave maximization problem on the simplex. However, the specialization of our algorithm to quadratic forms is strongly related to some classic methods in computational geometry, namely the Gilbert and MDM algorithms. On the theoretical side, we demonstrate that the method matches the guarantees in terms of convergence rate and number of iterations obtained by using classic away steps. In particular, the method enjoys a linear rate of convergence, a result that has recently been proved for MDM on quadratic forms. On the practical side, we provide experiments on several classification datasets and evaluate the results using statistical tests. Experiments show that our method is faster than the FW method with classic away steps, and works well even in cases in which classic away steps slow down the algorithm. Furthermore, these improvements are obtained without sacrificing the predictive accuracy of the resulting SVM model.
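
The mechanics of an away-step Frank-Wolfe iteration can be sketched on a toy problem. This is a minimal illustration, not the paper's SVM objective or its new away-step rule: it maximizes the concave quadratic f(x) = -||x - c||^2 over the unit simplex, choosing at each iteration between a step toward the best vertex and an away step that shrinks the weight of the worst active vertex, with an exact line search for the quadratic.

```python
# Toy sketch: Frank-Wolfe with classic away steps maximizing
# f(x) = -||x - c||^2 over the unit simplex. The target c and the
# objective are illustrative assumptions.

def fw_away_steps(c, iters=200):
    n = len(c)
    x = [1.0 / n] * n                                  # start at the barycenter
    for _ in range(iters):
        g = [-2.0 * (x[i] - c[i]) for i in range(n)]   # gradient of f at x
        gx = sum(g[i] * x[i] for i in range(n))
        s = max(range(n), key=lambda i: g[i])          # FW vertex: steepest ascent
        active = [i for i in range(n) if x[i] > 1e-12]
        a = min(active, key=lambda i: g[i])            # away vertex: worst active
        if g[s] - gx >= gx - g[a]:                     # toward step wins
            d = [(1.0 if i == s else 0.0) - x[i] for i in range(n)]
            step_max = 1.0
        else:                                          # away step: shrink weight of a
            d = [x[i] - (1.0 if i == a else 0.0) for i in range(n)]
            step_max = x[a] / (1.0 - x[a]) if x[a] < 1.0 else 1.0
        dd = sum(di * di for di in d)
        if dd == 0.0:
            break
        # exact line search for the quadratic: t* = <g, d> / (2 <d, d>)
        t = max(0.0, min(step_max, sum(g[i] * d[i] for i in range(n)) / (2.0 * dd)))
        x = [x[i] + t * d[i] for i in range(n)]
    return x
```

For an interior target such as c = [0.2, 0.5, 0.3], the iterates converge to c itself, the maximizer over the simplex.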


Expert Systems With Applications | 2014

Efficient classification of multi-labeled text streams by clashing

Ricardo Ñanculef; Ilias N. Flaounas; Nello Cristianini

We present a method for the classification of multi-labeled text documents explicitly designed for data stream applications that require processing a virtually infinite sequence of data using constant memory and constant processing time. Our method is composed of an online procedure used to efficiently map text into a low-dimensional feature space and a partition of this space into a set of regions for which the system extracts and keeps statistics used to predict multi-label text annotations. Documents are fed into the system as a sequence of words, mapped to a region of the partition, and annotated using the statistics computed from the labeled instances colliding in the same region. This approach is referred to as clashing. We illustrate the method on real-world text data, comparing the results with those obtained using other text classifiers. In addition, we analyze the effect of the dimensionality of the representation space on the predictive performance of the system. Our results show that the online embedding indeed approximates the geometry of the full corpus-wise TF and TF-IDF space. The model obtains competitive F measures with respect to the most accurate methods, using significantly fewer computational resources. In addition, the method achieves a higher macro-averaged F measure than methods with similar running time. Furthermore, the system is able to learn faster than the other methods from partially labeled streams.
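
The clashing idea can be sketched in a few lines. The hashing scheme, region encoding, and thresholded annotation rule below are our own simplified assumptions, not the paper's exact procedure: each word switches on one of a fixed number of binary features, documents with the same feature pattern "clash" in the same region, and each region keeps per-label counts in constant memory.

```python
# Hypothetical sketch of a clashing-style multi-label stream classifier:
# hash words into a fixed-size binary signature (the region) and keep
# label statistics per region. Details are illustrative assumptions.
import hashlib
from collections import Counter

class ClashingClassifier:
    def __init__(self, dim=32):
        self.dim = dim        # dimensionality of the binary embedding
        self.stats = {}       # region signature -> (doc count, label counts)

    def _region(self, words):
        # online embedding: each word turns on one of `dim` bits; documents
        # sharing the exact bit pattern fall into ("clash" in) one region
        sig = 0
        for w in set(words):
            sig |= 1 << (int(hashlib.md5(w.encode()).hexdigest(), 16) % self.dim)
        return sig

    def learn(self, words, labels):
        key = self._region(words)
        n, counts = self.stats.get(key, (0, Counter()))
        counts.update(labels)
        self.stats[key] = (n + 1, counts)

    def predict(self, words, threshold=0.5):
        n, counts = self.stats.get(self._region(words), (0, Counter()))
        # annotate with every label seen in at least `threshold` of the clashes
        return {lab for lab, c in counts.items() if n and c / n >= threshold}
```

Both learning and prediction touch a single dictionary entry per document, which is what gives the constant-time, constant-memory behavior the abstract describes.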


Hybrid Intelligent Systems | 2009

AD-SVMs: A light extension of SVMs for multicategory classification

Ricardo Ñanculef; Carlos Concha; Héctor Allende; Diego Candel; Claudio Moraga

The margin maximization principle implemented by binary Support Vector Machines (SVMs) has been shown to be equivalent to finding the hyperplane equidistant to the closest points of the convex hulls that enclose each class of examples. In this paper, we propose an extension of SVMs for multicategory classification which generalizes this geometric formulation. The obtained method preserves the form and complexity of the binary case, optimizing a single convex quadratic program where each new class introduces just one additional constraint. Reduced convex hulls and non-linear kernels, used in the binary case to deal with non-linearly separable data, can also be implemented by our algorithm to obtain additional flexibility. Experimental results on well-known datasets are presented, comparing our method with two widely used multicategory SVM extensions.


Iberoamerican Congress on Pattern Recognition | 2010

A new algorithm for training SVMs using approximate minimal enclosing balls

Emanuele Frandi; Maria Grazia Gasparo; Stefano Lodi; Ricardo Ñanculef; Claudio Sartori

It has been shown that many kernel methods can be equivalently formulated as minimal enclosing ball (MEB) problems in a certain feature space. Exploiting this reduction, efficient algorithms to scale up Support Vector Machines (SVMs) and other kernel methods have been introduced under the name of Core Vector Machines (CVMs). In this paper, we study a new algorithm to train SVMs based on an instance of the Frank-Wolfe optimization method recently proposed to approximate the solution of the MEB problem. We show that, specialized to SVM training, this algorithm can scale better than CVMs at the price of a slightly lower accuracy.
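
The Frank-Wolfe-style MEB iteration underlying this line of work can be illustrated in the plain Euclidean (linear-kernel) case. This is a hedged sketch of the classic Badoiu-Clarkson-style update, under our own toy setup rather than the kernel feature space the paper works in: repeatedly find the point farthest from the current center and move toward it with a diminishing 1/(k+1) step.

```python
# Illustrative sketch of an approximate minimal enclosing ball (MEB) via
# a Frank-Wolfe / Badoiu-Clarkson-style iteration in plain Euclidean space.

def approx_meb(points, iters=500):
    c = list(points[0])                       # initial center: any input point
    for k in range(1, iters + 1):
        # farthest point from the current center (the FW "vertex")
        far = max(points, key=lambda p: sum((pi - ci) ** 2 for pi, ci in zip(p, c)))
        # classic diminishing step toward it
        step = 1.0 / (k + 1)
        c = [ci + step * (pi - ci) for ci, pi in zip(c, far)]
    radius = max(sum((pi - ci) ** 2 for pi, ci in zip(p, c)) for p in points) ** 0.5
    return c, radius
```

For the four corners of a square the center converges to the midpoint and the radius to half the diagonal, illustrating the (1 + epsilon)-approximation behavior exploited by CVM-style training.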


International Conference on Artificial Neural Networks | 2006

Ensemble learning with local diversity

Ricardo Ñanculef; Carlos Valle; Héctor Allende; Claudio Moraga

The concept of diversity is now recognized as a key characteristic of successful ensembles of predictors. In this paper, we investigate an algorithm to generate diversity locally in regression ensembles of neural networks, based on the idea of imposing a neighborhood relation over the set of learners. In this algorithm, each predictor iteratively improves its state considering only information about the performance of its neighbors, generating a sort of local negative correlation. We assess our technique on two real data sets and compare it with Negative Correlation Learning, an effective technique for obtaining diverse ensembles. We demonstrate that the local approach achieves results better than or comparable to those of the global one.
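
A toy version of the local scheme can make the idea concrete. The construction below is our own simplified illustration, not the paper's algorithm: linear one-parameter learners arranged on a ring each follow a squared-error gradient minus a penalty computed only from the mean prediction of their two ring neighbors, so the decorrelation pressure is local rather than ensemble-wide.

```python
# Hypothetical sketch of local negative correlation: each linear learner
# (prediction w_i * x) is trained on squared error minus a penalty that
# pushes it away from the mean of its two ring neighbors only.
import random

def train_local_nc(data, n_learners=4, lam=0.3, lr=0.05, epochs=200, seed=0):
    rng = random.Random(seed)
    w = [rng.uniform(0.0, 1.0) for _ in range(n_learners)]
    for _ in range(epochs):
        for x, y in data:
            f = [wi * x for wi in w]                  # current predictions
            for i in range(n_learners):
                # local neighborhood mean: the two ring neighbors of learner i
                m = (f[(i - 1) % n_learners] + f[(i + 1) % n_learners]) / 2.0
                # gradient of (f_i - y)^2 - lam * (f_i - m)^2 w.r.t. w_i
                grad = 2.0 * (f[i] - y) - 2.0 * lam * (f[i] - m)
                w[i] -= lr * grad * x
    return w
```

Because the neighbor-mean penalties cancel when summed around the ring, the ensemble average still converges to the least-squares fit while individual learners are free to diverge from one another.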


International Journal of Pattern Recognition and Artificial Intelligence | 2013

Training Support Vector Machines Using Frank–Wolfe Optimization Methods

Emanuele Frandi; Ricardo Ñanculef; Maria Grazia Gasparo; Stefano Lodi; Claudio Sartori

Training a support vector machine (SVM) requires the solution of a quadratic programming (QP) problem whose computational complexity becomes prohibitive for large-scale datasets. Traditional optimization methods cannot be directly applied in these cases, mainly due to memory restrictions. By adopting a slightly different objective function and under mild conditions on the kernel used within the model, efficient algorithms to train SVMs have been devised under the name of core vector machines (CVMs). This framework exploits the equivalence of the resulting learning problem with the task of building a minimal enclosing ball (MEB) in a feature space, where data is implicitly embedded by a kernel function. In this paper, we improve on the CVM approach by proposing two novel methods to build SVMs based on the Frank–Wolfe algorithm, recently revisited as a fast method to approximate the solution of a MEB problem. In contrast to CVMs, our algorithms do not require computing the solutions of a sequence of increasingly complex QPs, and are defined using only analytic optimization steps. Experiments on a large collection of datasets show that our methods scale better than CVMs in most cases, sometimes at the price of a slightly lower accuracy. Like CVMs, the proposed methods can be easily extended to machine learning problems other than binary classification. However, effective classifiers are also obtained using kernels which do not satisfy the condition required by CVMs, and thus our methods can be applied to a wider set of problems.


International Symposium on Neural Networks | 2015

A PARTAN-accelerated Frank-Wolfe algorithm for large-scale SVM classification

Emanuele Frandi; Ricardo Ñanculef; Johan A. K. Suykens

Frank-Wolfe algorithms have recently regained the attention of the Machine Learning community. Their solid theoretical properties and sparsity guarantees make them a suitable choice for a wide range of problems in this field. In addition, several variants of the basic procedure exist that improve its theoretical properties and practical performance. In this paper, we investigate the application of some of these techniques to Machine Learning, focusing in particular on a Parallel Tangent (PARTAN) variant of the FW algorithm for SVM classification, which has not been previously suggested or studied for this type of problem. We provide experiments both in a standard setting and using a stochastic speed-up technique, showing that the considered algorithms obtain promising results on several medium and large-scale benchmark datasets.
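
The PARTAN idea can be sketched on a toy problem. This is a hedged illustration under our own assumptions, not the paper's SVM setting: after an ordinary Frank-Wolfe step on the concave quadratic f(x) = -||x - c||^2 over the simplex, the method performs an extra line search along the direction joining the new point to the iterate two steps back, which dampens the zig-zagging of plain FW.

```python
# Toy sketch of a PARTAN-accelerated Frank-Wolfe iteration on the simplex,
# maximizing f(x) = -||x - c||^2. Objective and step rules are illustrative.

def line_max(x, d, c, t_max):
    # maximizer of f(x + t d) = -||x + t d - c||^2 over t in [0, t_max]
    dd = sum(di * di for di in d)
    if dd == 0.0:
        return list(x)
    g = [-2.0 * (xi - ci) for xi, ci in zip(x, c)]
    t = max(0.0, min(t_max, sum(gi * di for gi, di in zip(g, d)) / (2.0 * dd)))
    return [xi + t * di for xi, di in zip(x, d)]

def feasible_max(x, d):
    # largest t keeping every coordinate of x + t d nonnegative
    ts = [xi / -di for xi, di in zip(x, d) if di < -1e-12]
    return min(ts) if ts else 1.0

def partan_fw(c, iters=100):
    n = len(c)
    x_prev, x = None, [1.0 / n] * n
    for _ in range(iters):
        g = [-2.0 * (xi - ci) for xi, ci in zip(x, c)]
        s = max(range(n), key=lambda i: g[i])               # FW vertex
        d = [(1.0 if i == s else 0.0) - xi for i, xi in enumerate(x)]
        y = line_max(x, d, c, 1.0)                          # ordinary FW step
        if x_prev is not None:                              # PARTAN extrapolation
            e = [yi - pi for yi, pi in zip(y, x_prev)]
            y = line_max(x_prev, e, c, feasible_max(x_prev, e))
        x_prev, x = x, y
    return x
```

Since both iterates joined by the extrapolation lie on the simplex, the search direction preserves the sum-to-one constraint, and only nonnegativity limits the step length.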


Iberoamerican Congress on Pattern Recognition | 2010

A sequential minimal optimization algorithm for the all-distances support vector machine

Diego Candel; Ricardo Ñanculef; Carlos Concha; Héctor Allende

The All-Distances SVM is a single-objective light extension of the binary µ-SVM for multi-category classification that is competitive against multi-objective SVMs, such as One-against-the-Rest SVMs and One-against-One SVMs. Although the model involves considerably fewer constraints than previous formulations, it lacks an efficient training algorithm, making its use with medium and large problems impractical. In this paper, a Sequential Minimal Optimization-like algorithm is proposed to train the All-Distances SVM, making large problems tractable. Experimental results on public benchmark data are presented to show the performance of the AD-SVM trained with this algorithm against other single-objective multi-category SVMs.


Intelligent Data Engineering and Automated Learning | 2006

Local negative correlation with resampling

Ricardo Ñanculef; Carlos Valle; Héctor Allende; Claudio Moraga

This paper deals with a learning algorithm which combines two well-known methods to generate ensemble diversity: negative correlation of errors and resampling. In this algorithm, a set of learners iteratively and synchronously improve their state considering information about the performance of a fixed number of other learners in the ensemble, generating a sort of local negative correlation. Resampling allows the base algorithm to control the impact of highly influential data points, which in turn can improve its generalization error. The resulting algorithm can be viewed as a generalization of bagging, where each learner is no longer independent but can be locally coupled with other learners. We demonstrate our technique on two real data sets using ensembles of neural networks.


Mexican International Conference on Artificial Intelligence | 2004

Robust Bootstrapping Neural Networks

Héctor Allende; Ricardo Ñanculef; Rodrigo Salas

Artificial neural networks (ANN) have been used as predictive systems in a variety of application domains such as science, engineering and finance. It is therefore very important to be able to estimate the reliability of a given model. Bootstrap is a computer-intensive method for estimating the distribution of a statistical estimator, based on an imitation of the probabilistic structure of the data generating process and the information contained in a given set of random observations. Bootstrap plans can be used to estimate the uncertainty associated with a value predicted by a feedforward neural network.
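
The bootstrap plan described above can be sketched generically. This is an illustrative toy, with a trivial fitted model standing in for a feedforward network: each bootstrap replicate resamples the data with replacement, refits, and predicts at the query point, and the spread of those predictions estimates the uncertainty.

```python
# Illustrative sketch of bootstrap uncertainty estimation for a prediction.
# The `fit`/`predict` pair is generic; here a least-squares slope through
# the origin stands in for the neural network of the paper.
import random

def bootstrap_predictions(data, fit, predict, x, n_boot=200, seed=0):
    rng = random.Random(seed)
    preds = []
    for _ in range(n_boot):
        sample = [rng.choice(data) for _ in data]   # resample with replacement
        preds.append(predict(fit(sample), x))
    preds.sort()
    # central 90% bootstrap interval for the prediction at x
    return preds[int(0.05 * n_boot)], preds[int(0.95 * n_boot)]

# toy usage on nearly linear data y ~ 2x
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.0), (4.0, 8.2)]
slope = lambda sample: sum(x * y for x, y in sample) / sum(x * x for x, _ in sample)
lo, hi = bootstrap_predictions(data, slope, lambda w, x: w * x, 2.0)
```

The same wrapper works for any model with a fit/predict interface; only the refitting cost changes.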

Collaboration


Dive into Ricardo Ñanculef's collaborations.

Top Co-Authors


Héctor Allende

Adolfo Ibáñez University

Claudio Moraga

Technical University of Dortmund


Emanuele Frandi

Katholieke Universiteit Leuven


Johan A. K. Suykens

Katholieke Universiteit Leuven
