Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Emanuele Frandi is active.

Publication


Featured research published by Emanuele Frandi.


Information Sciences | 2014

A novel Frank-Wolfe algorithm. Analysis and applications to large-scale SVM training

Ricardo Ñanculef; Emanuele Frandi; Claudio Sartori; Héctor Allende

Recently, there has been a renewed interest in the machine learning community for variants of a sparse greedy approximation procedure for concave optimization known as the Frank-Wolfe (FW) method. In particular, this procedure has been successfully applied to train large-scale instances of non-linear Support Vector Machines (SVMs). Specializing FW to SVM training has yielded not only efficient algorithms but also important theoretical results, including convergence analysis of training algorithms and new characterizations of model sparsity. In this paper, we present and analyze a novel variant of the FW method based on a new way to perform away steps, a classic strategy used to accelerate the convergence of the basic FW procedure. Our formulation and analysis are focused on a general concave maximization problem on the simplex. However, the specialization of our algorithm to quadratic forms is strongly related to some classic methods in computational geometry, namely the Gilbert and MDM algorithms. On the theoretical side, we demonstrate that the method matches the guarantees in terms of convergence rate and number of iterations obtained with classic away steps. In particular, the method enjoys a linear rate of convergence, a result that has recently been proved for MDM on quadratic forms. On the practical side, we provide experiments on several classification datasets and evaluate the results using statistical tests. Experiments show that our method is faster than the FW method with classic away steps, and works well even in cases where classic away steps slow down the algorithm. Furthermore, these improvements are obtained without sacrificing the predictive accuracy of the resulting SVM model.
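To make the away-step mechanism concrete, here is a minimal Python sketch of classic Frank-Wolfe with away steps for maximizing a concave quadratic f(x) = c.x - x'Qx (Q symmetric positive semidefinite) over the unit simplex, which is the problem class the abstract describes. This illustrates the classic scheme, not the paper's new variant; the function name fw_away_simplex and all tolerances are our own choices.

    import numpy as np

    def fw_away_simplex(Q, c, n_iter=500, tol=1e-8):
        """Maximize f(x) = c.x - x'Qx over the unit simplex with away steps."""
        n = len(c)
        x = np.full(n, 1.0 / n)                  # start at the barycenter
        for _ in range(n_iter):
            g = c - 2.0 * Q @ x                  # gradient of f
            s = int(np.argmax(g))                # classic FW vertex
            d_fw = -x.copy(); d_fw[s] += 1.0     # toward direction e_s - x
            if g @ d_fw < tol:                   # FW gap certifies near-optimality
                break
            active = np.flatnonzero(x > 1e-12)
            v = active[int(np.argmin(g[active]))]          # worst active vertex
            d_aw = x.copy(); d_aw[v] -= 1.0                # away direction x - e_v
            if g @ d_fw >= g @ d_aw or x[v] >= 1.0:
                d, gamma_max = d_fw, 1.0                   # toward step
            else:
                d, gamma_max = d_aw, x[v] / (1.0 - x[v])   # away step, stay feasible
            quad = d @ Q @ d                     # curvature along d
            gamma = gamma_max if quad <= 0 else min(gamma_max, (g @ d) / (2.0 * quad))
            x = x + gamma * d                    # exact line search for a quadratic
        return x

    # Example: x = fw_away_simplex(np.diag([2.0, 1.0]), np.array([1.0, 1.0]))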


Optimization Methods & Software | 2014

Coordinate search algorithms in multilevel optimization

Emanuele Frandi; Alessandra Papini

Many optimization problems of practical interest arise from the discretization of continuous problems. Classical examples can be found in the calculus of variations, optimal control and image processing. In recent years, a number of strategies broadly known as multilevel methods have been proposed for the solution of such problems. Inspired by classical multigrid schemes for linear systems, they exploit the possibility of solving the problem on coarser discretization levels to accelerate the computation of a finest-level solution. In this paper, we study the applicability of coordinate search algorithms in a multilevel optimization paradigm. We develop a multilevel derivative-free coordinate search method, where coarse-level objective functions are defined by suitable surrogate models. We employ a recursive V-cycle correction scheme, which exhibits multigrid-like error smoothing properties. On a practical level, the algorithm is implemented in tandem with a full-multilevel initialization. A suitable strategy to manage the coordinate search stepsize on different levels is also proposed, which contributes substantially to the overall speed of the algorithm. Numerical experiments on several examples show promising results: the presented algorithm can solve large problems in a reasonable time, thus overcoming the size and convergence speed limitations typical of coordinate search methods.
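As a toy illustration of the idea, the sketch below pairs a basic derivative-free coordinate (compass) search with a two-level scheme on a 1-D grid: optimize a coarse surrogate, interpolate the result to the fine grid, then refine with a smaller initial step. This is a simplified two-level stand-in for the recursive V-cycle of the paper; the names coordinate_search, prolong and two_level, and the linear interpolation, are illustrative assumptions.

    import numpy as np

    def coordinate_search(f, x, step=0.5, min_step=1e-6, max_iter=500):
        """Compass search: poll +/- step along each coordinate, accept
        improvements, halve the step after a sweep with no improvement."""
        fx = f(x)
        for _ in range(max_iter):
            improved = False
            for i in range(len(x)):          # Gauss-Seidel style: reuse updates
                for s in (+step, -step):
                    y = x.copy(); y[i] += s
                    fy = f(y)
                    if fy < fx:
                        x, fx, improved = y, fy, True
                        break
            if not improved:
                step *= 0.5                  # contract the stencil
                if step < min_step:
                    break
        return x, fx

    def prolong(xc):
        """Linear interpolation from a coarse 1-D grid to the next finer one."""
        xf = np.zeros(2 * len(xc) - 1)
        xf[::2] = xc
        xf[1::2] = 0.5 * (xc[:-1] + xc[1:])
        return xf

    def two_level(f_fine, f_coarse, n_coarse, step=0.5):
        """Optimize the coarse surrogate, interpolate, then refine."""
        xc, _ = coordinate_search(f_coarse, np.zeros(n_coarse), step=step)
        return coordinate_search(f_fine, prolong(xc), step=step * 0.5)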


Iberoamerican Congress on Pattern Recognition | 2010

A new algorithm for training SVMs using approximate minimal enclosing balls

Emanuele Frandi; Maria Grazia Gasparo; Stefano Lodi; Ricardo Ñanculef; Claudio Sartori

It has been shown that many kernel methods can be equivalently formulated as minimal enclosing ball (MEB) problems in a certain feature space. Exploiting this reduction, efficient algorithms to scale up Support Vector Machines (SVMs) and other kernel methods have been introduced under the name of Core Vector Machines (CVMs). In this paper, we study a new algorithm to train SVMs based on an instance of the Frank-Wolfe optimization method recently proposed to approximate the solution of the MEB problem. We show that, specialized to SVM training, this algorithm can scale better than CVMs at the price of a slightly lower accuracy.
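The flavor of such FW-type MEB solvers can be conveyed by the well-known Badoiu-Clarkson iteration, which repeatedly moves the center toward the farthest point with a diminishing step; the paper's algorithm is a refined relative of this scheme, so the sketch below (with our hypothetical function name approx_meb_center) is only a baseline illustration.

    import numpy as np

    def approx_meb_center(X, n_iter=100):
        """Badoiu-Clarkson style Frank-Wolfe iteration for the minimal
        enclosing ball of the rows of X."""
        c = X[0].copy()
        for k in range(n_iter):
            d = np.linalg.norm(X - c, axis=1)
            p = X[int(np.argmax(d))]          # farthest point = FW vertex
            c = c + (p - c) / (k + 2.0)       # diminishing step size
        radius = np.linalg.norm(X - c, axis=1).max()
        return c, radius

    # Example: c, r = approx_meb_center(np.random.randn(200, 2))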


2014 IEEE Symposium on Computational Intelligence in Big Data (CIBD) | 2014

High level high performance computing for multitask learning of time-varying models

Marco Signoretto; Emanuele Frandi; Zahra Karevan; Johan A. K. Suykens

We propose an approach suitable for learning multiple time-varying models jointly, and discuss an application in data-driven weather forecasting. The methodology relies on spectral regularization and encodes the typical multi-task learning assumption that models lie near a common low-dimensional subspace. The arising optimization problem amounts to estimating a matrix from noisy linear measurements within a trace norm ball. Depending on the problem, the matrix dimensions as well as the number of measurements can be large. We discuss an algorithm that can handle large-scale problems and is amenable to parallelization. We then compare high-level high-performance implementation strategies that rely on Just-in-Time (JIT) decorators. In particular, the approach makes it possible to offload computations to a GPU without hard-coding computationally intensive operations in a low-level language. As such, it allows for fast prototyping and is therefore of general interest for developing and testing novel computational models.
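As a sketch of the JIT-decorator strategy, the snippet below uses Numba's @njit (one concrete JIT library; the abstract does not name the tooling used in the paper) to compile a high-level residual computation from a linear measurement model. Swapping the decorator for a GPU-targeting one such as numba.cuda.jit would, after a kernel-style rewrite, move the same computation to the GPU without any low-level code, which is the workflow the abstract describes.

    import numpy as np
    from numba import njit

    @njit(cache=True)
    def sq_residual(A, x, b):
        """||Ax - b||^2 with explicit loops; Numba compiles this to machine
        code on first call, so the loops run at native speed while the source
        stays plain Python."""
        m, n = A.shape
        total = 0.0
        for i in range(m):
            r = -b[i]
            for j in range(n):
                r += A[i, j] * x[j]
            total += r * r
        return total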


International Journal of Pattern Recognition and Artificial Intelligence | 2013

Training support vector machines using Frank–Wolfe optimization methods

Emanuele Frandi; Ricardo Ñanculef; Maria Grazia Gasparo; Stefano Lodi; Claudio Sartori

Training a support vector machine (SVM) requires the solution of a quadratic programming (QP) problem whose computational complexity becomes prohibitive for large-scale datasets. Traditional optimization methods cannot be directly applied in these cases, mainly due to memory restrictions. By adopting a slightly different objective function, and under mild conditions on the kernel used within the model, efficient algorithms to train SVMs have been devised under the name of core vector machines (CVMs). This framework exploits the equivalence of the resulting learning problem with the task of solving a minimal enclosing ball (MEB) problem in a feature space, where data is implicitly embedded by a kernel function. In this paper, we improve on the CVM approach by proposing two novel methods to build SVMs based on the Frank–Wolfe algorithm, recently revisited as a fast method to approximate the solution of a MEB problem. In contrast to CVMs, our algorithms do not require computing the solutions of a sequence of increasingly complex QPs, and are defined using only analytic optimization steps. Experiments on a large collection of datasets show that our methods scale better than CVMs in most cases, sometimes at the price of a slightly lower accuracy. Like CVMs, the proposed methods can easily be extended to machine learning problems other than binary classification. However, effective classifiers are also obtained with kernels that do not satisfy the condition required by CVMs, so our methods can be used for a wider set of problems.
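For concreteness, one standard form of the SVM-to-MEB reduction from the CVM literature maps an L2-SVM with kernel matrix K, labels y in {-1, +1} and penalty C to an MEB problem under an augmented kernel, valid when k(x, x) is constant (the kernel condition mentioned above). The helper below is our illustrative rendering of that reduction, not code from the paper; the resulting matrix can then be handed to an MEB solver such as the Frank-Wolfe sketch shown earlier.

    import numpy as np

    def cvm_kernel(K, y, C):
        """Augmented kernel of the CVM-style SVM-to-MEB reduction:
        K~[i, j] = y_i y_j (K[i, j] + 1) + delta_ij / C."""
        yy = np.outer(y, y)
        return yy * (K + 1.0) + np.eye(len(y)) / C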


International Symposium on Neural Networks | 2015

A PARTAN-accelerated Frank-Wolfe algorithm for large-scale SVM classification

Emanuele Frandi; Ricardo Ñanculef; Johan A. K. Suykens

Frank-Wolfe algorithms have recently regained the attention of the Machine Learning community. Their solid theoretical properties and sparsity guarantees make them a suitable choice for a wide range of problems in this field. In addition, several variants of the basic procedure exist that improve its theoretical properties and practical performance. In this paper, we investigate the application of some of these techniques to Machine Learning, focusing in particular on a Parallel Tangent (PARTAN) variant of the FW algorithm for SVM classification, which has not been previously suggested or studied for this type of problem. We provide experiments both in a standard setting and using a stochastic speed-up technique, showing that the considered algorithms obtain promising results on several medium and large-scale benchmark datasets.
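A simplified reading of the PARTAN idea: each round takes a plain FW step to an intermediate point z, then performs a second line search along the direction from the previous iterate to z, mimicking the parallel-tangents acceleration of classical gradient methods. The sketch below restricts the second step to eta in [0, 1] (which preserves feasibility by convexity) and uses a crude grid line search; the paper's exact update and step-size rules may differ.

    import numpy as np

    def partan_fw(f, grad_f, lmo, x0, n_iter=100):
        """Simplified PARTAN-accelerated Frank-Wolfe loop (our reading of the
        general scheme, not the paper's exact update)."""
        x_prev, x = x0.copy(), x0.copy()
        for k in range(n_iter):
            s = lmo(grad_f(x))                      # linear minimization oracle
            z = x + (2.0 / (k + 2.0)) * (s - x)     # plain FW step
            d = z - x_prev                          # "parallel tangent" direction
            etas = np.linspace(0.0, 1.0, 33)        # crude grid line search
            vals = [f(x_prev + t * d) for t in etas]
            x_prev, x = x, x_prev + etas[int(np.argmin(vals))] * d
        return x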


Optimization Methods & Software | 2015

Improving Direct Search algorithms by multilevel optimization techniques

Emanuele Frandi; Alessandra Papini

Direct Search algorithms are classical derivative-free methods for optimization. Though endowed with solid theoretical properties, they are not well suited for large-scale problems due to slow convergence and scaling issues. In this paper, we discuss how, on problems for which a hierarchy of objective functions is available, such limitations can be circumvented by using multilevel schemes which are able to accelerate the computation of a finest level solution. Starting from a previously introduced derivative-free multilevel method, based on Coordinate Search optimization with a sampling strategy of Gauss–Seidel type, we consider also the use of sampling strategies of Jacobi type, and present several algorithmic variations. We justify our choices by performing experiments on two model problems, showing that a performance close to multigrid optimality can be observed in practice.
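To illustrate the Gauss-Seidel versus Jacobi distinction: a Gauss-Seidel sweep polls coordinates sequentially, reusing each accepted move immediately, while a Jacobi sweep polls every coordinate from the same base point, which makes the trials embarrassingly parallel. The sketch below shows one Jacobi-type poll; the names and the accept/reject rule are our simplifications, not the paper's exact strategy.

    import numpy as np

    def jacobi_poll(f, x, step):
        """One Jacobi-type polling sweep: every coordinate is tested from the
        SAME base point, then the improving moves are applied together and
        kept only if the combined point is itself an improvement."""
        fx = f(x)
        moves = np.zeros_like(x)
        for i in range(len(x)):          # independent trials: parallelizable
            for s in (+step, -step):
                y = x.copy(); y[i] += s
                if f(y) < fx:
                    moves[i] = s
                    break
        y = x + moves
        return (y, f(y)) if f(y) < fx else (x, fx)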


Iberoamerican Congress on Pattern Recognition | 2016

Efficient Sparse Approximation of Support Vector Machines Solving a Kernel Lasso

Marcelo Aliquintuy; Emanuele Frandi; Ricardo Ñanculef; Johan A. K. Suykens

Performing predictions using a non-linear support vector machine (SVM) can be too expensive in some large-scale scenarios. In the non-linear case, the complexity of storing and using the classifier is determined by the number of support vectors, which is often a significant fraction of the training data. This is a major limitation in applications where the model needs to be evaluated many times to accomplish a task, such as those arising in computer vision and web search ranking.
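A generic way to realize the kernel lasso idea (our sketch, not necessarily the paper's algorithm) is to seek a sparse coefficient vector beta whose kernel expansion K @ beta matches the full SVM's scores t on the training set, solving min 0.5*||K beta - t||^2 + lam*||beta||_1, for example with a few ISTA iterations:

    import numpy as np

    def sparse_svm_approx(K, t, lam, n_iter=500):
        """ISTA for a kernel lasso: a sparse beta with K @ beta close to the
        full SVM's scores t, so the model keeps few expansion terms."""
        L = np.linalg.norm(K, 2) ** 2      # Lipschitz constant of the gradient
        beta = np.zeros(K.shape[1])
        for _ in range(n_iter):
            g = K.T @ (K @ beta - t)       # gradient of the quadratic part
            z = beta - g / L
            beta = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrink
        return beta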


Neural Information Processing Systems | 2014

Complexity issues and randomization strategies in Frank-Wolfe algorithms for machine learning

Emanuele Frandi; Ricardo Ñanculef; Johan A. K. Suykens


Machine Learning | 2016

Fast and scalable Lasso via stochastic Frank–Wolfe methods with a convergence guarantee

Emanuele Frandi; Ricardo Ñanculef; Stefano Lodi; Claudio Sartori; Johan A. K. Suykens

Collaboration


Dive into Emanuele Frandi's collaborations.

Top Co-Authors

Johan A. K. Suykens
Katholieke Universiteit Leuven

Héctor Allende
Adolfo Ibáñez University

Marco Signoretto
Katholieke Universiteit Leuven

Zahra Karevan
Katholieke Universiteit Leuven