Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where André L. D. Rossi is active.

Publication


Featured research published by André L. D. Rossi.


Neurocomputing | 2012

Combining meta-learning and search techniques to select parameters for support vector machines

Taciana A. F. Gomes; Ricardo Bastos Cavalcante Prudêncio; Carlos Soares; André L. D. Rossi; André Carlos Ponce Leon Ferreira de Carvalho

Support Vector Machines (SVMs) have achieved very good performance on different learning problems. However, the success of SVMs depends on the adequate choice of the values of a number of parameters (e.g., the kernel and regularization parameters). In the current work, we propose the combination of meta-learning and search algorithms to deal with the problem of SVM parameter selection. In this combination, given a new problem to be solved, meta-learning is employed to recommend SVM parameter values based on parameter configurations that have been successfully adopted in previous similar problems. The parameter values returned by meta-learning are then used as initial search points by a search technique, which will further explore the parameter space. In this proposal, we envisioned that the initial solutions provided by meta-learning are located in good regions of the search space (i.e. they are closer to optimum solutions). Hence, the search algorithm would need to evaluate a lower number of candidate solutions when looking for an adequate solution. In this work, we investigate the combination of meta-learning with two search algorithms: Particle Swarm Optimization and Tabu Search. The implemented hybrid algorithms were used to select the values of two SVM parameters in the regression domain. These combinations were compared with the use of the search algorithms without meta-learning. The experimental results on a set of 40 regression problems showed that, on average, the proposed hybrid methods obtained lower error rates when compared to their components applied in isolation.
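
The first stage can be sketched as follows: a k-nearest-neighbour recommender over simple dataset characteristics returns the (C, gamma) pairs that worked best on the most similar past problems, and these become the initial points handed to PSO or Tabu Search. The meta-features, the toy meta-database of synthetic regression problems and the k-NN recommender below are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the meta-learning stage: recommend initial SVR parameter
# values for a new dataset from parameters that worked well on similar past
# datasets.  Meta-features and the k-NN recommender are illustrative choices.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import NearestNeighbors
from sklearn.svm import SVR

def meta_features(X, y):
    # Simple, cheap dataset characteristics (assumed for illustration).
    return np.array([X.shape[0], X.shape[1], y.std(),
                     np.abs(np.corrcoef(X[:, 0], y)[0, 1])])

def best_params(X, y, grid):
    # Pick the grid cell with the lowest cross-validated squared error.
    scores = [(-cross_val_score(SVR(C=C, gamma=g), X, y, cv=3,
                                scoring="neg_mean_squared_error").mean(), (C, g))
              for C in grid for g in grid]
    return min(scores)[1]

grid = [0.01, 0.1, 1.0, 10.0, 100.0]
meta_X, meta_y = [], []
for seed in range(6):                      # past ("training") problems
    X, y = make_regression(n_samples=120, n_features=5, noise=10.0, random_state=seed)
    meta_X.append(meta_features(X, y))
    meta_y.append(best_params(X, y, grid))

# New problem: recommend parameters from the most similar past problems.
X_new, y_new = make_regression(n_samples=150, n_features=5, noise=10.0, random_state=99)
nn = NearestNeighbors(n_neighbors=3).fit(np.array(meta_X))
_, idx = nn.kneighbors([meta_features(X_new, y_new)])
recommended = [meta_y[i] for i in idx[0]]  # initial points for PSO / Tabu Search
print("suggested initial (C, gamma) points:", recommended)
```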


Neurocomputing | 2014

MetaStream: A meta-learning based method for periodic algorithm selection in time-changing data

André L. D. Rossi; André Carlos Ponce Leon Ferreira de Carvalho; Carlos Soares; Bruno Feres de Souza

Dynamic real-world applications that generate data continuously have introduced new challenges for the machine learning community, since the concepts to be learned are likely to change over time. In such scenarios, an appropriate model at a time point may rapidly become obsolete, requiring updating or replacement. As there are several learning algorithms available, choosing one whose bias suits the current data best is not a trivial task. In this paper, we present a meta-learning based method for periodic algorithm selection in time-changing environments, named MetaStream. It works by mapping the characteristics extracted from the past and incoming data to the performance of regression models in order to choose between single learning algorithms or their combination. Experimental results for two real regression problems showed that MetaStream is able to improve the general performance of the learning system compared to a baseline method and an ensemble-based approach.
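
A minimal sketch of the periodic-selection idea is given below. It simulates a stream whose concept alternates, trains two candidate regressors on each chunk, records which one performed better on the following chunk, and uses a k-NN meta-classifier over simple chunk characteristics to predict the better algorithm for incoming data. The chunk generator, meta-features and meta-learner are simplifying assumptions, not MetaStream's actual components.

```python
# Minimal sketch of periodic algorithm selection on a simulated stream.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

def make_chunk(t, n=200):
    # The underlying concept alternates, so no single learner is always best.
    X = rng.uniform(-1, 1, size=(n, 3))
    y = X @ [2.0, -1.0, 0.5] if (t // 5) % 2 == 0 else np.sin(3 * X[:, 0]) + X[:, 1] ** 2
    return X, y + rng.normal(0, 0.05, n)

def meta_features(X, y):
    return [y.std(), abs(np.corrcoef(X[:, 0], y)[0, 1]), X.std()]

learners = [LinearRegression, DecisionTreeRegressor]
meta_X, meta_y, hits = [], [], 0
chunks = [make_chunk(t) for t in range(30)]
for t in range(1, 29):
    X_tr, y_tr = chunks[t]          # train on the current chunk,
    X_te, y_te = chunks[t + 1]      # evaluate on the next one
    errs = [mean_squared_error(y_te, L().fit(X_tr, y_tr).predict(X_te)) for L in learners]
    best = int(np.argmin(errs))
    if len(meta_y) >= 5:            # periodically consult the meta-classifier
        clf = KNeighborsClassifier(n_neighbors=3).fit(meta_X, meta_y)
        hits += int(clf.predict([meta_features(X_tr, y_tr)])[0] == best)
    meta_X.append(meta_features(X_tr, y_tr))
    meta_y.append(best)

print("meta-level selection accuracy:", hits / (28 - 5))
```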


Brazilian Symposium on Neural Networks | 2008

Bio-inspired Optimization Techniques for SVM Parameter Tuning

André L. D. Rossi; A. de Carvalho

Machine learning techniques have been successfully applied to a large number of classification problems. Among these techniques, support vector machines (SVMs) are well known for the good classification accuracies reported in several studies. However, like many machine learning techniques, the classification performance obtained by SVMs is influenced by the choice of proper values for their free parameters. In this paper, we investigate the influence of different biologically inspired optimization techniques when they are used to optimize the free parameters of SVMs. This comparative study also included the default values suggested in the literature for the free parameters and a grid algorithm used for parameter tuning. The results obtained suggest that, although SVMs work well with the default values, they can benefit from the use of an optimization technique for parameter tuning.
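
The sketch below reproduces the spirit of the comparison on a synthetic classification problem: default SVC parameters, a grid search, and a simple (1+1) evolution strategy that stands in for the bio-inspired optimizers studied in the paper. The dataset, budgets and bounds are assumptions made for illustration only.

```python
# Compare default SVC parameters, a grid search, and a (1+1) evolution
# strategy over (log10 C, log10 gamma) by cross-validated accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=10, random_state=0)
cv_acc = lambda C, gamma: cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=5).mean()

# 1) Library defaults.
default_acc = cross_val_score(SVC(), X, y, cv=5).mean()

# 2) Grid search over a log-scaled grid.
grid = GridSearchCV(SVC(), {"C": [0.1, 1, 10, 100], "gamma": [1e-3, 1e-2, 1e-1, 1]}, cv=5)
grid_acc = grid.fit(X, y).best_score_

# 3) (1+1)-ES: mutate (log C, log gamma), keep the child if it is not worse.
rng = np.random.default_rng(0)
parent = np.array([0.0, -2.0])                 # log10 of (C, gamma)
parent_acc = cv_acc(10 ** parent[0], 10 ** parent[1])
for _ in range(30):
    child = parent + rng.normal(0, 0.5, size=2)
    child_acc = cv_acc(10 ** child[0], 10 ** child[1])
    if child_acc >= parent_acc:
        parent, parent_acc = child, child_acc

print(f"default={default_acc:.3f}  grid={grid_acc:.3f}  (1+1)-ES={parent_acc:.3f}")
```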


Brazilian Symposium on Neural Networks | 2010

Combining Meta-learning and Search Techniques to SVM Parameter Selection

Taciana A. F. Gomes; Ricardo Bastos Cavalcante Prudêncio; Carlos Soares; André L. D. Rossi; André Carlos Ponce Leon Ferreira de Carvalho

Support Vector Machines (SVMs) have achieved very good performance on different learning problems. However, the success of SVMs depends on the adequate choice of a number of parameters, including for instance the kernel and the regularization parameters. In the current work, we propose the combination of Meta-Learning and search techniques for the problem of SVM parameter selection. Given an input problem, Meta-Learning is used to recommend SVM parameters based on parameter settings that were successful in previous similar problems. The parameters returned by Meta-Learning are then used as initial search points for a search technique, which will perform a further exploration of the parameter space. In this combination, we envisioned that the initial solutions provided by Meta-Learning are located in good regions of the search space (i.e. they are closer to the optimum solutions). Hence, the search technique would need to evaluate a lower number of candidate search points in order to find an adequate solution. In our work, we implemented a prototype in which Particle Swarm Optimization (PSO) was used to select the values of two SVM parameters for regression problems. In the performed experiments, the proposed solution was compared to a PSO with random initialization, obtaining better average results on a set of 40 regression problems.
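
The sketch below illustrates the search stage: a small PSO over (log C, log gamma) for an SVR, run once from random initial particles and once from a swarm seeded around hypothetical meta-learned suggestions. The suggested points, bounds, swarm size and dataset are all assumed for illustration; the paper's prototype is not reproduced here.

```python
# Minimal PSO sketch for SVR parameter selection, comparing a swarm seeded
# with (hypothetical) meta-learned suggestions against random initialization.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

X, y = make_regression(n_samples=150, n_features=5, noise=10.0, random_state=1)

def fitness(p):  # p = (log10 C, log10 gamma); lower is better
    return -cross_val_score(SVR(C=10 ** p[0], gamma=10 ** p[1]), X, y, cv=3,
                            scoring="neg_mean_squared_error").mean()

def pso(init, iters=15, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    pos, vel = init.copy(), np.zeros_like(init)
    pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
    gbest = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, -3, 3)
        f = np.array([fitness(p) for p in pos])
        better = f < pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return pbest_f.min()

rng = np.random.default_rng(42)
random_init = rng.uniform(-3, 3, size=(6, 2))
# Hypothetical meta-learning suggestions, jittered to spread the swarm.
suggested = np.array([[1.0, -2.0], [2.0, -1.0], [0.0, -2.0]])
seeded_init = np.vstack([suggested, suggested + rng.normal(0, 0.2, suggested.shape)])

print("random init -> best CV MSE:", pso(random_init))
print("seeded init -> best CV MSE:", pso(seeded_init))
```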


Brazilian Symposium on Neural Networks | 2012

Meta-Learning for Periodic Algorithm Selection in Time-Changing Data

André L. D. Rossi; André Carlos Ponce Leon Ferreira de Carvalho; Carlos Soares

When users have to choose a learning algorithm to induce a model for a given dataset, a common practice is to select an algorithm whose bias suits the data distribution. In real-world applications that produce data continuously this distribution may change over time. Thus, a learning algorithm with the adequate bias for a dataset may become unsuitable for new data following a different distribution. In this paper we present a meta-learning approach for periodic algorithm selection when data distribution may change over time. This approach exploits the knowledge obtained from the induction of models for different data chunks to improve the general predictive performance. It periodically applies a meta-classifier to predict the most appropriate learning algorithm for new unlabeled data. Characteristics extracted from past and incoming data, together with the predictive performance from different models, constitute the meta-data, which is used to induce this meta-classifier. Experimental results using data of a travel time prediction problem show its ability to improve the general performance of the learning system. The proposed approach can be applied to other time-changing tasks, since it is domain independent.
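
The distinguishing detail is how the meta-data are assembled: each meta-example combines characteristics of the labeled training chunk and of the incoming, still unlabeled chunk, and is labeled with the algorithm that later turned out to perform best. The sketch below builds such meta-examples on a simulated stream and fits a decision-tree meta-classifier; the chunk generator and the feature set are illustrative assumptions.

```python
# Assemble meta-examples (training-chunk traits + incoming-chunk traits
# -> best algorithm) and train a decision-tree meta-classifier on them.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(3)

def chunk(t, n=150):
    X = rng.uniform(-1, 1, (n, 4))
    y = (X[:, 0] * 3 - X[:, 1]) if (t // 4) % 2 == 0 else np.cos(4 * X[:, 0])
    return X, y + rng.normal(0, 0.1, n)

def traits(X, y=None):
    # Incoming chunks are unlabeled, so their traits use X only.
    base = [X.mean(), X.std()]
    return base + ([y.mean(), y.std()] if y is not None else [])

algos = [Ridge, KNeighborsRegressor]
meta = []
for t in range(40):
    Xp, yp = chunk(t)          # past (labeled) chunk
    Xi, yi = chunk(t + 1)      # incoming chunk; yi is only used for labeling
    errs = [mean_squared_error(yi, A().fit(Xp, yp).predict(Xi)) for A in algos]
    meta.append(traits(Xp, yp) + traits(Xi) + [int(np.argmin(errs))])

meta = np.array(meta)
Xm, ym = meta[:, :-1], meta[:, -1].astype(int)
Xm_tr, Xm_te, ym_tr, ym_te = train_test_split(Xm, ym, test_size=0.3, random_state=0)
meta_clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(Xm_tr, ym_tr)
print("meta-classifier holdout accuracy:", meta_clf.score(Xm_te, ym_te))
```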


World Congress on Information and Communication Technologies | 2011

Predicting execution time of machine learning tasks using metalearning

Rattan Priya; Bruno Feres de Souza; André L. D. Rossi; André Carlos Ponce Leon Ferreira de Carvalho

Lately, many academic and industrial fields have shifted research focus from data acquisition to data analysis. This transition has been facilitated by the usage of Machine Learning (ML) techniques to automatically identify patterns and extract non-trivial knowledge from data. The experimental procedures associated with that are usually complex and computationally demanding. Scheduling is a typical method used to decide how to allocate tasks to the available resources. An important step in this process is to estimate how long an application will take to execute. In this paper, we introduce an approach for predicting the processing time specifically of ML tasks. It employs a metalearning framework to relate characteristics of datasets and current machine state to actual execution time. An empirical study was conducted using 78 publicly available datasets, 6 ML algorithms and 4 meta-regressors. Experimental results show that our approach outperforms a commonly used baseline method. Statistical tests advise using SVMr as the meta-regressor. These achievements indicate the potential of metalearning to tackle the problem and encourage further developments.
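
A minimal version of the idea is sketched below: training times of one base learner are measured on datasets of varying size, simple meta-features describe each task, and an SVR meta-regressor is compared against a predict-the-mean baseline. The base learner, meta-features and baseline are assumptions made for illustration; machine-state features are omitted.

```python
# Runtime prediction via meta-learning: measure training times, then fit an
# SVR meta-regressor on simple meta-features of each task.
import time
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

meta_X, runtime = [], []
for n in (200, 500, 1000, 2000, 4000):
    for d in (10, 20, 40):
        X, y = make_classification(n_samples=n, n_features=d, random_state=0)
        t0 = time.perf_counter()
        RandomForestClassifier(n_estimators=30, random_state=0).fit(X, y)
        runtime.append(time.perf_counter() - t0)
        meta_X.append([n, d, n * d])          # cheap meta-features

Xm_tr, Xm_te, t_tr, t_te = train_test_split(np.array(meta_X), np.array(runtime),
                                            test_size=0.33, random_state=1)
meta_reg = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.01)).fit(Xm_tr, t_tr)

print("SVR meta-regressor MAE:", mean_absolute_error(t_te, meta_reg.predict(Xm_te)))
print("mean-baseline MAE     :", mean_absolute_error(t_te, np.full_like(t_te, t_tr.mean())))
```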


Hybrid Intelligent Systems | 2013

Predicting execution time of machine learning tasks for scheduling

Rattan Priya; Bruno Feres de Souza; André L. D. Rossi; André Carlos Ponce Leon Ferreira de Carvalho

Lately, many academic and industrial fields have shifted their research focus from data acquisition to data analysis. This transition has been facilitated by the usage of Machine Learning (ML) techniques to automatically identify patterns and extract non-trivial knowledge from data. The experimental procedures associated with that are usually complex and computationally demanding. To deal with this scenario, Distributed Heterogeneous Computing (DHC) systems can be employed. In order to fully benefit from DHC facilities, a suitable scheduling policy should be applied to decide how to allocate tasks to the available resources. An important step in this process is to estimate how long an application will take to execute. In this paper, we present an approach for predicting the execution time specifically of ML tasks. It employs a metalearning framework to relate characteristics of datasets and current machine state to actual execution time. An empirical study was conducted using 78 publicly available datasets, 6 ML algorithms and 4 meta-regressors. Experimental results show that our approach outperforms a commonly used baseline method. After establishing SVM as the most promising meta-regressor, we employed its predictions to build schedule plans. In a simulation considering a small-scale DHC environment, a simple Genetic Algorithm based scheduler was employed for task allocation, leading to minimized overall completion time. These achievements indicate the potential of meta-learning to tackle the problem and encourage further developments.
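
The scheduling step can be sketched as follows: given predicted runtimes for a batch of tasks and a few machines of different speeds, a small genetic algorithm searches for the task-to-machine assignment with the lowest makespan. Machine speeds, predicted times, the encoding and the GA settings are illustrative assumptions, not the simulated DHC environment used in the paper.

```python
# GA-based scheduling sketch: minimize the overall completion time (makespan)
# of a task-to-machine assignment, given predicted task runtimes.
import numpy as np

rng = np.random.default_rng(7)
pred_times = rng.uniform(1, 20, size=25)      # predicted runtimes of 25 tasks (s)
speeds = np.array([1.0, 1.5, 0.8, 2.0])       # relative speeds of 4 machines

def makespan(assign):
    # Completion time of the busiest machine under a given task-to-machine map.
    return max((pred_times[assign == m] / speeds[m]).sum() for m in range(len(speeds)))

def ga(pop_size=40, gens=80, mut_rate=0.1):
    pop = rng.integers(0, len(speeds), size=(pop_size, len(pred_times)))
    for _ in range(gens):
        fit = np.array([makespan(ind) for ind in pop])
        parents = pop[np.argsort(fit)[: pop_size // 2]]     # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, len(pred_times))          # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            mutate = rng.random(child.size) < mut_rate      # random reassignment
            child[mutate] = rng.integers(0, len(speeds), mutate.sum())
            children.append(child)
        pop = np.vstack([parents, children])
    return min(makespan(ind) for ind in pop)

random_plan = makespan(rng.integers(0, len(speeds), len(pred_times)))
print(f"random assignment makespan: {random_plan:.1f}s   GA makespan: {ga():.1f}s")
```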


Hybrid Artificial Intelligence Systems | 2012

Using genetic algorithms to improve prediction of execution times of ML tasks

Rattan Priya; Bruno Feres de Souza; André L. D. Rossi; André Carlos Ponce Leon Ferreira de Carvalho

Experimental procedures associated with Machine Learning (ML) techniques are usually computationally demanding. An important step for a conscientious allocation of ML tasks into resources is predicting their execution times. Previously, empirical comparisons using a Meta-learning framework indicated that Support Vector Machines (SVM) are suited for this problem; however, their performance is affected by the choice of parameter values and input features. In this paper, we tackle the issue by applying Genetic Algorithm (GA) to perform joint Feature Subset Selection (FSS) and Parameters Optimization (PO). At first, a GA is used for FSS+PO in SVMs with two kernel functions, independently. Later, besides FSS+PO an additional term is evolved to weight predictions of both models to build a combined regressor. An empirical investigation conducted for predicting execution times of 6 ML algorithms over 78 publicly available datasets unveils a higher accuracy when compared with the previous results.
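
A compact sketch of the joint encoding is shown below: each GA individual carries a binary feature mask plus (log10 C, log10 gamma) for an SVR, and fitness is the cross-validated error on the selected features. The dataset, operators and settings are assumptions; the additional weight the paper evolves to combine two kernel-specific models is omitted here.

```python
# GA for joint feature subset selection (FSS) and parameter optimization (PO):
# individual = binary feature mask + (log10 C, log10 gamma) of an SVR.
import numpy as np
from sklearn.datasets import make_friedman1
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

rng = np.random.default_rng(11)
X, y = make_friedman1(n_samples=200, n_features=15, noise=0.5, random_state=0)
n_feat = X.shape[1]

def fitness(ind):
    mask = ind[:n_feat] > 0.5
    if not mask.any():
        return 1e9                                   # require at least one feature
    C, gamma = 10 ** ind[n_feat], 10 ** ind[n_feat + 1]
    return -cross_val_score(SVR(C=C, gamma=gamma), X[:, mask], y, cv=3,
                            scoring="neg_mean_squared_error").mean()

pop = np.hstack([rng.integers(0, 2, (20, n_feat)).astype(float),
                 rng.uniform(-2, 2, (20, 2))])
for _ in range(25):
    fit = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(fit)[:10]]              # keep the best half
    mix = rng.random((10, n_feat + 2)) < 0.5         # uniform crossover
    children = np.where(mix, parents[rng.integers(10, size=10)],
                             parents[rng.integers(10, size=10)])
    flip = rng.random((10, n_feat)) < 0.1            # bit-flip mutation on the mask
    children[:, :n_feat] = np.where(flip, 1 - children[:, :n_feat], children[:, :n_feat])
    children[:, n_feat:] += rng.normal(0, 0.3, (10, 2))  # Gaussian step on parameters
    pop = np.vstack([parents, children])

best = pop[np.argmin([fitness(ind) for ind in pop])]
print("selected features:", np.where(best[:n_feat] > 0.5)[0])
print("C = %.3g, gamma = %.3g" % (10 ** best[n_feat], 10 ** best[n_feat + 1]))
```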


International Conference on Neural Information Processing | 2009

Bioinspired Parameter Tuning of MLP Networks for Gene Expression Analysis: Quality of Fitness Estimates vs. Number of Solutions Analysed

André L. D. Rossi; Carlos Soares; André Carlos Ponce Leon Ferreira de Carvalho

The values selected for the free parameters of Artificial Neural Networks usually have a high impact on their performance. As a result, several works investigate the use of optimization techniques, mainly metaheuristics, for the selection of values related to the network architecture, like the number of hidden neurons, the number of hidden layers and the activation function, and to the learning algorithm, like the learning rate, the momentum coefficient, etc. A large number of these works use Genetic Algorithms for parameter optimization. Lately, other bioinspired optimization techniques, like Ant Colony Optimization and Particle Swarm Optimization, among others, have been successfully used. Although bioinspired optimization techniques have been successfully adopted to tune neural network parameter values, little is known about the relation between the quality of the estimates of the fitness of a solution used during the search process and the quality of the solution obtained by the optimization method. In this paper, we describe an empirical study on this issue. To focus our analysis, we restricted the datasets to the domain of gene expression analysis. Our results indicate that, although the computational power saved by using simpler estimation methods can be used to increase the number of solutions tested in the search process, the use of accurate estimates to guide the search is the most important factor in obtaining good solutions.
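
The trade-off can be made concrete with a small experiment, sketched below: under a fixed budget of model fits, candidate MLP configurations are scored either by a cheap 2-fold estimate (allowing many candidates) or by a more reliable 5-fold estimate (allowing fewer), and the chosen configuration is judged on a held-out set. Random search stands in for the bio-inspired optimizers and a synthetic dataset replaces the gene expression data; both are assumptions.

```python
# Fixed budget of model fits: many candidates with a cheap fitness estimate
# versus fewer candidates with a more reliable one.
import warnings
import numpy as np
from sklearn.datasets import make_classification
from sklearn.exceptions import ConvergenceWarning
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.neural_network import MLPClassifier

warnings.simplefilter("ignore", ConvergenceWarning)
X, y = make_classification(n_samples=300, n_features=30, n_informative=10, random_state=0)
X_dev, X_test, y_dev, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
rng = np.random.default_rng(0)

def sample_config():
    # Two illustrative MLP parameters: hidden layer size and learning rate.
    return {"hidden_layer_sizes": (int(rng.integers(5, 50)),),
            "learning_rate_init": float(10 ** rng.uniform(-4, -1))}

def search(budget_fits, folds):
    # Spend the same total number of fits on budget_fits // folds candidates.
    best_cfg, best_score = None, -np.inf
    for _ in range(budget_fits // folds):
        cfg = sample_config()
        score = cross_val_score(MLPClassifier(max_iter=200, random_state=0, **cfg),
                                X_dev, y_dev, cv=folds).mean()
        if score > best_score:
            best_cfg, best_score = cfg, score
    final = MLPClassifier(max_iter=200, random_state=0, **best_cfg).fit(X_dev, y_dev)
    return final.score(X_test, y_test)

print("cheap estimate, many candidates  :", search(budget_fits=40, folds=2))
print("better estimate, fewer candidates:", search(budget_fits=40, folds=5))
```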


International Conference on Hybrid Intelligent Systems | 2008

Bio-Inspired Parameter Tuning of MLP Networks for Gene Expression Analysis

André L. D. Rossi; André Carlos Ponce Leon Ferreira de Carvalho; Carlos Soares

The performance of artificial neural networks is largely influenced by the values of their parameters. Among these free parameters, one can mention those related to the network architecture, e.g., the number of hidden neurons, the number of hidden layers and the activation function, and those associated with the learning algorithm, e.g., the learning rate. Optimization techniques, often genetic algorithms, have been used to tune neural network parameter values. Lately, other techniques inspired by biology have been investigated. In this paper, we compare the influence of different bio-inspired optimization techniques on the accuracy obtained by the networks in the domain of gene expression analysis. The experimental results show the potential of using these techniques for the parameter tuning of neural networks.

Collaboration


Dive into André L. D. Rossi's collaborations.

Top Co-Authors

Rattan Priya (Indira Gandhi Institute of Technology)

Taciana A. F. Gomes (Federal University of Pernambuco)

A. de Carvalho (University of São Paulo)

Márcio P. Basgalupp (Federal University of São Paulo)