Teresa Bernarda Ludermir
Federal University of Pernambuco
Publication
Featured research published by Teresa Bernarda Ludermir.
IEEE Transactions on Neural Networks | 2006
Teresa Bernarda Ludermir; Akio Yamazaki; Cleber Zanchettin
This paper introduces a methodology for neural network global optimization. The aim is the simultaneous optimization of multilayer perceptron (MLP) network weights and architectures, in order to generate topologies with few connections and high classification performance for any data set. The approach combines the advantages of simulated annealing, tabu search and the backpropagation training algorithm to produce an automatic process for generating networks with high classification performance and low complexity. Experimental results obtained on four classification problems and one prediction problem were better than those obtained by the most commonly used optimization techniques.
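The annealing component of such hybrids can be illustrated with a minimal, generic sketch (not the paper's actual algorithm; `cost` and `perturb` are placeholder hooks standing in for an MLP error function and a weight/architecture perturbation):

```python
import math
import random

def anneal(cost, w0, perturb, t0=1.0, cooling=0.95, steps=200, seed=0):
    """Generic simulated annealing: accept a worse candidate with
    probability exp(-delta / T) and cool T geometrically each step."""
    rng = random.Random(seed)
    w, c = w0, cost(w0)
    best_w, best_c = w, c
    t = t0
    for _ in range(steps):
        cand = perturb(w, rng)          # propose a neighbouring solution
        cand_c = cost(cand)
        delta = cand_c - c
        # always accept improvements; accept worse moves with prob exp(-delta/t)
        if delta < 0 or rng.random() < math.exp(-delta / t):
            w, c = cand, cand_c
            if c < best_c:              # track the best solution seen so far
                best_w, best_c = w, c
        t *= cooling                    # lower the temperature
    return best_w, best_c
```

In the paper's setting, the candidate would encode both weights and topology, and backpropagation would refine the final solution; here the loop is shown on a toy one-dimensional cost for clarity.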
Neurocomputing | 2008
André Luis Santiago Maia; Francisco de A. T. de Carvalho; Teresa Bernarda Ludermir
This paper presents approaches to interval-valued time series forecasting. The first and second approaches are based on the autoregressive (AR) and autoregressive integrated moving average (ARIMA) models, respectively. The third approach is based on an artificial neural network (ANN) model and the last is based on a hybrid methodology that combines the ARIMA and ANN models. Each approach fits two models, one on the mid-point and one on the range of the interval values assumed by the interval-valued time series in the learning set. The lower and upper bounds of the interval value of the time series are then forecast by combining the forecasts of the mid-point and range of the interval values. The evaluation of the models presented is based on the estimation of the average behavior of the mean absolute error and mean squared error in the framework of a Monte Carlo experiment. The results demonstrate that the approaches are useful forecasting alternatives for interval-valued time series and indicate that the hybrid model is an effective way to improve the forecasting accuracy achieved by any one of the models separately.
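The mid-point/range decomposition the approaches share can be sketched as follows (a hypothetical helper for illustration, not code from the paper):

```python
def interval_bounds(mid_forecast, range_forecast):
    """Recover the lower/upper bounds of an interval-valued forecast
    from separate mid-point and range forecasts: the range is split
    symmetrically, half on each side of the mid-point."""
    half = range_forecast / 2.0
    return mid_forecast - half, mid_forecast + half
```

For example, a mid-point forecast of 10.0 with a range forecast of 4.0 yields the interval [8.0, 12.0]; any of the four models (AR, ARIMA, ANN, hybrid) can supply the two component forecasts.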
Neurocomputing | 2004
Ricardo Bastos Cavalcante Prudêncio; Teresa Bernarda Ludermir
We present an original work that applies meta-learning approaches to select models for time-series forecasting. We investigated two meta-learning approaches, each used in a different case study. First, we used a single machine learning algorithm to select between two models for forecasting stationary time series (case study I). Next, we used the NOEMON approach, a more recent development in the meta-learning area, to rank three models used to forecast time series of the M3-Competition (case study II). The experiments performed in both case studies revealed encouraging results.
Neurocomputing | 2010
Leandro M. Almeida; Teresa Bernarda Ludermir
The use of artificial neural networks implies considerable time spent choosing a set of parameters that contribute toward improving the final performance. Initial weights, the number of hidden nodes and layers, training algorithm rates and transfer functions are normally selected through a manual process of trial and error that often fails to find the best possible set of neural network parameters for a specific problem. This paper proposes an automatic search methodology for the optimization of the parameters and performance of neural networks, relying on Evolution Strategies, Particle Swarm Optimization and concepts from Genetic Algorithms in a hybrid, global search module. A local search module complements it with the well-known multilayer perceptron trained by the back-propagation and Levenberg-Marquardt algorithms. The proposed methodology searches over the aforementioned parameters in an attempt to optimize network performance. Experiments showed the proposed method to be better than trial and error and other methods found in the literature.
international symposium on neural networks | 2008
M.C.P. de Souto; Ricardo Bastos Cavalcante Prudêncio; Rodrigo G. F. Soares; D.S.A. de Araujo; Ivan G. Costa; Teresa Bernarda Ludermir; Alexander Schliep
We present a novel framework that applies a meta-learning approach to clustering algorithms. Given a dataset, our meta-learning approach provides a ranking of the candidate algorithms that could be used with that dataset. This ranking could, among other things, support non-expert users in the algorithm selection task. In order to evaluate the proposed framework, we implement a prototype that employs regression support vector machines as the meta-learner. Our case study is developed in the context of cancer gene expression microarray datasets.
Neural Computing and Applications | 2011
Gecynalda Soares da Silva Gomes; Teresa Bernarda Ludermir; Leyla M. M. R. Lima
In artificial neural networks (ANNs), the activation functions most used in practice are the logistic sigmoid and the hyperbolic tangent. Activation functions are known to play an important role in the convergence of learning algorithms. In this paper, we evaluate the use of different activation functions and suggest three new simple functions, complementary log-log, probit and log-log, as activation functions to improve the performance of neural networks. Financial time series were used to evaluate the performance of ANN models with these new activation functions and to compare it with that of activation functions existing in the literature. The evaluation is performed with two learning algorithms: conjugate gradient backpropagation with Fletcher–Reeves updates and Levenberg–Marquardt.
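The three proposed functions have simple closed forms; a sketch in Python, assuming the usual statistical definitions (complementary log-log as 1 - exp(-exp(x)), log-log as the Gumbel CDF, probit as the standard normal CDF):

```python
import math

def logsig(x):
    """Logistic sigmoid, shown for comparison."""
    return 1.0 / (1.0 + math.exp(-x))

def cloglog(x):
    """Complementary log-log activation: 1 - exp(-exp(x))."""
    return 1.0 - math.exp(-math.exp(x))

def loglog(x):
    """Log-log activation (Gumbel CDF): exp(-exp(-x))."""
    return math.exp(-math.exp(-x))

def probit(x):
    """Probit activation: standard normal CDF, computed via erf."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
```

Unlike the symmetric sigmoid, the complementary log-log and log-log curves are asymmetric around zero (cloglog(0) ≈ 0.632, loglog(0) ≈ 0.368), which is part of what distinguishes them as activation choices.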
international conference on hybrid intelligent systems | 2007
Marcio Carvalho; Teresa Bernarda Ludermir
The optimization of the architecture and weights of feedforward neural networks is a complex task of great importance in supervised learning problems. In this work we analyze the use of the particle swarm optimization algorithm to optimize neural network architectures and weights, aiming at better generalization performance through a compromise between low architectural complexity and low training error. To evaluate these algorithms, we apply them to benchmark classification problems from the medical field. The results showed that a PSO-PSO-based approach is a valid alternative for optimizing the weights and architectures of MLP neural networks.
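A minimal sketch of the standard PSO velocity/position update for a flat weight vector (a generic illustration, not the paper's nested PSO-PSO scheme; the inertia and acceleration coefficients are conventional defaults, not values from the paper):

```python
import random

def pso_step(positions, velocities, pbest, gbest, cost,
             w=0.7, c1=1.5, c2=1.5, rng=random):
    """One swarm update: each particle is a flat weight vector; the new
    velocity blends inertia, a pull toward the particle's personal best,
    and a pull toward the swarm's global best."""
    for i, (x, v) in enumerate(zip(positions, velocities)):
        for d in range(len(x)):
            r1, r2 = rng.random(), rng.random()
            v[d] = (w * v[d]
                    + c1 * r1 * (pbest[i][d] - x[d])
                    + c2 * r2 * (gbest[d] - x[d]))
            x[d] += v[d]
        if cost(x) < cost(pbest[i]):    # update personal best
            pbest[i] = list(x)
    return min(pbest, key=cost)          # new global best
```

In the weight-optimization setting, `cost` would be the network's training error (possibly penalized by architectural complexity, per the compromise described above).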
Pattern Recognition Letters | 2004
Ricardo Bastos Cavalcante Prudêncio; Teresa Bernarda Ludermir; Francisco de A. T. de Carvalho
The selection of a good model for forecasting a time series is a task that involves experience and knowledge. Employing machine learning algorithms is a promising approach to acquiring knowledge regarding this task. A supervised classification method originating from the symbolic data analysis field is proposed for the model selection problem. This method was applied to the task of selecting between two widespread models and compared to other learning algorithms. To date, it has obtained the lowest classification errors among all the tested algorithms.
congress on evolutionary computation | 2011
Danielle N. G. Silva; Luciano D. S. Pacifico; Teresa Bernarda Ludermir
Extreme learning machine (ELM) was proposed as a new class of learning algorithm for single-hidden-layer feedforward neural networks (SLFNs) that is much faster than traditional gradient-based learning strategies. However, ELM's random determination of the input weights and hidden biases may lead to non-optimal performance, and it may suffer from overfitting, since the learning model approximates all training samples well. In this paper, a hybrid approach based on the Group Search Optimizer (GSO) strategy, called GSO-ELM, is proposed to select the input weights and hidden biases for the ELM algorithm. In addition, we evaluate the influence of different ways of handling members that fly out of the search-space bounds. Experimental results show that the GSO-ELM approach, with its different ways of dealing with out-of-bounds members, achieves better generalization performance than traditional ELM on real benchmark datasets.
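The baseline ELM the paper improves on draws input weights and hidden biases at random and solves the output weights in closed form; a minimal sketch assuming a sigmoid hidden layer and a least-squares solution via the Moore-Penrose pseudoinverse (function names are illustrative, not from the paper):

```python
import numpy as np

def elm_train(X, y, n_hidden, seed=0):
    """Basic ELM: random input weights W and biases b stay fixed;
    only the output weights beta are fitted, in one linear solve."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))   # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ y             # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta
```

GSO-ELM replaces the single random draw of `W` and `b` with a Group Search Optimizer over candidate draws; the closed-form solve for `beta` is what keeps every candidate evaluation cheap.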
Neurocomputing | 2012
Adenilton J. da Silva; Wilson Rosa de Oliveira; Teresa Bernarda Ludermir
A supervised learning algorithm for quantum neural networks (QNNs), based on a novel quantum neuron node implemented as a very simple quantum circuit, is proposed and investigated. In contrast to the QNNs published in the literature, the proposed model can both perform quantum learning and simulate the classical models. This is partly because the neuron models used elsewhere have weights and non-linear activation functions. Here, a quantum weightless neural network model is proposed as a quantisation of classical weightless neural networks (WNNs). Theoretical and practical results on WNNs can be inherited by these quantum weightless neural networks (qWNNs). In the quantum learning algorithm proposed here, the patterns of the training set are presented concurrently in superposition. This superposition-based learning algorithm (SLA) has computational cost polynomial in the number of patterns in the training set.
Collaboration
Marcílio Carlos Pereira de Souto
Federal University of Rio Grande do Norte