João Paulo Pordeus Gomes
Federal University of Ceará
Publications
Featured research published by João Paulo Pordeus Gomes.
Applied Soft Computing | 2016
Alisson S. C. Alencar; Ajalmar R. da Rocha Neto; João Paulo Pordeus Gomes
Highlights: We propose a pruning method for ELM using genetic algorithms (GA). Our proposal estimates the leave-one-out (LOO) error using the PRESS statistic. Our proposal, called GAP-ELM, was tested on 7 real-world datasets. GAP-ELM was compared with MLP and RBF neural networks and showed competitive results.
Extreme learning machine (ELM) is a recently proposed learning algorithm for single hidden layer feedforward neural networks (SLFN) that has achieved remarkable performance in various applications. In ELM, the hidden neurons are randomly assigned and the output layer weights are learned in a single step using the Moore-Penrose generalized inverse. This approach results in a fast learning neural network algorithm with a single hyperparameter (the number of hidden neurons). Despite the aforementioned advantages, using ELM can result in models with a large number of hidden neurons, which can lead to poor generalization. To overcome this drawback, we propose a novel method to prune hidden layer neurons based on genetic algorithms (GA). The proposed approach, referred to as GAP-ELM, selects a subset of the hidden neurons to optimize a multiobjective fitness function that defines a compromise between accuracy and the number of pruned neurons. The performance of GAP-ELM is assessed on several real-world datasets and compared to other SLFNs and a well-known pruning method called Optimally Pruned ELM (OP-ELM). On the basis of our experiments, we can state that GAP-ELM is a valid alternative for classification tasks.
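For context on the training step the abstract refers to (random hidden layer, output weights obtained in one step via the Moore-Penrose generalized inverse), a minimal ELM sketch in Python follows; function and variable names are illustrative, and the GA-based pruning and PRESS-based LOO estimate are not reproduced here.

    import numpy as np

    def elm_train(X, y, n_hidden=50, seed=0):
        """Basic ELM sketch: random hidden-layer weights, output weights
        computed in a single step with the Moore-Penrose pseudoinverse."""
        rng = np.random.default_rng(seed)
        W = rng.standard_normal((X.shape[1], n_hidden))   # random input weights
        b = rng.standard_normal(n_hidden)                 # random biases
        H = np.tanh(X @ W + b)                            # hidden-layer output matrix
        beta = np.linalg.pinv(H) @ y                      # output weights (single step)
        return W, b, beta

    def elm_predict(X, W, b, beta):
        return np.tanh(X @ W + b) @ beta

A pruning method such as GAP-ELM would then search over subsets of the n_hidden columns of H, trading accuracy against the number of neurons kept.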
International Work-Conference on Artificial and Natural Neural Networks | 2015
Diego Parente Paiva Mesquita; João Paulo Pordeus Gomes; Amauri Holanda de Souza Júnior
The use of ensemble methods for pattern classification has gained attention in recent years, mainly due to their improvements in classification rates. This paper evaluates ensemble learning methods using the Minimal Learning Machine (MLM), a recently proposed supervised learning algorithm. Additionally, we introduce an alternative output estimation procedure to reduce the complexity of the standard MLM. The proposed methods are evaluated on real datasets and compared to several state-of-the-art classification algorithms.
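As background for the MLM-based ensembles above, the MLM training step is a linear regression between pairwise distance matrices; a minimal sketch with illustrative names follows (the output estimation step, which recovers the output from its estimated distances, is omitted).

    import numpy as np
    from scipy.spatial.distance import cdist

    def mlm_train(X, Y, R_x, R_y):
        """MLM training sketch: regress output-space distances on input-space
        distances to a set of reference points (R_x, R_y); all inputs are 2-D arrays."""
        Dx = cdist(X, R_x)          # distances from inputs to input reference points
        Dy = cdist(Y, R_y)          # distances from outputs to output reference points
        B, *_ = np.linalg.lstsq(Dx, Dy, rcond=None)
        return B                    # linear map between the two distance matrices

At prediction time, B maps the distances of a new input to the reference points into estimated output-space distances, from which the output is recovered by a small optimization (multilateration) step; that step is the computational bottleneck the alternative output estimation procedure aims to reduce.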
IEEE Aerospace Conference | 2010
Bruno Paes Leao; João Paulo Pordeus Gomes; Roberto Kawakami Harrop Galvão; Takashi Yoneyama
This paper describes a novel approach for evaluating the performance of failure prognostics solutions in the context of Prognostics and Health Management (PHM). The method is based on the use of probability integral transforms. It provides an easy way to compare different prognostics algorithms based on information from the actual times of observed failure events and the corresponding estimated probability densities resulting from the prognostics algorithms. The resulting information can be translated into an informative graphical form or a numerical index to compare the different solutions. The usefulness of the proposed methodology is illustrated with a sample application.
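A rough illustration of the probability integral transform idea: evaluate each algorithm's predicted failure-time CDF at the actual failure time and check how uniform the resulting values are. The numbers and distributions below are hypothetical, and the paper's specific graphical form and numerical index are not reproduced.

    import numpy as np
    from scipy import stats

    def pit_values(actual_failure_times, predicted_cdfs):
        """Evaluate each predicted failure-time CDF at the observed failure time."""
        return np.array([cdf(t) for t, cdf in zip(actual_failure_times, predicted_cdfs)])

    # Hypothetical forecasts (Gaussian failure-time densities) and observed failures.
    pits = pit_values([120.0, 95.0, 143.0],
                      [stats.norm(110, 15).cdf, stats.norm(100, 10).cdf, stats.norm(150, 20).cdf])
    # Well-calibrated prognostics yield PIT values close to uniform on [0, 1];
    # a Kolmogorov-Smirnov statistic gives one possible numerical summary.
    ks_stat, p_value = stats.kstest(pits, "uniform")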
IEEE Aerospace Conference | 2010
Leonardo Ramos Rodrigues; João Paulo Pordeus Gomes; Cintia de Oliveira Bizarria; Roberto Kawakami Harrop Galvão; Takashi Yoneyama
Aircraft operators, manufacturers and maintenance teams have shown growing interest in prognostic systems to maximize equipment useful life and minimize operational and maintenance costs over the aircraft's service life. Cost-benefit models are the key to demonstrating the value of prognostics and health monitoring (PHM) technology. In cost-benefit simulation, the decision-making process of choosing one maintenance opportunity or another shall consider the risks and costs associated with each possible choice. If probability density functions (PDF) associated with a prognostic system forecast are available, they must also be taken into account. Although many cost-benefit models have been proposed, cost information has not been used in the choice of maintenance opportunity. The purpose of this work is to explore potential benefits of decision analysis techniques in choosing the best maintenance opportunity when PHM information is available. A simple example is presented in which the maintenance opportunity must be chosen based on a cost avoidance criterion.
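To make the decision-analysis idea concrete, a deliberately simplified sketch follows: choose the maintenance opportunity with the lowest expected cost, combining the prognostic failure-time distribution with scheduled and unscheduled maintenance costs. All numbers, names and the two-cost model itself are assumptions for illustration, not the paper's cost-benefit model.

    import numpy as np
    from scipy import stats

    def expected_cost(t_maint, failure_time_dist, c_scheduled, c_unscheduled):
        """If the component fails before the chosen opportunity we pay the
        unscheduled cost, otherwise the scheduled one."""
        p_fail_before = failure_time_dist.cdf(t_maint)
        return p_fail_before * c_unscheduled + (1.0 - p_fail_before) * c_scheduled

    # Hypothetical prognostic forecast and candidate opportunities (flight hours).
    failure_dist = stats.norm(loc=500, scale=60)
    opportunities = [350, 420, 480]
    costs = [expected_cost(t, failure_dist, c_scheduled=10_000, c_unscheduled=50_000)
             for t in opportunities]
    best_opportunity = opportunities[int(np.argmin(costs))]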
IEEE Aerospace Conference | 2009
Bruno Paes Leao; João Paulo Pordeus Gomes; Roberto Kawakami Harrop Galvão; Takashi Yoneyama
Statistical Process Control (SPC) techniques have been successfully applied to a great variety of industrial processes. This paper presents the use of these techniques for the monitoring of the health state of aircraft flap and slat systems. Different approaches to univariate SPC and multivariate SPC (MSPC) are compared. The MSPC methods considered are Hotelling's T2 and Runger's U2. Results of the application of the methodology to simulated aircraft data are presented for illustration and validation of the proposed health monitoring techniques.
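For reference, Hotelling's T2 measures the Mahalanobis-type distance of each observation from the in-control mean estimated on healthy data; a short sketch with illustrative names and simulated numbers follows.

    import numpy as np

    def hotelling_t2(X, mu, S):
        """Hotelling's T2 for each row of X, given the in-control mean vector mu
        and covariance matrix S estimated from healthy reference data."""
        diff = X - mu
        return np.einsum("ij,jk,ik->i", diff, np.linalg.inv(S), diff)

    # Healthy data define the in-control statistics; new data are then monitored.
    rng = np.random.default_rng(0)
    healthy = rng.normal(size=(200, 4))
    mu, S = healthy.mean(axis=0), np.cov(healthy, rowvar=False)
    t2_monitored = hotelling_t2(rng.normal(size=(10, 4)), mu, S)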
IEEE Transactions on Reliability | 2016
João Paulo Pordeus Gomes; Roberto Kawakami Harrop Galvão; Takashi Yoneyama; Bruno Paes Leao
This paper presents a new method for combining measured parameters into a single indicator for monitoring the condition of systems subjected to degradation effects. The proposed approach integrates the use of nonparametric density estimation techniques into Runger's U2 method, which allows for the separation of variables that are directly related to degradation effects from those which are not. Two simulated case studies are presented for illustration, namely, the monitoring of a flap extension and retraction system, and a gas turbine employed as an auxiliary power unit. For comparison, degradation indicators are also calculated by using Hotelling's T2 and Runger's U2 methods, as well as a nonparametric method without separation of variables. In both case studies, the proposed method provided the best results in terms of fault detection performance and suitability for remaining useful life prediction.
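One common formulation of Runger's U2, which underlies the variable separation described above, is the difference between the full Hotelling's T2 and the T2 computed on the variables not related to degradation. The sketch below illustrates that parametric form only; the paper's nonparametric density estimation extension is not reproduced, and the names are illustrative.

    import numpy as np

    def t2(X, mu, S):
        d = X - mu
        return np.einsum("ij,jk,ik->i", d, np.linalg.inv(S), d)

    def runger_u2(X, degradation_idx, mu, S):
        """U2 sketch: full T2 minus the T2 of the non-degradation variables,
        isolating the contribution of the degradation-related variables."""
        other = [j for j in range(X.shape[1]) if j not in degradation_idx]
        return t2(X, mu, S) - t2(X[:, other], mu[other], S[np.ix_(other, other)])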
Brazilian Conference on Intelligent Systems | 2015
Alisson S. C. Alencar; Weslley L. Caldas; João Paulo Pordeus Gomes; Amauri H. de Souza; Paulo A. C. Aguilar; Cristiano Rodrigues; Wellington Franco; Miguel Franklin de Castro; Rossana M. C. Andrade
Ranking is an important task in information retrieval and has gained much attention in recent years. Among the most used strategies, machine learning has achieved important results. The current work proposes a new machine-learning-based ranking algorithm, MLM-RANK. MLM-RANK is based on the recently proposed Minimal Learning Machine (MLM). MLM is a supervised learning method that requires the adjustment of a single hyperparameter. The proposed method was evaluated against Prank and ELM Rank, both state-of-the-art pointwise ranking methods. In these tests, MLM-RANK achieved promising results.
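As a rough illustration of the pointwise ranking setting: a regressor is fit from item features to relevance labels, and items are ranked by their predicted scores. The plain least-squares model below merely stands in for the MLM used in the paper; the names are illustrative.

    import numpy as np

    def fit_pointwise_ranker(X, relevance):
        """Pointwise ranking sketch: regress relevance labels on item features."""
        Xb = np.hstack([X, np.ones((len(X), 1))])        # add a bias term
        w, *_ = np.linalg.lstsq(Xb, relevance, rcond=None)
        return w

    def rank_items(w, X):
        scores = np.hstack([X, np.ones((len(X), 1))]) @ w
        return np.argsort(-scores)                       # best-scored items first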
Applied Soft Computing | 2017
Diego Parente Paiva Mesquita; João Paulo Pordeus Gomes; Leonardo Ramos Rodrigues; Saulo A. F. Oliveira; Roberto Kawakami Harrop Galvão
Randomization-based methods for training neural networks have gained increasing attention in recent years and achieved remarkable performances on a wide variety of tasks. The interest in such methods relies on the fact that standard gradient-based learning algorithms may often converge to local minima and are usually time consuming. Despite the good performance achieved by Randomization Based Neural Networks (RNNs), the random feature mapping procedure may generate redundant information, leading to suboptimal solutions. To overcome this problem, some strategies have been used, such as feature selection, hidden neuron pruning and ensemble methods. Feature selection methods discard redundant information from the original dataset. Pruning methods eliminate hidden nodes with redundant information. Ensemble methods combine multiple models to generate a single one. Selective ensemble methods select a subset of all available models to generate the final model. In this paper, we propose a selective ensemble of RNNs based on the Successive Projections Algorithm (SPA), for regression problems. The proposed method, named Selective Ensemble of RNNs using the Successive Projections Algorithm (SERS), employs the SPA for three distinct tasks: feature selection, pruning and ensemble selection. SPA was originally developed as a feature selection technique and has been recently employed for RNN pruning. Herein, we show that it can also be employed for ensemble selection. The proposed framework was used to develop three selective ensemble models based on three RNNs: Extreme Learning Machines (ELM), Feedforward Neural Network with Random Weights (FNNRW) and Random Vector Functional Link (RVFL). The performances of SERS-ELM, SERS-FNNRW and SERS-RVFL were assessed in terms of model accuracy and model complexity on several real-world benchmark problems. Comparisons to related methods showed that SERS variants achieved similar accuracies with significant model complexity reduction. Among the proposed models, SERS-RVFL had the best accuracies and all variants had similar model complexities.
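As background, the core of the Successive Projections Algorithm is a greedy, collinearity-minimizing column selection: at each step the candidate column whose projection onto the orthogonal complement of the already-selected columns has the largest norm is chosen. A minimal sketch follows (illustrative names; in SERS the same kind of routine would be applied to features, hidden neurons and ensemble member outputs).

    import numpy as np

    def spa_select(X, n_select, start=0):
        """Successive Projections Algorithm sketch over the columns of X."""
        selected = [start]
        P = X.astype(float).copy()
        for _ in range(n_select - 1):
            v = P[:, selected[-1]]
            # Deflate: project every column onto the orthogonal complement of v.
            P = P - np.outer(v, v @ P) / (v @ v)
            norms = np.linalg.norm(P, axis=0)
            norms[selected] = -np.inf        # never re-select a column
            selected.append(int(np.argmax(norms)))
        return selected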
International Conference on Neural Information Processing | 2015
Diego Parente Paiva Mesquita; João Paulo Pordeus Gomes; Amauri H. Souza
Minimal Learning Machine (MLM) is a recently proposed supervised learning algorithm with a simple implementation and few hyperparameters. Learning an MLM model consists of building a linear mapping between input and output distance matrices. In this work, the standard MLM is modified to deal with missing data. For that, the expected squared distance approach is used to compute the input space distance matrix. The proposed approach showed promising results when compared to standard strategies that deal with missing data.
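The expected squared distance admits a simple closed form when the missing entries of each vector are modeled as independent random variables with known per-dimension means and variances (zero variance for observed entries): the expectation adds the variances to the usual squared difference of means. A hedged sketch with hypothetical numbers:

    import numpy as np

    def expected_sq_dist(mu_x, var_x, mu_y, var_y):
        """E[||x - y||^2] under independent per-dimension uncertainty:
        sum of (mean difference)^2 plus both variances in each dimension."""
        return np.sum((mu_x - mu_y) ** 2 + var_x + var_y)

    # Hypothetical 3-d example: the second entry of x is missing and modeled
    # with mean 0.4 and variance 0.25; all other entries are fully observed.
    mu_x, var_x = np.array([1.0, 0.4, -0.2]), np.array([0.0, 0.25, 0.0])
    mu_y, var_y = np.array([0.8, 1.1, 0.0]), np.zeros(3)
    d2 = expected_sq_dist(mu_x, var_x, mu_y, var_y)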
Neural Processing Letters | 2017
Diego Parente Paiva Mesquita; João Paulo Pordeus Gomes; Amauri Holanda de Souza Júnior
Minimal Learning Machine (MLM) is a recently proposed supervised learning algorithm with performance comparable to most state-of-the-art machine learning methods. In this work, we propose ensemble methods for classification and regression using MLMs. The goal of ensemble strategies is to produce more robust and accurate models when compared to a single classifier or regression model. Despite its successful application, MLM employs a computationally intensive optimization problem as part of its test procedure (out-of-sample data estimation). This becomes even more noticeable in the context of ensemble learning, where multiple models are used. Aiming to provide fast alternatives to the standard MLM, we also propose the Nearest Neighbor Minimal Learning Machine and the Cubic Equation Minimal Learning Machine to cope with classification and single-output regression problems, respectively. The experimental assessment conducted on real-world datasets reports that ensembles of fast MLMs perform comparably or superiorly to reference machine learning algorithms.
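To give a flavor of the cubic-equation idea for single-output regression: the standard MLM output estimate minimizes J(y) = sum_k ((y - t_k)^2 - d_k^2)^2, where t_k are reference output values and d_k the output-space distances predicted by the linear mapping; setting dJ/dy to zero yields a cubic polynomial in y whose real roots can be found directly. The sketch below is an assumed reconstruction of that idea, not the paper's implementation.

    import numpy as np

    def cubic_mlm_output(t, d):
        """Pick the real root of dJ/dy = 0 (a cubic in y) that minimizes
        J(y) = sum_k ((y - t_k)^2 - d_k^2)^2."""
        # Coefficients of the cubic in y (constant factor 4 dropped).
        coeffs = [len(t),
                  -3.0 * np.sum(t),
                  np.sum(3.0 * t**2 - d**2),
                  np.sum(t * d**2 - t**3)]
        roots = np.roots(coeffs)
        real_roots = roots[np.isclose(roots.imag, 0)].real
        J = lambda y: np.sum(((y - t) ** 2 - d**2) ** 2)
        return min(real_roots, key=J)

    # Hypothetical reference outputs and predicted output-space distances.
    y_hat = cubic_mlm_output(np.array([1.0, 2.0, 3.5]), np.array([0.8, 0.4, 1.6]))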