Benjamin Rodrigues de Menezes
Universidade Federal de Minas Gerais
Publications
Featured research published by Benjamin Rodrigues de Menezes.
Neurocomputing | 2007
Marcelo Azevedo Costa; Antônio de Pádua Braga; Benjamin Rodrigues de Menezes
A variation of the well-known Levenberg-Marquardt algorithm for training neural networks is proposed in this work. The algorithm restricts the norm of the weight vector to a pre-established value and finds the minimum-error solution for that norm. The norm constraint controls the neural network's degrees of freedom: the larger the norm, the more flexible the neural model, and hence the more closely it fits the training set. A range of solutions with different norms is generated, and the solution with the best generalization is selected according to the validation-set error. The results show the efficiency of the algorithm in terms of generalization performance.
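The norm-bounded search can be sketched as projected gradient descent on a toy linear model: train under several weight-norm bounds, then pick the bound with the lowest validation error. This is only an illustration of the idea; the paper's actual method is a modified Levenberg-Marquardt, and all data and bound values below are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data split into training and validation sets.
X = rng.normal(size=(80, 5))
true_w = np.array([1.5, -2.0, 0.5, 0.0, 0.0])
y = X @ true_w + 0.3 * rng.normal(size=80)
Xtr, ytr, Xval, yval = X[:60], y[:60], X[60:], y[60:]

def fit_with_norm_bound(X, y, bound, steps=1000, lr=0.05):
    """Gradient descent with the weight vector projected onto ||w|| <= bound."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
        norm = np.linalg.norm(w)
        if norm > bound:          # project back onto the norm ball
            w *= bound / norm
    return w

# Sweep a range of norm bounds; a larger bound means a more flexible model.
bounds = [0.5, 1.0, 2.0, 4.0, 8.0]
solutions = {b: fit_with_norm_bound(Xtr, ytr, b) for b in bounds}
val_err = {b: np.mean((Xval @ w - yval) ** 2) for b, w in solutions.items()}

# Model selection: keep the solution with the best validation error.
best = min(val_err, key=val_err.get)
print(f"best norm bound: {best}, validation MSE: {val_err[best]:.3f}")
```

Tightly bounded solutions underfit, so the validation sweep typically prefers a bound near or above the norm of the underlying solution.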
Neurocomputing | 2003
Marcelo Azevedo Costa; Antônio de Pádua Braga; Benjamin Rodrigues de Menezes; Roselito de Albuquerque Teixeira; Gustavo Guimarães Parma
This paper presents a new sliding mode control algorithm that is able to guide the trajectory of a multi-layer perceptron within the plane formed by two objective functions: training-set error and the norm of the weight vectors. The results show that the neural networks obtained generate an approximation to the Pareto set, from which a model with improved generalization performance is selected.
Applied Soft Computing | 2011
C. A. Laurentys; Carlos H. de M. Bomfim; Benjamin Rodrigues de Menezes; Walmir M. Caminhas
Pipeline leakage detection is a requirement from governmental and environmental agencies that companies must comply with. Because high accuracy is required in detecting leakage, procedures must be established to achieve leading performance. This paper describes a methodology for configuring instrumentation systems that meets this legal requirement while maintaining high reliability under both normal and failure operating conditions. To achieve this, the paper proposes a set of models acting as expert systems, each one observing and diagnosing pipeline leakage in real time. The proposed system also validates operations according to the business rules applied to it. A set of techniques is combined so that the system can perform its function: fuzzy logic, neural networks, genetic algorithms, and statistical analysis. The proposed methodology is in operation supervising a pipeline at a Brazilian petroleum installation.
Neurocomputing | 2007
A. Nied; Seleme I. Seleme; Gustavo Guimarães Parma; Benjamin Rodrigues de Menezes
This paper presents a new algorithm for on-line artificial neural network (ANN) training. The network topology is a standard multilayer perceptron (MLP), and the training algorithm is based on the theory of variable structure systems (VSS) and sliding mode control (SMC). The main feature of this novel procedure is the adaptability of the gain (learning rate), which is obtained from the sliding mode surface so that system stability is guaranteed.
International Journal of Neural Systems | 1999
Gustavo Guimarães Parma; Benjamin Rodrigues de Menezes; Antônio de Pádua Braga
Based on the classical backpropagation weight update equations, sliding mode control theory is introduced as a technique to adapt the weights of a multi-layer perceptron. As will be demonstrated, the introduction of sliding mode has resulted in a much faster version of standard backpropagation. The results also show that the proposed algorithm presents important features of sliding mode control, namely robustness and high learning speed. In addition, this paper shows how control theory can be applied to train neural networks.
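The core idea of sliding-mode adaptation, stepping against the sign of the error rather than its magnitude, can be illustrated on a single linear neuron. This is a toy sketch with a fixed gain, not the paper's exact update equations; the task, gain, and epoch count are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy task: a single linear neuron learning y = 2x - 1.
X = rng.uniform(-1, 1, size=(200, 1))
y = 2 * X[:, 0] - 1

w, b = 0.0, 0.0
eta = 0.05  # fixed sliding-mode gain (illustrative choice)

for epoch in range(50):
    for xi, yi in zip(X[:, 0], y):
        e = (w * xi + b) - yi       # sliding surface: the output error
        # Sliding-mode-style step: move against the sign of the surface,
        # scaled by each parameter's input (cf. sign-error LMS).
        w -= eta * np.sign(e) * xi
        b -= eta * np.sign(e)

print(f"learned w={w:.2f}, b={b:.2f}")
```

With a fixed gain the parameters chatter in a small band around the true values; the papers above obtain their speed and robustness from choosing the gain via sliding-mode stability conditions rather than a hand-picked constant.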
Neural Networks | 2012
Marcelo Azevedo Costa; Antônio de Pádua Braga; Benjamin Rodrigues de Menezes
The Pareto-optimality concept is used in this paper to represent a constrained set of solutions that trade off the two main objective functions involved in supervised neural network learning: data-set error and network complexity. The neural network is described as a dynamic system having error and complexity as its state variables, and learning is presented as a process of controlling a learning trajectory in the resulting state space. In order to control the trajectories, sliding mode dynamics is imposed on the network. It is shown that arbitrary learning trajectories can be achieved by maintaining the sliding mode gains within their convergence intervals. Formal proofs of the convergence conditions are presented. The concept of trajectory learning presented in this paper goes beyond the selection of a final state in the Pareto set, since that state can be reached through different trajectories, and the states along a trajectory can be assessed individually against an additional objective function.
Brazilian Symposium on Neural Networks | 1998
Gustavo Guimarães Parma; Benjamin Rodrigues de Menezes; Antônio de Pádua Braga
Sliding mode control is applied as a procedure to adapt the weights of a multilayer perceptron. Standard backpropagation weight update equations are used to provide error estimates for the output and hidden layers, as in the classical algorithm. Sliding mode procedures are then introduced to adapt the weights, taking the standard backpropagation errors into consideration. As demonstrated throughout this paper, the introduction of sliding mode has resulted in a much faster version of standard backpropagation, with a speed-up of around two times the standard version.
International Symposium on Neural Networks | 1999
Gustavo Guimarães Parma; Benjamin Rodrigues de Menezes; Antônio de Pádua Braga
This paper presents two different methodologies, both based on sliding mode control, for training multilayer perceptrons. The two methods are compared with the standard backpropagation, momentum, and RPROP algorithms. The results show that the use of this control theory can reduce the time needed to train multilayer perceptrons and also provides an interesting tool for analyzing the limits of the parameters involved in the algorithm.
Journal of the Franklin Institute: Engineering and Applied Mathematics | 2017
Thiago Nakamura; Reinaldo M. Palhares; Walmir M. Caminhas; Benjamin Rodrigues de Menezes; Mário César Mello de Massa Campos; Ubirajara Fumega; Carlos H. de M. Bomfim; André Paim Lemos
After great advances by industry in process automation, an important challenge still remains: automation under abnormal situations. The first step towards meeting this challenge is Fault Detection and Diagnosis (FDD). This work proposes a batch-incremental adaptive methodology for fault detection and diagnosis based on mixture models trained in a distributed computing environment. The models used are from the family of Parsimonious Gaussian Mixture Models (PGMM), whose reduced number of parameters brings important advantages when few data are available, an expected scenario under faulty conditions. On the other hand, the large number of different models raises another challenge: selecting the best model for a given behaviour. To that end, it is proposed to train a large number of models using distributed computing techniques and only then select the best one. This work proposes using the Spark framework, which is well suited to iterative computations. The proposed methodology was validated on a simulated process, the Tennessee Eastman Process (TEP), showing good results for both the detection and the diagnosis of faults. Furthermore, numerical experiments show the viability of training a large number of models for a posteriori selection of the best model.
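The detection-and-diagnosis pattern can be sketched with single Gaussian models standing in for the trained mixture components: detection flags samples that are unlikely under the normal-operation model, and diagnosis picks the fault model with the highest likelihood. The data, threshold, and fault labels below are invented for illustration, and this sketch omits the PGMM parameterization, Spark training, and model selection of the actual paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated sensor readings: normal operation plus two known fault modes.
normal = rng.normal(0.0, 1.0, size=(500, 3))
fault_a = rng.normal(3.0, 1.0, size=(60, 3))
fault_b = rng.normal(-3.0, 1.5, size=(60, 3))

def fit_gaussian(data):
    """Single full-covariance Gaussian (stand-in for one mixture model)."""
    mu = data.mean(axis=0)
    cov = np.cov(data, rowvar=False)
    return mu, cov

def log_lik(x, mu, cov):
    """Log-density of one sample under a multivariate Gaussian."""
    d = x - mu
    inv = np.linalg.inv(cov)
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (d @ inv @ d + logdet + len(mu) * np.log(2 * np.pi))

models = {"normal": fit_gaussian(normal),
          "fault_a": fit_gaussian(fault_a),
          "fault_b": fit_gaussian(fault_b)}

def diagnose(x):
    """Detection: low likelihood under the normal model.
    Diagnosis: the most likely model overall."""
    scores = {name: log_lik(x, *m) for name, m in models.items()}
    detected = scores["normal"] < -10.0   # illustrative threshold
    return detected, max(scores, key=scores.get)

detected, label = diagnose(np.array([3.1, 2.9, 3.0]))
print(detected, label)
```

A sample far from normal operation is both flagged (low likelihood under the normal model) and attributed to the fault mode whose model explains it best.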
International Journal of Neural Systems | 1999
Regis Pinheiro Landim; Benjamin Rodrigues de Menezes; Selênio R. Silva; Walmir M. Caminhas
This work presents a Neo-Fuzzy-Neuron algorithm for the identification of nonlinear dynamic systems from the point of view of a rotor flux observer. The algorithm is trained on-line, has low computational cost, does not require previous off-line training, and its convergence in one step is proved. The gradient descent method is used for weight adjustment. Simulation and experimental results demonstrate the effectiveness of the algorithm as a flux observer for an induction motor drive system.
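A minimal neo-fuzzy neuron, one fuzzy branch per input with complementary triangular membership functions and an online gradient (LMS-style) weight update, might look like the sketch below. This illustrates the general architecture only, not the paper's flux-observer formulation; the grid size, learning rate, and test function are invented.

```python
import numpy as np

class NeoFuzzyNeuron:
    """Neo-fuzzy neuron: each input feeds a bank of complementary
    triangular membership functions; the output is the weighted sum
    of all activations, and weights adapt online by gradient descent."""
    def __init__(self, n_inputs, n_mfs=7, lo=-1.0, hi=1.0, alpha=0.1):
        self.centers = np.linspace(lo, hi, n_mfs)
        self.w = np.zeros((n_inputs, n_mfs))
        self.alpha = alpha

    def _memberships(self, x):
        # Triangles on a uniform grid; adjacent activations sum to 1,
        # so at most two functions fire per input.
        d = np.abs(x[:, None] - self.centers[None, :])
        width = self.centers[1] - self.centers[0]
        return np.clip(1.0 - d / width, 0.0, 1.0)

    def predict(self, x):
        return np.sum(self._memberships(np.asarray(x)) * self.w)

    def update(self, x, target):
        mu = self._memberships(np.asarray(x))
        e = target - np.sum(mu * self.w)
        self.w += self.alpha * e * mu   # online gradient step on squared error
        return e

# Online identification of a static nonlinearity y = sin(pi * x).
rng = np.random.default_rng(3)
nfn = NeoFuzzyNeuron(n_inputs=1)
for _ in range(3000):
    x = rng.uniform(-1, 1, size=1)
    nfn.update(x, np.sin(np.pi * x[0]))

err = abs(nfn.predict(np.array([0.5])) - np.sin(np.pi * 0.5))
print(f"approximation error at x=0.5: {err:.3f}")
```

Because only two membership functions are active per input, each update touches a handful of weights, which is what gives the architecture its low computational cost in online use.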