Cristiano Cabrita
University of the Algarve
Publications
Featured research published by Cristiano Cabrita.
International Journal of Systems Science | 2002
A. E. Ruano; Cristiano Cabrita; José Valente de Oliveira; László T. Kóczy
Complete supervised training algorithms for B-spline neural networks and fuzzy rule-based systems are discussed. By establishing the relationships between B-spline neural networks and Mamdani (satisfying certain assumptions) and Takagi-Sugeno-Kang fuzzy models, training algorithms developed initially for neural networks can be adapted to fuzzy systems. The standard training criterion is reformulated by separating its linear and nonlinear parameters. By employing this reformulated criterion with the Levenberg-Marquardt algorithm, a new training method offering a fast rate of convergence is obtained. It is also shown that the standard error back-propagation algorithm, the most common training method for this class of systems, exhibits very poor and unreliable performance.
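As an illustration of the reformulated criterion, the sketch below separates the two parameter sets: at every iteration the linear weights are obtained in closed form by least squares, and Levenberg-Marquardt only searches over the remaining nonlinear parameters. The names (`basis`, `X`, `t`) and the finite-difference Jacobian are our own illustrative choices, not the paper's implementation.

```python
# Hedged sketch of the separable (reformulated) training criterion.
import numpy as np

def separable_error(w_nl, basis, X, t):
    """Residual with the linear weights replaced by their optimal value."""
    Phi = basis(X, w_nl)                              # N x M basis outputs
    theta, *_ = np.linalg.lstsq(Phi, t, rcond=None)   # optimal linear part
    return t - Phi @ theta                            # reduced-criterion residual

def lm_step(w_nl, basis, X, t, lam):
    """One Levenberg-Marquardt step on the nonlinear parameters only."""
    e = separable_error(w_nl, basis, X, t)
    # Finite-difference Jacobian of the residual w.r.t. w_nl (illustration only).
    J = np.empty((len(e), len(w_nl)))
    for i in range(len(w_nl)):
        dw = np.zeros_like(w_nl); dw[i] = 1e-6
        J[:, i] = (separable_error(w_nl + dw, basis, X, t) - e) / 1e-6
    H = J.T @ J + lam * np.eye(len(w_nl))   # damped Gauss-Newton Hessian
    return w_nl - np.linalg.solve(H, J.T @ e)
```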
IEEE International Conference on Fuzzy Systems | 2004
János Botzheim; Cristiano Cabrita; László T. Kóczy; A. E. Ruano
In the authors' previous papers, fuzzy model identification methods were discussed. A bacterial algorithm for extracting a fuzzy rule base from a training set was presented, and the Levenberg-Marquardt algorithm was proposed for determining membership functions in fuzzy systems. In this paper, the Levenberg-Marquardt technique is improved to optimize the membership functions in fuzzy rules without requiring a Ruspini partition. The class of membership functions investigated is the trapezoidal one, as it is general enough and widely used; the method can easily be extended to arbitrary piecewise-linear functions as well.
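For concreteness, here is a minimal sketch of the trapezoidal membership function whose four breakpoints are the parameters such an update would tune; the names and the clipping guard are our own illustrative assumptions, not taken from the paper.

```python
# Trapezoidal membership function with breakpoints a <= b <= c <= d.
import numpy as np

def trapezoid(x, a, b, c, d):
    """Piecewise-linear membership: 0 outside [a, d], 1 on [b, c]."""
    rising  = (x - a) / max(b - a, 1e-12)   # left ramp (guard against b == a)
    falling = (d - x) / max(d - c, 1e-12)   # right ramp (guard against d == c)
    return np.clip(np.minimum(rising, falling), 0.0, 1.0)
```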
Journal of Advanced Computational Intelligence and Intelligent Informatics | 2007
János Botzheim; Cristiano Cabrita; László T. Kóczy; A. E. Ruano
The design phase of B-spline neural networks is a highly computationally complex task. Existing heuristics have been found to be highly dependent on the initial conditions employed. Meanwhile, interest in biologically inspired learning algorithms for control techniques such as artificial neural networks and fuzzy systems continues to grow. In this paper, the Bacterial Programming approach is presented, which is based on the replication of the microbial evolution phenomenon. This technique produces an efficient topology search and additionally yields more consistent solutions.
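The core operator behind this family of methods, bacterial mutation, can be sketched as follows: each individual is cloned, one randomly chosen part is re-drawn in every clone, and the best variant survives. This is an illustrative rendering under our own naming, not the authors' code.

```python
# Sketch of the bacterial mutation operator (illustrative names).
import copy, random

def bacterial_mutation(individual, fitness, n_clones=5):
    # visit every part of the chromosome once, in random order
    for gene in random.sample(range(len(individual)), len(individual)):
        clones = [copy.deepcopy(individual) for _ in range(n_clones)]
        for c in clones:
            c[gene] = random.random()        # re-draw the chosen part
        # the original competes with its mutated clones; best one survives
        individual = max(clones + [individual], key=fitness)
    return individual
```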
World Automation Congress | 2006
Cristiano Cabrita; János Botzheim; Tamas Gedeon; A. E. Ruano; László T. Kóczy; Carlos M. Fonseca
In our previous work, fuzzy model identification methods were discussed. The bacterial evolutionary algorithm for extracting a fuzzy rule base from a training set was presented, and the Levenberg-Marquardt method was proposed for determining membership functions in fuzzy systems. The combination of evolutionary and gradient-based learning techniques, the bacterial memetic algorithm, was also introduced. In this paper, an improvement of the bacterial memetic algorithm for fuzzy rule extraction is shown. The new method optimizes not only the rules themselves but also the size of the rule base.
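A rough outline of one bacterial memetic generation is sketched below, combining a bacterial mutation phase with a gradient-based local phase and a simple gene-transfer step; `bacterial_mutation` and `lm_refine` stand for the helpers sketched earlier, and the rule-base resizing introduced by this paper is elided.

```python
# Outline of one bacterial memetic generation (purely illustrative).
import random

def bacterial_memetic_generation(population, fitness, lm_refine):
    # global, evolutionary phase: bacterial mutation of every individual
    population = [bacterial_mutation(ind, fitness) for ind in population]
    # local phase: a few Levenberg-Marquardt steps polish each individual
    population = [lm_refine(ind) for ind in population]
    # gene transfer: the better half overwrites parts of the worse half
    population.sort(key=fitness, reverse=True)
    half = len(population) // 2
    for good, bad in zip(population[:half], population[half:]):
        i = random.randrange(len(good))
        bad[i] = good[i]
    return population
```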
International Symposium on Neural Networks | 2004
Cristiano Cabrita; János Botzheim; A. E. Ruano; László T. Kóczy
The design phase of B-spline neural networks is a computationally demanding task. Heuristics have been developed for this purpose, but have been shown to depend on the initial conditions employed. In this work a new technique, bacterial programming, is proposed, whose principles are based on the replication of the microbial evolution phenomenon. The performance of this approach is illustrated and compared with existing alternatives.
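One plausible encoding for such a topology search is sketched below: each individual lists, per input, a spline order and a set of interior knots, and the search operators act on these parts. The representation and ranges are hypothetical illustrations, not the paper's actual encoding.

```python
# Hypothetical B-spline topology encoding for an evolutionary search.
import random

def random_topology(n_inputs, max_knots=5):
    """One individual: per input, a spline order and sorted interior knots."""
    return [{"order": random.randint(2, 4),
             "knots": sorted(random.uniform(0.0, 1.0)
                             for _ in range(random.randint(0, max_knots)))}
            for _ in range(n_inputs)]
```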
IFAC Proceedings Volumes | 2002
A. E. Ruano; Pedro M. Ferreira; Cristiano Cabrita; S. Matos
Neural and neuro-fuzzy models are powerful nonlinear modelling tools. Different structures, with different properties, are widely used to capture static or dynamic nonlinear mappings. Static (non-recurrent) models share a common structure: a nonlinear stage, followed by a linear mapping. In this paper, the separability of linear and nonlinear parameters is exploited to derive completely supervised training algorithms. Examples of this unified view are presented, involving multilayer perceptrons, radial basis functions, wavelet networks, B-splines, and Mamdani and TSK fuzzy systems.
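The shared structure can be made concrete with a small sketch: the nonlinear stage produces a basis matrix, after which the linear stage has an exact least-squares solution. The Gaussian RBF stage below is one instance of this pattern; all names are illustrative.

```python
# Nonlinear stage + linear stage, the structure shared by these models.
import numpy as np

def rbf_stage(X, centers, width):
    """Nonlinear stage: Gaussian basis outputs for every sample (N x M)."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def optimal_linear_weights(Phi, t):
    """Linear stage solved exactly, given the nonlinear parameters."""
    theta, *_ = np.linalg.lstsq(Phi, t, rcond=None)
    return theta
```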
IEEE International Symposium on Intelligent Signal Processing | 2011
A. E. Ruano; Cristiano Cabrita; Pedro M. Ferreira
When used for function approximation purposes, neural networks belong to a class of models whose parameters can be separated into linear and nonlinear, according to their influence on the model output. In this work we extend this concept to the case where the training problem is formulated as the minimization of the integral of the squared error over the input domain. With this approach, the gradient-based nonlinear optimization algorithms require the computation of two kinds of terms: terms that depend only on the model and the input domain, and terms which are the projection of the target function onto the basis functions and onto their derivatives with respect to the nonlinear parameters. The latter terms can be computed numerically from the data provided. The use of this functional approach brings at least two advantages over the standard training formulation: first, computational complexity savings, as some terms are independent of the size of the data and matrix inverses or pseudo-inverses are avoided; second, as the performance surface under this approach is closer to the one obtained with the true (typically unknown) function, gradient-based training algorithms have a better chance of finding models that fit the underlying function well.
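For a separable model m(x) = θᵀφ(x, w), the decomposition described above can be written as follows (our notation, assuming a scalar output over domain D):

```latex
E(w,\theta) = \int_{D} \bigl(f(x) - \theta^{\top}\varphi(x,w)\bigr)^{2}\,dx
            = \int_{D} f^{2}\,dx
            \;-\; 2\,\theta^{\top}\!\underbrace{\int_{D} f(x)\,\varphi(x,w)\,dx}_{\text{projection of the target}}
            \;+\; \theta^{\top}\underbrace{\Bigl(\int_{D} \varphi\,\varphi^{\top}\,dx\Bigr)}_{\text{model-only term}}\theta
```

The Gram-type term depends only on the model and the domain, while the projection term (and its derivatives with respect to w) is the part that must be estimated numerically from data.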
International Symposium on Neural Networks | 2012
Cristiano Cabrita; A. E. Ruano; Pedro M. Ferreira; László T. Kóczy
When used for function approximation purposes, neural networks belong to a class of models whose parameters can be separated into linear and nonlinear, according to their influence on the model output. This concept of parameter separability can also be applied when the training problem is formulated as the minimization of the integral of the (functional) squared error over the input domain. With this approach, the computation of the gradient involves terms that depend only on the model and the input domain, and terms which are the projection of the target function, over the input domain, onto the basis functions and onto their derivatives with respect to the nonlinear parameters. This paper extends the application of this formulation to B-splines, describing how the Levenberg-Marquardt method can be applied with this methodology. Simulation examples show that the functional approach yields important savings in computational complexity and a better approximation over the whole input domain.
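The source of the computational saving can be sketched briefly: the basis Gram matrix is data-independent, so it can be tabulated once and reused at every Levenberg-Marquardt iteration. The sketch uses a crude rectangle rule; in practice, products of B-spline basis functions are piecewise polynomials and admit exact integration. Names are illustrative.

```python
# Data-independent Gram matrix of the basis over the input domain.
import numpy as np

def gram_matrix(basis, domain, n_quad=256):
    """Approximate \\int_D phi(x) phi(x)^T dx on a quadrature grid."""
    a, b = domain
    xq = np.linspace(a, b, n_quad)
    Phi = basis(xq)                  # n_quad x M basis evaluations
    wq = (b - a) / n_quad            # rectangle-rule quadrature weight
    return wq * Phi.T @ Phi          # M x M, independent of the data size
```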
IEEE International Symposium on Intelligent Signal Processing | 2011
Cristiano Cabrita; A. E. Ruano; Pedro M. Ferreira
This paper investigates the application of a novel approach to the parameter estimation of a Radial Basis Function (RBF) network model. The new concept (denoted functional training) minimizes the integral of the analytical error between the process output and the model output [1]. In this paper, the analytical expressions needed to use this approach are introduced, both for the back-propagation and the Levenberg-Marquardt algorithms. The results show that the proposed methodology outperforms the standard methods in terms of function approximation, serving as an excellent tool for RBF network training.
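One reason Gaussian RBF networks suit this functional formulation is that products of Gaussian basis functions integrate in closed form, so the model-only terms need no sampling. A one-dimensional sketch over the whole real line, in our own notation (on a bounded domain, error-function corrections would appear):

```python
# Closed-form integral of a product of two 1-D Gaussian basis functions.
import numpy as np

def gauss_product_integral(c1, s1, c2, s2):
    """\\int exp(-(x-c1)^2/(2 s1^2)) * exp(-(x-c2)^2/(2 s2^2)) dx over R."""
    s2sum = s1**2 + s2**2
    return np.sqrt(2.0 * np.pi * (s1**2 * s2**2) / s2sum) \
        * np.exp(-(c1 - c2)**2 / (2.0 * s2sum))
```

As a quick sanity check, gauss_product_integral(0, 1, 0, 1) returns sqrt(pi), the known value of the integral of exp(-x^2).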
IEEE International Workshop on Intelligent Signal Processing | 2005
Cristiano Cabrita; János Botzheim; A. E. Ruano; László T. Kóczy
Current and past research has brought new perspectives on the optimization of neural networks. For a fixed structure, second-order methods are seen as the most promising. In previous work we showed how easily second-order methods can be applied to a neural network; namely, we proved that the Levenberg-Marquardt method not only has better convergence properties but is also guaranteed to converge to a local minimum. However, as with any gradient-based method, the results obtained depend on the starting point. In this work a reformulated evolutionary algorithm, bacterial programming for Levenberg-Marquardt, is proposed as a heuristic for determining the most suitable starting points, therefore achieving, in most cases, the global optimum.
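The division of labour described here can be sketched as follows: a short bacterial search supplies the starting point, and Levenberg-Marquardt finishes the descent. `bacterial_mutation` is the operator sketched earlier and `lm_step` stands for one second-order update; both names, and the fixed step count, are illustrative assumptions.

```python
# Hybrid sketch: evolutionary initialization + second-order refinement.
def bacterial_lm_train(population, fitness, lm_step, n_lm_steps=50):
    # global exploration: pick a promising basin of attraction
    population = [bacterial_mutation(ind, fitness) for ind in population]
    w = max(population, key=fitness)     # best individual seeds LM
    for _ in range(n_lm_steps):          # local gradient-based refinement
        w = lm_step(w)
    return w
```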