Carlos Javier Mantas
University of Granada
Publications
Featured research published by Carlos Javier Mantas.
IEEE Transactions on Neural Networks | 2002
Juan Luis Castro; Carlos Javier Mantas; José Manuel Benítez
This paper presents an extension of the method of Benítez et al. (1997) for extracting, from an artificial neural network (ANN), fuzzy rules that exactly express its behavior. The extraction process provides an interpretation of the ANN in terms of fuzzy rules. The fuzzy rules presented are in accordance with the domain of the input variables, and they use a new operator in the antecedent whose properties and intuitive meaning are studied. Next, the role of the biases in the fuzzy rule-based systems is analyzed. Several examples are presented and the resulting fuzzy rule-based systems are discussed. Finally, the interpretation of ANNs with two or more hidden layers is studied.
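As a minimal numerical check of the idea behind this line of work, the sketch below verifies the interactive-or (i-or) identity from Benítez et al. (1997): a sigmoid neuron equals the i-or aggregation of per-input sigmoid degrees. The weights, bias, and input are arbitrary random values chosen only for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def i_or(values):
    """Interactive-or: aggregates degrees a_i as prod(a) / (prod(a) + prod(1 - a))."""
    a = np.asarray(values, dtype=float)
    num = np.prod(a)
    return num / (num + np.prod(1.0 - a))

rng = np.random.default_rng(0)
w, b = rng.normal(size=4), rng.normal()   # hypothetical neuron weights and bias
x = rng.normal(size=4)                    # hypothetical input datum

neuron_output = sigmoid(w @ x + b)
rule_output = i_or([sigmoid(wi * xi) for wi, xi in zip(w, x)] + [sigmoid(b)])

# The sigmoid neuron and the i-or aggregation of per-input degrees coincide.
assert np.isclose(neuron_output, rule_output)
print(neuron_output, rule_output)
```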
Expert Systems With Applications | 2014
Joaquín Abellán; Carlos Javier Mantas
Previous studies have examined ensembles of classifiers for bankruptcy prediction and credit scoring. In those studies, different ensemble schemes were applied to complex classifiers, and the best results were obtained with the Random Subspace method. The Bagging scheme was among the ensemble methods compared, but it was not used correctly: this scheme should be applied to weak, unstable classifiers in order to produce diversity in the combination. To improve the comparison, the Bagging scheme is applied to several decision tree models for bankruptcy prediction and credit scoring, since decision trees encourage diversity in the combination of classifiers. Finally, an experimental study shows that Bagging over decision trees gives the best results for bankruptcy prediction and credit scoring.
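A rough sketch of the ensemble setup described above, not the paper's experimental protocol: scikit-learn's BaggingClassifier combined with decision trees, on a synthetic data set that merely stands in for the credit-scoring and bankruptcy data.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic, imbalanced stand-in for a credit-scoring / bankruptcy data set.
X, y = make_classification(n_samples=1000, n_features=20, weights=[0.8, 0.2],
                           random_state=0)

single_tree = DecisionTreeClassifier(random_state=0)
bagged_trees = BaggingClassifier(DecisionTreeClassifier(), n_estimators=100,
                                 random_state=0)

print("single tree :", cross_val_score(single_tree, X, y, cv=10).mean())
print("bagged trees:", cross_val_score(bagged_trees, X, y, cv=10).mean())
```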
Fuzzy Sets and Systems | 2007
Juan Luis Castro; L. D. Flores-Hidalgo; Carlos Javier Mantas; José Manuel Puche
The relationship between support vector machines (SVMs) and Takagi-Sugeno-Kang (TSK) fuzzy systems is shown. An exact representation of SVMs as TSK fuzzy systems is given for every kernel function used. Restricted methods to extract rules from SVMs have been published previously; their limitations are overcome by the extraction method presented here. The behavior of SVMs is explained by means of fuzzy logic, and the interpretability of the system is improved by introducing the λ-fuzzy rule-based system (λ-FRBS). The λ-FRBS exactly reproduces the SVM decision boundary, and its rules and membership functions are very simple, aggregating the antecedents with uninorms as compensation operators. The λ-FRBS has only two rules, and the number of fuzzy propositions in each rule depends only on the cardinality of the set of support vectors. For that reason, the λ-FRBS overcomes the curse of dimensionality, and problems with high-dimensional data sets are easily handled.
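The sketch below does not reproduce the λ-FRBS construction; it only illustrates the starting point, namely that an SVM decision function is a sum of one kernel term per support vector (the quantity that fixes the number of fuzzy propositions per rule). The RBF kernel, gamma value, and synthetic data are arbitrary choices.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=5, random_state=0)
gamma = 0.5
clf = SVC(kernel="rbf", gamma=gamma, C=1.0).fit(X, y)

x = X[:3]  # a few query points
# One RBF term per support vector: dual_coef_ already holds alpha_i * y_i.
kernel = np.exp(-gamma * ((x[:, None, :] - clf.support_vectors_[None, :, :]) ** 2).sum(-1))
manual = kernel @ clf.dual_coef_[0] + clf.intercept_[0]

assert np.allclose(manual, clf.decision_function(x))
print(len(clf.support_vectors_), "support vectors, i.e. kernel terms per decision")
```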
International Journal of Approximate Reasoning | 2006
Carlos Javier Mantas; José Manuel Puche; José M. Mantas
A method to extract a fuzzy rule-based system from a trained artificial neural network for classification is presented. The fuzzy system obtained is equivalent to the corresponding neural network. The antecedents of the fuzzy rules use the similarity between the input datum and the weight vectors, which makes the rules highly understandable. Thus, the fuzzy system together with a simple analysis of the weight vectors is enough to discern the hidden knowledge learnt by the neural network. Several classification problems are presented to illustrate this method of knowledge discovery with artificial neural networks.
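Purely as an illustration of antecedents that fire on similarity between the input and a weight vector, the toy sketch below uses a hypothetical Gaussian similarity and hypothetical weight vectors and consequents; the paper's similarity measure, rule form, and equivalence proof are not reproduced here.

```python
import numpy as np

def gaussian_similarity(x, w, width=1.0):
    """Hypothetical similarity between an input datum x and a weight vector w."""
    return np.exp(-np.sum((x - w) ** 2) / (2.0 * width ** 2))

# Hypothetical weight vectors of three hidden units and their associated classes.
weight_vectors = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]])
rule_classes = np.array([0, 1, 0])

def classify(x):
    # Rule j: IF x is similar to w_j THEN class = rule_classes[j];
    # the winning rule is the one with the highest firing strength.
    strengths = np.array([gaussian_similarity(x, w) for w in weight_vectors])
    return rule_classes[np.argmax(strengths)], strengths

label, strengths = classify(np.array([0.9, 1.1]))
print(label, strengths.round(3))
```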
Expert Systems With Applications | 2014
Carlos Javier Mantas; Joaquín Abellán
In the area of classification, C4.5 is a well-known algorithm widely used to build decision trees. In this algorithm, a pruning process is carried out to address the problem of over-fitting. A modification of C4.5, called Credal-C4.5, is presented in this paper. The new procedure relies on a mathematical theory based on imprecise probabilities and uncertainty measures: Credal-C4.5 estimates the probabilities of the features and the class variable using imprecise probabilities, and it uses a new split criterion, called the Imprecise Information Gain Ratio, which applies uncertainty measures on convex sets of probability distributions (credal sets). In this manner, Credal-C4.5 builds trees for solving classification problems under the assumption that the training set is not fully reliable. We carried out several experimental studies comparing this new procedure with others and obtained the following principal conclusion: in domains with class noise, Credal-C4.5 produces smaller trees and better performance than classic C4.5.
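A sketch of the kind of split criterion involved, assuming the imprecise Dirichlet model (IDM) with parameter s and the usual maximum-entropy measure over the resulting credal set; the full Imprecise Information Gain Ratio of Credal-C4.5 also divides by the split information, which this sketch omits, so treat it as an approximation of the idea rather than the paper's exact criterion.

```python
import math
from collections import Counter

def max_entropy_idm(counts, s=1.0):
    """Upper (maximum) entropy of the IDM credal set built from class counts.

    The extra mass s is spread over the smallest counts so that the resulting
    values are as uniform as possible, then the distribution is normalised.
    """
    counts = [float(c) for c in counts]
    remaining = s
    while remaining > 1e-12:
        m = min(counts)
        low = [i for i, c in enumerate(counts) if abs(c - m) < 1e-12]
        higher = [c for c in counts if c > m + 1e-12]
        room = (min(higher) - m) * len(low) if higher else float("inf")
        add = min(remaining, room)
        for i in low:
            counts[i] += add / len(low)
        remaining -= add
    total = sum(counts)
    return -sum(c / total * math.log2(c / total) for c in counts if c > 0)

def imprecise_info_gain(class_labels, feature_values, s=1.0):
    """H*(class) minus the weighted H*(class | feature value)."""
    n = len(class_labels)
    gain = max_entropy_idm(Counter(class_labels).values(), s)
    for v in set(feature_values):
        branch = [c for c, f in zip(class_labels, feature_values) if f == v]
        gain -= len(branch) / n * max_entropy_idm(Counter(branch).values(), s)
    return gain

print(imprecise_info_gain(["a", "a", "b", "b", "b"], ["x", "x", "x", "y", "y"]))
```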
IEEE Transactions on Fuzzy Systems | 2008
Carlos Javier Mantas; José Manuel Puche
In this paper, the functional equivalence between the action of a multilayered feed-forward artificial neural network (NN) and the performance of a system based on zero-order TSK fuzzy rules is proven. The resulting zero-order TSK fuzzy systems have the following two features: (A) the product t-norm is used to aggregate the IF-part fuzzy propositions of the obtained rules, and (B) their inputs are the same as those of the initial NN. This provides an understanding of the knowledge embedded in the NN. Besides, it allows the architecture of a network to be simplified through the reduction of fuzzy propositions in its equivalent zero-order TSK system. These advantages come from applying properties of the fuzzy systems field to the neural network field. They are illustrated with several examples.
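As a toy illustration of the simplification idea only: reading each hidden unit of a one-hidden-layer network as one zero-order rule (firing strength times consequent), rules with negligible consequents can be dropped and the effect on the output checked. The magnitude-based pruning criterion and the random network below are illustrative stand-ins, not the paper's reduction procedure.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
W = rng.normal(size=(10, 3))          # hidden weights: 10 rules over 3 inputs
b = rng.normal(size=10)               # hidden biases
v = rng.normal(size=10)               # rule consequents (output weights)
v[[2, 5, 7]] *= 1e-3                  # a few rules with negligible consequents

X = rng.normal(size=(200, 3))

def net(X, rules):
    # Zero-order TSK reading: output = sum over kept rules of
    # consequent v_j times firing strength sigmoid(w_j . x + b_j).
    return sigmoid(X @ W[rules].T + b[rules]) @ v[rules]

full = net(X, np.arange(10))
kept = np.where(np.abs(v) > 1e-2)[0]  # drop rules whose consequent is ~0
pruned = net(X, kept)
print("max output change after pruning:", np.max(np.abs(full - pruned)))
```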
Expert Systems With Applications | 2014
Carlos Javier Mantas; Joaquín Abellán
An analysis of a procedure to build decision trees based on imprecise probabilities and uncertainty measures, called CDT, is presented. We compare this procedure with the classic ones based on Shannon's entropy for precise probabilities. We found that the handling of imprecision is a key part of the improvements in performance, as has been shown for class-noise problems in classification. We present a new procedure for building decision trees that extends the imprecision in the CDT procedure to the processing of all the input variables. We show, via an experimental study on data sets with general noise (noise in all the input variables), that this new procedure builds smaller trees and gives better results than the original CDT and the classic decision trees.
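A small sketch of how general noise in all input variables might be injected for such an experiment; the paper's exact noise scheme may differ. Here, with probability p, each feature value is replaced by a value drawn at random from the same column.

```python
import numpy as np

def add_attribute_noise(X, p=0.1, rng=None):
    """Replace each feature value, with probability p, by a random value
    drawn from the same column (an illustrative noise model)."""
    rng = rng or np.random.default_rng(0)
    X_noisy = X.copy()
    n, d = X.shape
    for j in range(d):
        mask = rng.random(n) < p
        X_noisy[mask, j] = rng.choice(X[:, j], size=mask.sum())
    return X_noisy

X = np.random.default_rng(0).normal(size=(100, 5))
X_noisy = add_attribute_noise(X, p=0.1)
print("fraction of cells changed:", np.mean(X_noisy != X))
```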
Joint IFSA World Congress and NAFIPS International Conference | 2001
José Manuel Benítez; Juan Luis Castro; Carlos Javier Mantas; Fernando Rojas
A method for feature selection based on a combination of artificial neural networks and fuzzy techniques is presented. The procedure produces a ranking of features according to their relevance to the network. This ranking is used to perform a backward selection by successively removing input nodes from a network trained with the complete set of features as inputs. Irrelevant input units and their connections are pruned, and the remaining biases are adjusted so that the overall change in the behavior learnt by the network is kept under control. When the problem is too hard to be modeled by a single network, several networks are used to generate different rankings, which are aggregated with a fuzzy logic operator. The proposed method is applied to a number of real-world classification problems. Empirical results show that the feature selection enables the network to improve its generalization ability. The procedure also offers several advantages over other feature selection methods, especially improved efficiency.
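A rough sketch of the ranking-plus-backward-removal loop on synthetic data. The relevance score used here (total magnitude of each input's outgoing weights) is a simple stand-in for the paper's fuzzy relevance measure, and no bias adjustment is performed.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=300, n_features=10, n_informative=4,
                           random_state=0)

net = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000,
                    random_state=0).fit(X, y)
# Illustrative relevance score: total magnitude of each input's outgoing weights.
relevance = np.abs(net.coefs_[0]).sum(axis=1)
ranking = np.argsort(relevance)                 # least relevant first

kept = list(range(X.shape[1]))
for feature in ranking[:-1]:
    kept.remove(feature)                        # backward removal
    model = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
    score = cross_val_score(model, X[:, kept], y, cv=5).mean()
    print(f"{len(kept)} features kept, accuracy {score:.3f}")
```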
Soft Computing | 2007
Carlos Javier Mantas
Multilayered feedforward artificial neural networks (ANNs) are black boxes. Several methods have been published to extract a fuzzy system from a network such that the input–output mapping of the fuzzy system is equivalent to that of the ANN. These methods are generalized by means of a new fuzzy aggregation operator, which is defined using the activation function of the network. This makes it possible to choose among several standard aggregation operators. A method to extract fuzzy rules from ANNs is presented using this new operator, and the insertion of fuzzy knowledge with linguistic hedges into an ANN is also defined by means of it.
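One natural way to build an aggregation operator from an activation function g, shown below as an illustrative construction that may differ in detail from the paper's definition, is agg_g(a, b) = g(g^{-1}(a) + g^{-1}(b)). With the logistic sigmoid this recovers the interactive-or; with tanh it yields an aggregation on (-1, 1).

```python
import numpy as np

def agg_from_activation(g, g_inv):
    """Build a two-argument aggregation operator from an activation function g."""
    return lambda a, b: g(g_inv(a) + g_inv(b))

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
logit = lambda a: np.log(a / (1.0 - a))

i_or = agg_from_activation(sigmoid, logit)      # logistic activation
tanh_agg = agg_from_activation(np.tanh, np.arctanh)  # tanh activation

a, b = 0.8, 0.3
# For the sigmoid, the construction coincides with the interactive-or.
assert np.isclose(i_or(a, b), a * b / (a * b + (1 - a) * (1 - b)))
print(i_or(a, b), tanh_agg(0.6, -0.2))
```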
IEEE Transactions on Neural Networks | 2000
Juan Luis Castro; Miguel Delgado; Carlos Javier Mantas
This article presents a machine learning method for solving classification and approximation problems. The method uses the divide-and-conquer design technique (taken from tree-based machine learning models) with the aim of achieving design ease and good results on the training examples, and it allows semi-global actions on its computational elements (a feature taken from neural networks) with the aim of attaining good generalization and good behavior in the presence of noise in the training examples. Finally, some results obtained by solving several problems with a particular implementation of SEPARATE, the proposed method, are presented together with their analysis.