Publication


Featured research published by Jesús Maudes.


Sensors | 2015

An SVM-Based Solution for Fault Detection in Wind Turbines

Pedro Santos; Luisa F. Villa; Aníbal Reñones; Andres Bustillo; Jesús Maudes

Research into fault diagnosis in machines with a wide range of variable loads and speeds, such as wind turbines, is of great industrial interest. Analysis of the power signals emitted by wind turbines is insufficient for diagnosing faults in their mechanical transmission chain; a successful diagnosis requires accelerometers to evaluate vibrations. This work presents a multi-sensory system for fault diagnosis in wind turbines, combined with a data-mining solution for classifying the operational state of the turbine. The selected sensors are accelerometers, whose vibration signals are processed using angular resampling techniques, together with electrical, torque and speed measurements. Support vector machines (SVMs) are selected for the classification task, including two traditional and two promising new kernels. The multi-sensory system has been validated on a test-bed that simulates the real conditions of wind turbines with two fault typologies: misalignment and imbalance. Comparison of SVM performance with artificial neural networks (ANNs) shows that the linear-kernel SVM outperforms the other kernels and the ANNs in accuracy as well as training and tuning times. The suitability and superior performance of the linear SVM is also analyzed experimentally, leading to the conclusion that this data acquisition technique generates linearly separable datasets.
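As a rough illustration of the classification stage, the sketch below trains a linear-kernel SVM on synthetic stand-in features; the real system uses vibration orders plus electrical, torque and speed measurements, so the three classes, feature counts and scikit-learn setup here are illustrative assumptions, not the paper's experimental configuration:

```python
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for the multi-sensor features; three operational
# states: 0 = healthy, 1 = misalignment, 2 = imbalance.
rng = np.random.default_rng(0)
n_per_class, n_features = 200, 12
X = np.vstack([rng.normal(loc=shift, scale=1.0, size=(n_per_class, n_features))
               for shift in (0.0, 1.0, 2.0)])
y = np.repeat([0, 1, 2], n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=0)

# Linear kernel, as the paper found it to outperform other kernels and ANNs.
clf = SVC(kernel="linear", C=1.0).fit(X_tr, y_tr)
print("test accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```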


Pattern Recognition Letters | 2008

Boosting recombined weak classifiers

Juan José Rodríguez; Jesús Maudes

Boosting is a family of methods for constructing classifier ensembles. Their distinguishing feature is that they obtain a strong classifier from a combination of weak classifiers, so boosting can be used with very simple base classifiers. Among the simplest are decision stumps: decision trees with only one decision node. This work proposes a variant of the best-known boosting method, AdaBoost. It is based on considering as the base classifiers for boosting not only the last weak classifier, but a classifier formed by the last r selected weak classifiers (r is a parameter of the method). If the weak classifiers are decision stumps, the combination of r weak classifiers is a decision tree. The ensembles obtained with the variant are formed by the same number of decision stumps as the original AdaBoost, so the original version and the variant produce classifiers with very similar sizes and computational complexities (for both training and classification). The experimental study shows that the variant is clearly beneficial.


Information Fusion | 2012

Random feature weights for decision tree ensemble construction

Jesús Maudes; Juan José Rodríguez; César Ignacio García-Osorio; Nicolás García-Pedrajas

This paper proposes random feature weights (RFW), a method for constructing ensembles of decision trees. Like Random Forest, it introduces randomness into the construction of the decision trees; but whereas Random Forest considers only a random subset of attributes at each node, RFW considers all of them. Its source of randomness is a weight associated with each attribute. All the nodes in a tree use the same set of random weights, but this set differs from tree to tree, so the importance given to the attributes varies across trees and differentiates their construction. The method is compared with Bagging, Random Forest, Random Subspaces, AdaBoost and MultiBoost, obtaining favourable results for the proposed method, especially on noisy datasets. RFW can also be combined with these methods, and the combination generally produces better results than the combined methods alone. Kappa-error diagrams and kappa-error movement diagrams are used to analyse the relationship between the accuracy of the base classifiers and their diversity.
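Since scikit-learn trees do not expose the split merit function, the toy below reimplements the idea with depth-one trees (stumps): each tree draws one random weight per attribute and multiplies the Gini gain of every candidate split by that weight. The weight exponent, quantile thresholds and toy data are illustrative assumptions, and the paper uses full trees rather than stumps:

```python
import numpy as np

def gini(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - (p ** 2).sum()

def best_stump(X, y, feat_weights):
    """Pick the (feature, threshold) whose Gini gain, scaled by the
    tree's random per-feature weight, is highest -- the core RFW idea."""
    best_f, best_thr, best_score = 0, 0.0, -np.inf
    for f in range(X.shape[1]):
        for thr in np.quantile(X[:, f], [0.25, 0.5, 0.75]):
            left = X[:, f] <= thr
            if left.all() or not left.any():
                continue
            gain = gini(y) - (left.mean() * gini(y[left])
                              + (~left).mean() * gini(y[~left]))
            score = feat_weights[f] * gain      # RFW: weight the merit
            if score > best_score:
                best_f, best_thr, best_score = f, thr, score
    return best_f, best_thr

def rfw_ensemble(X, y, n_trees=25, p=2.0, seed=0):
    rng = np.random.default_rng(seed)
    stumps = []
    for _ in range(n_trees):
        w = rng.random(X.shape[1]) ** p         # one weight vector per tree
        f, thr = best_stump(X, y, w)
        left = X[:, f] <= thr
        stumps.append((f, thr,
                       np.bincount(y[left]).argmax(),
                       np.bincount(y[~left]).argmax()))
    return stumps

def predict(stumps, X):
    votes = np.array([np.where(X[:, f] <= thr, lc, rc)
                      for f, thr, lc, rc in stumps])
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))                   # only the first two features matter
y = (X[:, 0] + X[:, 1] > 0).astype(int)
model = rfw_ensemble(X, y)
print("train accuracy:", (predict(model, X) == y).mean())
```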


Applications of Supervised and Unsupervised Ensemble Methods | 2009

Disturbing Neighbors Diversity for Decision Forests

Jesús Maudes; Juan José Rodríguez; César Ignacio García-Osorio

Ensemble methods take their output from a set of base predictors. Ensemble accuracy depends on two factors: the accuracy of the base classifiers and their diversity (how different their outputs are from each other). This paper presents an approach for increasing the diversity of the base classifiers. The method builds new features to be added to the training dataset of each base classifier. These new features are computed using a Nearest Neighbor (NN) classifier built from a few randomly selected instances. The NN classifier returns (i) an indicator identifying the nearest neighbor and (ii) the class that neighbor predicts for the instance. We tested this idea using decision trees as base classifiers. An experimental validation on 62 UCI datasets is provided for traditional ensemble methods, showing that ensemble accuracy and base-classifier diversity are usually improved.
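The DN features can be reproduced in a few lines: pick m random training instances as anchors, build a 1-NN classifier on them, and append a one-hot nearest-anchor indicator plus the 1-NN prediction to each instance before training each tree. The dataset, m = 5 and the ensemble size below are illustrative assumptions, not the paper's experimental setup:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

def disturb(X, anchors_X, anchors_y):
    """DN features: one-hot of the nearest anchor plus its class label."""
    nn = KNeighborsClassifier(n_neighbors=1).fit(anchors_X, anchors_y)
    idx = nn.kneighbors(X, return_distance=False).ravel()
    one_hot = np.eye(len(anchors_X))[idx]
    pred = nn.predict(X).reshape(-1, 1)
    return np.hstack([X, one_hot, pred])

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
trees, n_trees, m = [], 10, 5            # m randomly selected anchor instances
for _ in range(n_trees):
    sel = rng.choice(len(X_tr), size=m, replace=False)
    aX, ay = X_tr[sel], y_tr[sel]
    tree = DecisionTreeClassifier(random_state=0).fit(disturb(X_tr, aX, ay), y_tr)
    trees.append((tree, aX, ay))

# Majority vote of the disturbed trees.
votes = np.array([t.predict(disturb(X_te, aX, ay)) for t, aX, ay in trees])
maj = np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
print("test accuracy:", (maj == y_te).mean())
```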


Pattern Recognition Letters | 2010

Forests of nested dichotomies

Juan José Rodríguez; César Ignacio García-Osorio; Jesús Maudes

Ensemble methods are often able to generate more accurate classifiers than the individual classifiers. In multiclass problems, it is possible to obtain an ensemble by combining binary classifiers. It makes sense to use a multiclass method for constructing the binary classifiers, because the ensemble of binary classifiers can be more accurate than the individual multiclass classifier. Ensembles of nested dichotomies (END) is a method for dealing with multiclass classification problems using binary classifiers. A nested dichotomy organizes the classes in a tree; each internal node has a binary classifier. A set of classes can be organized in different ways in a nested dichotomy, and an END is formed by several nested dichotomies. This paper studies the use of this method in conjunction with ensembles of decision trees (forests). Although forest methods can deal directly with several classes, their accuracy can be improved if they are used as base classifiers for ensembles of nested dichotomies. Moreover, accuracy can be improved even further using forests of nested dichotomies, that is, ensemble methods whose base classifiers are nested dichotomies of decision trees. The improvements over forest methods can be explained by the increased diversity of the base classifiers. The best overall results were obtained using MultiBoost with resampling.
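A nested dichotomy can be sketched as a recursive random split of the class set, with one binary classifier per internal node; an END then votes over several such trees. Logistic regression as the node classifier and the iris dataset are illustrative assumptions (the paper uses decision-tree ensembles at the nodes):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def build_nd(classes, X, y, rng):
    """Recursively build a random nested dichotomy over the classes."""
    if len(classes) == 1:
        return classes[0]                       # leaf: a single class
    split = rng.permutation(classes)
    left = list(split[: len(split) // 2])
    right = list(split[len(split) // 2:])
    mask = np.isin(y, classes)
    target = np.isin(y[mask], right).astype(int)   # 0 = left, 1 = right
    clf = LogisticRegression(max_iter=1000).fit(X[mask], target)
    return (clf, build_nd(left, X, y, rng), build_nd(right, X, y, rng))

def predict_nd(node, x):
    while isinstance(node, tuple):              # descend until a leaf class
        clf, left, right = node
        node = right if clf.predict(x.reshape(1, -1))[0] == 1 else left
    return node

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
rng = np.random.default_rng(0)

# An END: majority vote over several random nested dichotomies.
forest = [build_nd(list(np.unique(y_tr)), X_tr, y_tr, rng) for _ in range(5)]
votes = np.array([[predict_nd(t, x) for x in X_te] for t in forest])
maj = np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
print("test accuracy:", (maj == y_te).mean())
```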


Applied Intelligence | 2011

Random projections for linear SVM ensembles

Jesús Maudes; Juan José Rodríguez; César Ignacio García-Osorio; Carlos Pardo

This paper presents an experimental study of different projection strategies and techniques for improving the performance of Support Vector Machine (SVM) ensembles. The study covers 62 UCI datasets, using Principal Component Analysis (PCA) and three types of Random Projections (RP), taking into account the size of the projected space and using linear SVMs as base classifiers. Random Projections are also combined with the sparse-matrix strategy used by Rotation Forest, a method likewise based on projections. Experiments show that for SVM ensembles (i) the sparse-matrix strategy leads to the best results, (ii) results improve when the dimension of the projected space is larger than that of the original space, and (iii) Random Projections also enhance the results when used instead of PCA. Finally, randomly projected SVMs are tested as base classifiers of some state-of-the-art ensembles, improving their performance.
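A minimal version of such an ensemble: each base linear SVM is trained on its own Gaussian random projection into a higher-dimensional space, and predictions are majority-voted. The dataset, the 40-dimensional target space and the 15 ensemble members are illustrative assumptions:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.random_projection import GaussianRandomProjection
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=600, n_features=20, n_informative=10,
                           random_state=2)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=2)

# Each base SVM sees its own random projection into a HIGHER-dimensional
# space (the paper reports gains when the projected dimension exceeds
# the original one).
members = []
for seed in range(15):
    rp = GaussianRandomProjection(n_components=40, random_state=seed)
    Z = rp.fit_transform(X_tr)
    svm = LinearSVC(max_iter=5000).fit(Z, y_tr)
    members.append((rp, svm))

votes = np.array([svm.predict(rp.transform(X_te)) for rp, svm in members])
maj = (votes.mean(axis=0) > 0.5).astype(int)    # majority vote (binary labels)
print("test accuracy:", (maj == y_te).mean())
```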


International Conference on Data Mining | 2012

Wind turbines fault diagnosis using ensemble classifiers

Pedro Santos; Luisa F. Villa; Aníbal Reñones; Andres Bustillo; Jesús Maudes

Fault diagnosis in machines that work under a wide range of speeds and loads is currently an active area of research, and wind turbines are one of the most recent industrial examples of such machines. Conventional vibration analysis applied to machines throughout their operation is of limited utility when the speed variation is too high. This work proposes an alternative methodology for fault diagnosis in machines: the combination of angular resampling techniques for vibration-signal processing with data-mining techniques for classifying the operational state of wind turbines. The methodology has been validated on a test-bed with a large variation of speeds and loads that simulates, on a smaller scale, the real conditions of wind turbines. On this test-bed, two of the most common fault typologies in wind turbines were generated: imbalance and misalignment. Several data-mining techniques were used to analyze the dataset obtained by order analysis, after processing the signals with the angular resampling technique. Specifically, the methods used are ensemble classifiers built with Bagging, AdaBoost, General Boosting Projection and Rotation Forest; the best results were achieved with AdaBoost using C4.5 decision trees as base classifiers.
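The angular (order-domain) resampling step can be illustrated with NumPy: given the shaft angle as a function of time, the vibration signal is re-interpolated at uniform angle increments, so order components stay at fixed spectral lines despite speed variation. The speed sweep, sampling rate and 64 samples per revolution below are illustrative assumptions:

```python
import numpy as np

def angular_resample(t, signal, shaft_angle, samples_per_rev=64):
    """Resample a time-domain signal at uniform shaft-angle increments
    (computed order tracking)."""
    n_revs = int(shaft_angle[-1] // (2 * np.pi))
    uniform_angle = np.arange(n_revs * samples_per_rev) * 2 * np.pi / samples_per_rev
    # times at which the shaft reaches each uniform angle
    t_uniform = np.interp(uniform_angle, shaft_angle, t)
    return np.interp(t_uniform, t, signal)

# Demo: shaft accelerating from 10 to 20 Hz, vibration locked to the
# 3rd shaft order.
fs = 5000.0
t = np.arange(0, 2.0, 1 / fs)
freq = 10 + 5 * t                          # instantaneous shaft frequency (Hz)
shaft_angle = 2 * np.pi * np.cumsum(freq) / fs
signal = np.sin(3 * shaft_angle)           # 3rd-order vibration component

resampled = angular_resample(t, signal, shaft_angle)
spectrum = np.abs(np.fft.rfft(resampled))
order = spectrum.argmax() / (len(resampled) / 64)   # bin index -> shaft order
print("dominant shaft order:", round(order, 2))
```

After resampling, the spectrum peak sits at order 3 even though the time-domain frequency sweeps from 30 to 60 Hz.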


International Conference on Machine Learning and Applications | 2012

Disturbing Neighbors Ensembles of Trees for Imbalanced Data

Juan José Rodríguez; José F. Díez-Pastor; Jesús Maudes; César Ignacio García-Osorio

Disturbing Neighbors (DN) is a method for generating classifier ensembles that can be combined with any other ensemble method, generally improving the results. This paper considers the application of these ensembles to imbalanced data: classification problems where the class proportions are significantly different. DN ensembles are compared and combined with Bagging, using three tree methods as base classifiers: conventional decision trees (C4.5), Hellinger distance decision trees (HDDT, a method designed for imbalanced data) and model trees (M5P, trees with linear models at the leaves). The methods are compared on two collections of imbalanced datasets, with 20 and 66 datasets, respectively. The best results are obtained by combining Bagging and DN with conventional decision trees.
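HDDT's skew-insensitive split criterion is the Hellinger distance between the class-conditional distributions a binary split induces; a small sketch with hypothetical split masks on a heavily imbalanced toy label vector (5% positives):

```python
import numpy as np

def hellinger_split(y, left_mask):
    """Hellinger distance between the per-class branch distributions of a
    binary split -- the skew-insensitive criterion used by HDDT."""
    pos, neg = (y == 1), (y == 0)
    tp = (left_mask & pos).sum() / max(pos.sum(), 1)   # P(left | positive)
    fp = (left_mask & neg).sum() / max(neg.sum(), 1)   # P(left | negative)
    return np.sqrt((np.sqrt(tp) - np.sqrt(fp)) ** 2
                   + (np.sqrt(1 - tp) - np.sqrt(1 - fp)) ** 2)

# A split that captures the minority class scores high even under heavy
# imbalance, while a random split scores near zero.
rng = np.random.default_rng(0)
y = (rng.random(1000) < 0.05).astype(int)
good = (y == 1) | (rng.random(1000) < 0.1)   # isolates all positives
bad = rng.random(1000) < 0.5                 # random split
print(hellinger_split(y, good), hellinger_split(y, bad))
```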


International Conference on Industrial Engineering and Other Applications of Applied Intelligent Systems | 2010

An empirical study of multilayer perceptron ensembles for regression tasks

Carlos Pardo; Juan José Rodríguez; César Ignacio García-Osorio; Jesús Maudes

This work presents an experimental study of ensemble methods for regression, using Multilayer Perceptrons (MLPs) as the base method and 61 datasets. The ensemble methods considered are Randomization, Random Subspaces, Bagging, Iterated Bagging and AdaBoost.R2. Surprisingly, and in contradiction to previous studies, the best overall results are obtained by Bagging. The cause of this difference may be the base method: MLPs instead of regression or model trees. Diversity-error diagrams are used to analyze the behaviour of the ensemble methods. Compared to Bagging, the additional diversity obtained with the other methods does not compensate for the increase in the errors of the ensemble members.


International Conference on Multiple Classifier Systems | 2007

Cascading for nominal data

Jesús Maudes; Juan José Rodríguez; César Ignacio García-Osorio

Many pattern recognition methods need numbers as inputs, so using nominal datasets with them requires transforming the data into numerical form. Usually, this transformation encodes each nominal attribute as a group of binary attributes (one for each possible nominal value). This approach, however, can be enhanced for certain methods (e.g., those requiring linearly separable data representations). In this paper, different alternatives are evaluated for enhancing SVM (Support Vector Machine) accuracy on nominal data. Some of these approaches convert nominal attributes into continuous ones using distance metrics (e.g., VDM, the Value Difference Metric). Others combine the SVM with another classifier that can work directly with nominal data (e.g., a decision tree). An experimental validation on 27 datasets shows that Cascading, with an SVM at Level-2 and a decision tree at Level-1, is a very interesting solution compared with other combinations of these base classifiers and with VDM.
