
Publications


Featured research published by Marcelo R.P. Ferreira.


Fuzzy Sets and Systems | 2014

Kernel fuzzy c-means with automatic variable weighting

Marcelo R.P. Ferreira; Francisco de A. T. de Carvalho

This paper presents variable-wise kernel fuzzy c-means clustering methods in which dissimilarity measures are obtained as sums of Euclidean distances between patterns and centroids computed individually for each variable by means of kernel functions. The advantage of the proposed approach over the conventional kernel clustering methods is that it allows us to use adaptive distances which change at each algorithm iteration and can either be the same for all clusters or different from one cluster to another. This kind of dissimilarity measure is suitable to learn the weights of the variables during the clustering process, improving the performance of the algorithms. Another advantage of this approach is that it allows the introduction of various fuzzy partition and cluster interpretation tools. Experiments with synthetic and benchmark datasets show the usefulness of the proposed algorithms and the merit of the fuzzy partition and cluster interpretation tools.
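
For illustration, the minimal sketch below computes the kind of variable-wise, weighted kernel-induced dissimilarity described above. It assumes per-variable Gaussian kernels; the function names, the fixed width `sigma2` and the toy data are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def gaussian_kernel(a, b, sigma2=1.0):
    """Per-variable Gaussian kernel K(a, b) = exp(-(a - b)^2 / (2 * sigma2))."""
    return np.exp(-((a - b) ** 2) / (2.0 * sigma2))

def variablewise_kernel_distance(x, g, weights, sigma2=1.0):
    """Weighted sum over variables of kernel-induced squared distances.

    For a normalized kernel (K(a, a) = 1), the squared distance induced in the
    feature space for one variable is 2 * (1 - K(x_j, g_j)); `weights` plays the
    role of the adaptive variable weights learned during clustering.
    """
    per_variable = 2.0 * (1.0 - gaussian_kernel(x, g, sigma2))
    return float(np.sum(weights * per_variable))

# Toy usage: one pattern, one centroid, three variables.
x = np.array([1.0, 2.0, 3.0])
g = np.array([1.1, 1.9, 0.0])
weights = np.array([1.0, 1.0, 1.0])  # adaptive weights would be re-estimated each iteration
print(variablewise_kernel_distance(x, g, weights))
```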


Pattern Recognition | 2014

Kernel-based hard clustering methods in the feature space with automatic variable weighting

Marcelo R.P. Ferreira; Francisco de A. T. de Carvalho

This paper presents variable-wise kernel hard clustering algorithms in the feature space in which dissimilarity measures are obtained as sums of squared distances between patterns and centroids computed individually for each variable by means of kernels. The methods proposed in this paper are supported by the fact that a kernel function can be written as a sum of kernel functions evaluated on each variable separately. The main advantage of this approach is that it allows the use of adaptive distances, which are suitable for learning the weights of the variables on each cluster, providing better performance. Moreover, various partition and cluster interpretation tools are introduced. Experiments with synthetic and benchmark datasets show the usefulness of the proposed algorithms and the merit of the partition and cluster interpretation tools.
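
The decomposition invoked above, a kernel written as a sum of per-variable kernels, implies that the feature-space distance also splits across variables. A short sketch of this identity (the symbols x_k for a pattern, g_i for a centroid and p for the number of variables are introduced here only for illustration):

```latex
% If K(x, y) = \sum_j K_j(x_j, y_j) is additive over variables, the squared
% distance between mapped points splits into per-variable terms.
\[
\|\varphi(\mathbf{x}_k) - \varphi(\mathbf{g}_i)\|^2
  = K(\mathbf{x}_k, \mathbf{x}_k) - 2K(\mathbf{x}_k, \mathbf{g}_i) + K(\mathbf{g}_i, \mathbf{g}_i)
  = \sum_{j=1}^{p} \bigl[ K_j(x_{kj}, x_{kj}) - 2K_j(x_{kj}, g_{ij}) + K_j(g_{ij}, g_{ij}) \bigr].
\]
```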


Pattern Recognition | 2016

Kernel-based hard clustering methods with kernelization of the metric and automatic weighting of the variables

Marcelo R.P. Ferreira; Francisco de A. T. de Carvalho; Eduardo C. Simões

This paper presents kernel-based hard clustering methods with kernelization of the metric and automatic weighting of the variables. The proposed methodology is supported by the fact that a kernel function can be written as a sum of kernels evaluated separately on each variable. Thus, in the proposed algorithms dissimilarity measures are obtained as sums of Euclidean distances between patterns and centroids computed individually for each variable by means of kernels. The main advantage of this approach over the conventional approach is that it allows the use of kernelized adaptive distances, which are suitable to learn the weights of the variables dynamically, improving the performance of the algorithms. Moreover, various partition and cluster interpretation tools are introduced. Experiments with a number of benchmark datasets corroborate the usefulness of the proposed algorithms and the merit of the partition and cluster interpretation tools.

Highlights: presents kernel-based clustering methods with automatic weighting of the variables; kernelized local and global adaptive distances are introduced; the proposed algorithms are suitable to learn the weights of the variables; partition and cluster interpretation tools are given; experiments with several benchmark data sets corroborate the proposed methods.
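
In this family of methods, adaptive variable weights are typically obtained in closed form under a normalization constraint. A hedged sketch of one common choice, a product constraint on the weights, is shown below; the exact constraint and update used in this paper may differ.

```latex
% Hedged sketch: closed-form weight update under the constraint
% \prod_j \lambda_j = 1, where d_j is the per-variable kernel-induced distance,
% c is the number of clusters and C_i is the i-th cluster.
\[
\lambda_j \;=\; \frac{\bigl(\prod_{h=1}^{p} \Delta_h\bigr)^{1/p}}{\Delta_j},
\qquad
\Delta_j \;=\; \sum_{i=1}^{c} \sum_{\mathbf{x}_k \in C_i} d_j\!\left(x_{kj}, g_{ij}\right).
\]
```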


IEEE International Conference on Fuzzy Systems | 2012

Kernel fuzzy clustering methods based on local adaptive distances

Marcelo R.P. Ferreira; Francisco de A. T. de Carvalho

This paper presents kernel fuzzy clustering methods in which dissimilarity measures are obtained as sums of squared Euclidean distances between patterns and centroids computed individually for each variable by means of kernel functions. The advantage of the proposed approach over conventional kernel clustering methods is that it allows us to use adaptive distances which change at each algorithm iteration and can be different from one cluster to another. This kind of dissimilarity measure is suitable to learn the weights of the variables during the clustering process, improving the performance of the algorithms. Another advantage of this approach is that it allows the introduction of various fuzzy partition and cluster interpretation tools. Experiments with benchmark data sets illustrate the usefulness of our algorithms and the merit of the fuzzy partition and cluster interpretation tools.


Neurocomputing | 2017

A robust regression method based on exponential-type kernel functions

Francisco de A. T. de Carvalho; Eufrásio de Andrade Lima Neto; Marcelo R.P. Ferreira

Robust regression methods are commonly needed in practical situations due to the presence of outliers. In this paper we propose a robust regression method that penalizes badly fitted observations (outliers) through the use of exponential-type kernel functions in the iterative parameter estimation process. The weights given to each observation are updated at each iteration in order to optimize a suitable objective function. The convergence of the parameter estimation algorithm is guaranteed with a low computational cost. Its performance is sensitive to the choice of the initial values for the vector of parameters of the regression model as well as to the width hyper-parameter estimator of the kernel functions. A simulation study with synthetic data sets revealed that some width hyper-parameter estimators can improve the performance of the proposed approach and that the ordinary least squares (OLS) method is a suitable choice for the initial values of the parameter vector. A comparative study between the proposed method and some classical robust approaches (WLS, M-Estimator, MM-Estimator and L1 regression), as well as the OLS method, is also considered. The performance of these methods is evaluated based on the bias and mean squared error (MSE) of the parameter estimates, considering synthetic data sets with X-space outliers, Y-space outliers and leverage points, different sample sizes and percentages of outliers in a Monte Carlo framework. The results suggest that the proposed approach presents competitive (or better) performance in outlier scenarios comparable to those found in real problems. The proposed method also exhibits performance similar to the OLS method when no outliers are present and requires about half the computational time of the MM-Estimator method. Applications to real data sets corroborate the usefulness of the proposed method.
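
For illustration, a minimal iteratively reweighted least squares sketch in the spirit described above: observation weights are exponential-type kernels of the current residuals and the parameters start from the OLS solution. The bandwidth rule (median squared residual) and all names are assumptions made for the example, not the paper's estimator.

```python
import numpy as np

def robust_kernel_regression(X, y, n_iter=50, tol=1e-8):
    """IRLS with Gaussian (exponential-type) weights on the residuals."""
    X1 = np.column_stack([np.ones(len(y)), X])        # add an intercept column
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)     # OLS starting values
    for _ in range(n_iter):
        residuals = y - X1 @ beta
        bandwidth = np.median(residuals ** 2) + 1e-12  # illustrative width estimator
        w = np.exp(-(residuals ** 2) / (2.0 * bandwidth))  # outliers get small weights
        W = np.diag(w)
        beta_new = np.linalg.solve(X1.T @ W @ X1, X1.T @ W @ y)
        if np.max(np.abs(beta_new - beta)) < tol:
            beta = beta_new
            break
        beta = beta_new
    return beta

# Toy usage: a line with one gross Y-space outlier.
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(50, 1))
y = 2.0 + 3.0 * X[:, 0] + rng.normal(0, 0.5, size=50)
y[0] += 100.0
print(robust_kernel_regression(X, y))
```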


International Symposium on Neural Networks | 2016

A Gaussian Kernel-based Clustering Algorithm with Automatic Hyper-parameters Computation

Francisco de A. T. de Carvalho; Marcelo R.P. Ferreira; Eduardo C. Simões

The clustering performance of conventional Gaussian kernel-based clustering algorithms is very dependent on the estimation of the width hyper-parameter of the Gaussian kernel function. Usually this parameter is estimated once and for all. This paper presents a Gaussian c-means algorithm with kernelization of the metric that depends on a vector of width hyper-parameters, one for each variable, which are computed automatically. Experiments with data sets from the UCI machine learning repository corroborate the usefulness of the proposed algorithm.
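
A minimal sketch of a Gaussian kernel whose metric depends on one width hyper-parameter per variable, and the induced kernelized distance used by c-means-type algorithms. The fixed toy widths below stand in for the values the algorithm would compute automatically; the names are illustrative.

```python
import numpy as np

def gaussian_kernel_per_variable(x, g, widths):
    """K(x, g) = exp(-sum_j (x_j - g_j)^2 / (2 * s_j^2)) with per-variable widths s_j."""
    return np.exp(-np.sum(((x - g) ** 2) / (2.0 * widths ** 2)))

def kernelized_distance(x, g, widths):
    """Squared feature-space distance 2 * (1 - K(x, g)) for a normalized kernel."""
    return 2.0 * (1.0 - gaussian_kernel_per_variable(x, g, widths))

x = np.array([0.2, 5.0])
g = np.array([0.0, 4.0])
widths = np.array([0.1, 2.0])   # placeholders for automatically computed widths
print(kernelized_distance(x, g, widths))
```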


Brazilian Symposium on Neural Networks | 2012

Variable-Wise Kernel-Based Clustering Algorithms for Interval-Valued Data

Francisco de A. T. de Carvalho; Gibson B. N. Barbosa; Marcelo R.P. Ferreira

This paper presents partitioning hard kernel clustering algorithms for interval-valued data based on adaptive distances. These adaptive distances are obtained as sums of squared Euclidean distances between interval-valued data computed individually for each interval-valued variable by means of kernel functions. The advantage of the proposed approach over conventional kernel clustering approaches for interval-valued data is that it allows the relevance weights of the variables to be learned during the clustering process, improving the performance of the algorithms. Experiments with real interval-valued data sets show the usefulness of these kernel clustering algorithms.
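
For illustration, a minimal sketch of a variable-wise kernel distance for interval-valued data, where each variable of a pattern is an interval [lower, upper] and the bounds are compared through per-variable Gaussian kernels. Treating the bounds this way is an assumption made for the example, not necessarily the paper's exact formulation.

```python
import numpy as np

def gk(a, b, sigma2=1.0):
    """Per-variable Gaussian kernel on a single bound."""
    return np.exp(-((a - b) ** 2) / (2.0 * sigma2))

def interval_kernel_distance(x_low, x_up, g_low, g_up, weights, sigma2=1.0):
    """Weighted sum over variables of kernel-induced distances on interval bounds."""
    per_var = (2.0 - 2.0 * gk(x_low, g_low, sigma2)) + (2.0 - 2.0 * gk(x_up, g_up, sigma2))
    return float(np.sum(weights * per_var))

# Toy usage: two interval-valued variables, e.g. daily [min, max] readings.
x_low, x_up = np.array([10.0, 3.0]), np.array([18.0, 7.0])
g_low, g_up = np.array([11.0, 2.5]), np.array([17.5, 6.0])
weights = np.array([1.0, 1.0])   # relevance weights learned during clustering
print(interval_kernel_distance(x_low, x_up, g_low, g_up, weights))
```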


International Conference on Artificial Neural Networks | 2018

Gaussian Kernel-Based Fuzzy Clustering with Automatic Bandwidth Computation

Francisco de A. T. de Carvalho; Lucas V.C. Santana; Marcelo R.P. Ferreira

The conventional Gaussian kernel-based fuzzy c-means clustering algorithm has widely demonstrated its superiority over the conventional fuzzy c-means when the data sets are arbitrarily shaped and not linearly separable. However, its performance is very dependent on the estimation of the bandwidth parameter of the Gaussian kernel function. Usually this parameter is estimated once and for all. This paper presents a Gaussian fuzzy c-means with kernelization of the metric which depends on a vector of bandwidth parameters, one for each variable, that are computed automatically. Experiments with data sets from the UCI machine learning repository corroborate the usefulness of the proposed algorithm.


Pattern Recognition | 2018

Gaussian kernel c-means hard clustering algorithms with automated computation of the width hyper-parameters

Francisco de A. T. de Carvalho; Eduardo C. Simões; Lucas V.C. Santana; Marcelo R.P. Ferreira

Conventional Gaussian kernel c-means clustering algorithms are widely used in applications. However, Gaussian kernel functions have an important parameter, the width hyper-parameter, which needs to be tuned. Usually this parameter is tuned once and for all and is the same for all variables. Thus, implicitly, all the variables are equally rescaled and therefore have equal importance in the clustering task. This paper presents Gaussian kernel c-means hard clustering algorithms with automated computation of the width hyper-parameters. In these kernel-based clustering algorithms, the hyper-parameters change at each iteration of the algorithm, differ from variable to variable and can differ from cluster to cluster. Because each variable is rescaled differently according to its own hyper-parameter, these algorithms can select the important variables in the clustering process. Experiments using synthetic data sets and UCI machine learning repository data sets corroborate the usefulness of the proposed algorithms.
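
As a hedged illustration of width hyper-parameters that vary per variable and per cluster, one possible parameterization with a product-type constraint is sketched below; the exact constraint and update used in the paper may differ.

```latex
% Hedged sketch: one width s_{ij} per cluster i and variable j, kept
% identifiable through a product constraint with a fixed gamma > 0.
\[
K_i(\mathbf{x}_k, \mathbf{g}_i) \;=\;
\exp\!\Bigl(-\tfrac{1}{2}\sum_{j=1}^{p} \frac{(x_{kj} - g_{ij})^2}{s_{ij}^{2}}\Bigr),
\qquad
\prod_{j=1}^{p} \frac{1}{s_{ij}^{2}} \;=\; \gamma .
\]
```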


International Symposium on Neural Networks | 2014

A kernel k-means clustering algorithm based on an adaptive Mahalanobis kernel

Marcelo R.P. Ferreira; Francisco de A. T. de Carvalho

In this paper, a kernel k-means algorithm based on an adaptive Mahalanobis kernel is proposed. This kernel is built based on an adaptive quadratic distance defined by a symmetric positive definite matrix that changes at each algorithm iteration and takes into account the correlations between variables, allowing the discovery of clusters with non-hyperspherical shapes. The effectiveness of the proposed algorithm is demonstrated through experiments with synthetic and benchmark datasets.
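
A minimal sketch of a Mahalanobis kernel built from a symmetric positive definite matrix. Using the inverse sample covariance as that matrix is an illustrative choice for this example; in the adaptive algorithm described above, the matrix is re-estimated at each iteration.

```python
import numpy as np

def mahalanobis_kernel(x, y, M):
    """exp(-0.5 * (x - y)^T M (x - y)) for a symmetric positive definite M."""
    d = x - y
    return np.exp(-0.5 * d @ M @ d)

# Toy usage: estimate M from correlated 2-D data.
rng = np.random.default_rng(1)
A = rng.normal(size=(200, 2)) @ np.array([[1.0, 0.8], [0.0, 0.6]])
M = np.linalg.inv(np.cov(A, rowvar=False))   # SPD if the covariance is non-singular
print(mahalanobis_kernel(A[0], A[1], M))
```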

Collaboration


Dive into Marcelo R.P. Ferreira's collaborations.

Top Co-Authors

Eduardo C. Simões

Federal University of Pernambuco

Lucas V.C. Santana

Federal University of Pernambuco
