Alexandre W. C. Faria
Universidade Federal de Minas Gerais
Publication
Featured research published by Alexandre W. C. Faria.
brazilian symposium on computer graphics and image processing | 2009
Daniel S. D. Lara; Alexandre W. C. Faria; Arnaldo de Albuquerque Araújo; David Menotti
Medical diagnosis using images now plays a considerable role in many areas of medicine, easing the acquisition, transmission, and analysis of medical data. The use of digital images for disease evaluation and diagnosis is still growing, and new application modalities keep appearing. This paper presents a methodology for semi-automatic segmentation of the coronary artery tree in 2D X-ray angiographies. It combines a region-growing algorithm with a differential geometry approach. The proposed segmentation method identifies about 90% of the main coronary artery tree.
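As an illustration only (not the authors' implementation), the region-growing half of such a pipeline can be sketched as an intensity-threshold flood fill from a user-supplied seed; the function name, 4-connectivity, and absolute-difference acceptance rule below are assumptions:

```python
from collections import deque

import numpy as np


def region_grow(image, seed, threshold):
    """Segment a connected region by flood-filling from `seed`,
    accepting 4-connected neighbours whose intensity is within
    `threshold` of the seed pixel's intensity."""
    h, w = image.shape
    seed_value = int(image[seed])
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and not mask[ny, nx]
                    and abs(int(image[ny, nx]) - seed_value) <= threshold):
                mask[ny, nx] = True
                queue.append((ny, nx))
    return mask
```

In a real angiography pipeline the seed would be placed inside a vessel and the mask refined afterwards; here the sketch only shows the growth mechanism.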
acm symposium on applied computing | 2010
Alexandre W. C. Faria; David Menotti; Daniel S. D. Lara; Gisele L. Pappa; Arnaldo de Albuquerque Araújo
This work proposes a new methodology for automatically validating the internal lighting system of an automobile, i.e., assessing the visual quality of an instrument cluster (IC) from the point of view of the user. Although the evaluation of the visual quality of a component is a subjective matter, it is strongly influenced by certain photometric features of the component, such as the light intensity distribution. The methodology proposed here uses this photometric feature to classify regions in images of instrument cluster components as homogeneous or not, while also taking into account the user's subjective evaluation. To achieve this, we acquired and preprocessed a set of 107 IC component images. These same components were evaluated by a user to identify their non-homogeneous regions. Then, for each component region, we extracted a set of homogeneity descriptors. These descriptors were associated with the results of the user evaluation and given to two machine learning algorithms. The algorithms were trained to identify a region as homogeneous or not, and showed that the proposed methodology obtains precision above 95%.
Engineering Applications of Artificial Intelligence | 2017
Alexandre W. C. Faria; Frederico Coelho; Alisson Marques Silva; Honovan Paz Rocha; G. M. Almeida; André Paim Lemos; Antônio de Pádua Braga
Multiple Instance Learning (MIL) is a recent learning paradigm based on the assignment of a single label to a set of instances called a bag. A bag is positive if it contains at least one positive instance, and negative otherwise. This work proposes a new algorithm based on likelihood computation by means of Kernel Density Estimation (KDE), called MILKDE. Its performance, using the LogitBoost classifier, was compared to that of forty-three MIL algorithms available in the literature on five data sets. Our proposal outperformed all of them on the Elephant (87.40%), Fox (66.80%), and COREL 2000 (77.8%) data sets, and achieved competitive results on the MUSK 1 (89.20%) and MUSK 2 (87.50%) data sets, comparable to the highest accuracies obtained by other methods on these data sets. Overall, the results are statistically comparable to those obtained by the best-known MIL methods described in the literature.
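As a hedged sketch of the likelihood idea behind a KDE-based MIL scorer (this is not the published MILKDE algorithm; the 1-D Gaussian kernel, the max-over-instances bag rule, and all names below are illustrative assumptions):

```python
import numpy as np


def gaussian_kde(points, x, bandwidth=1.0):
    """Gaussian kernel density estimate at x from a 1-D sample."""
    diffs = (x - points) / bandwidth
    norm = len(points) * bandwidth * np.sqrt(2.0 * np.pi)
    return np.exp(-0.5 * diffs ** 2).sum() / norm


def bag_score(bag, pos_instances, neg_instances, bandwidth=1.0):
    """Score a bag by the largest positive-vs-negative likelihood
    ratio over its instances: under the MIL assumption, one strongly
    positive instance is enough to make the whole bag positive."""
    return max(
        gaussian_kde(pos_instances, x, bandwidth)
        / max(gaussian_kde(neg_instances, x, bandwidth), 1e-12)
        for x in bag
    )
```

A score above 1 marks the bag as more likely positive; real MIL features are multivariate, so this 1-D version only conveys the likelihood-ratio intuition.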
BRICS-CCI-CBIC '13 Proceedings of the 2013 BRICS Congress on Computational Intelligence and 11th Brazilian Congress on Computational Intelligence | 2013
Alisson Marques Silva; Alexandre W. C. Faria; Thiago de Souza Rodrigues; Marcelo Azevedo Costa; Antônio de Pádua Braga
Acute leukemia classification into its Myeloid and Lymphoblastic subtypes is usually accomplished according to the morphological appearance of the tumor. Nevertheless, cells from the two subtypes can have similar histopathological appearance, which makes screening procedures very difficult. Correct classification of patients in the initial phases of the disease would allow doctors to properly prescribe cancer treatment. Therefore, the development of alternatives to the usual morphological classification is needed in order to improve classification rates and treatment. This paper is based on the principle that DNA microarray data extracted from tumors contain sufficient information to differentiate leukemia subtypes. The classification task is described as a general pattern recognition problem, requiring initial representation by causal quantitative features, followed by the construction of a classifier. To show the validity of our methods, a publicly available acute leukemia dataset comprising 72 samples with 7,129 features was used. The dataset was split into two subsets: a training set with 38 samples and a test set with 34 samples. Feature selection methods were applied to the training set, and the 50 most predictive genes according to each method were selected. Artificial Neural Network (ANN) classifiers were developed to compare the feature selection methods. Among the 50 genes selected using the best classifier, 21 are consistent with previous work and 4 additional ones are clearly related to tumor molecular processes. The remaining 25 selected genes were still able to classify the test set correctly using the ANN.
ChemBioChem | 2016
Alexandre W. C. Faria; Cristiano Leite Castro; Antônio de Pádua Braga
In this paper, a new oversampling method is proposed to improve the representativeness of minority groups in the training data set. Our methodology creates artificial (synthetic) examples on the basis of the spatial distribution of the classes. The original data are expanded (duplicated) along the lines connecting the class centroid and each minority pattern under consideration. In contrast to other methods known in the literature (such as SMOTE), our geometric approach to data generation has the advantage of being accomplished in a straightforward way, i.e., without requiring the user to define any parameters. Experiments conducted with real and synthetic data show that our solution to the class imbalance problem improves both the number of correct minority classifications and the balance between the class accuracies.
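A minimal sketch of this geometric expansion, assuming synthetic points are placed at random positions along the centroid-to-pattern segments (the function name, the random interpolation factors, and the fixed seed are assumptions made for illustration, since the abstract describes duplication along those lines without fixing the sampling rule):

```python
import numpy as np


def geometric_oversample(minority, n_new, seed=0):
    """Create `n_new` synthetic minority samples on the line segments
    joining the minority-class centroid to randomly chosen minority
    points: a parameter-free geometric expansion of the class."""
    centroid = minority.mean(axis=0)
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, len(minority), size=n_new)
    t = rng.uniform(0.0, 1.0, size=(n_new, 1))
    # Convex combination: centroid + t * (point - centroid)
    return centroid + t * (minority[idx] - centroid)
```

Because every synthetic point is a convex combination of the centroid and an existing minority pattern, the expanded set never leaves the minority class's convex hull.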
9. Congresso Brasileiro de Redes Neurais | 2016
Alexandre W. C. Faria; David Menotti; Daniel S. D. Lara; Arnaldo de Albuquerque Araújo
This work proposes a methodology for automatically validating the internal lighting system of a vehicle, assessing the visual quality of an automotive instrument cluster based on human perception. Although the evaluation of visual quality is a subjective matter, it is influenced by certain photometric characteristics of the instrument's lighting, such as the distribution of light intensity. This work develops a methodology to identify and quantify non-homogeneous regions in the lighting distribution of an instrument from a digital image. To carry out this task, 107 instrument images were captured (speedometers, engine revolutions-per-minute (RPM) indicators, and speed and temperature gauges). These instruments were evaluated by human observers to identify which regions were homogeneous and which were not. Then, for each region found in an instrument, a set of homogeneity descriptors was extracted. This work also proposes a relational descriptor aimed at understanding the influence of a region's homogeneity relative to the other regions that make up the instrument. These descriptors were associated with the labels produced by the human observers and fed to two machine learning algorithms (Artificial Neural Networks - ANN and Support Vector Machines - SVM), which were trained to classify the regions as homogeneous or not. The work also presents a careful analysis of the subjective evaluations performed by the users and a specialist. The methodology achieved a precision above 94%, both for the classification of regions and for the final classification of the instrument.
Revista De Informática Teórica E Aplicada | 2011
Alexandre W. C. Faria; Leandro Pfleger de Aguiar; Daniel S. D. Lara; Antonio Alfredo Ferreira Loureiro
Historically, energy management in computer science has been treated as a predominantly hardware-oriented activity. Even today, much of the effort in the area is concerned with activating or deactivating components and scheduling resources so as to reduce total power consumption. This work addresses power consumption from the developer's point of view, using a reliable power-measurement framework to validate common programming premises from the literature, such as the claim that multiplication operations are power-hungry. Besides elementary operations and suggested alternatives for reducing power consumption at the programming stage, two widely known algorithms for big-number multiplication, Karatsuba and Toom-Cook, were also compared. The results lead to conclusions that help the developer, at the programming stage, to choose in some cases the best technique to reduce power consumption, to speed up the software, or to make decisions that keep the final software under a given maximum power budget.
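For reference, Karatsuba's method, one of the two big-number multiplication algorithms compared above, replaces the four sub-multiplications of schoolbook splitting with three, at the cost of extra additions; a minimal sketch (the paper measured implementations of this scheme, not this particular code):

```python
def karatsuba(x, y):
    """Multiply two non-negative integers by the three-way recursion
    xy = ac*B^2 + ((a+b)(c+d) - ac - bd)*B + bd,
    where x = a*B + b and y = c*B + d for a power-of-two base B."""
    if x < 10 or y < 10:
        return x * y  # small operands: multiply directly
    n = max(x.bit_length(), y.bit_length()) // 2
    base = 1 << n
    a, b = divmod(x, base)
    c, d = divmod(y, base)
    ac = karatsuba(a, c)
    bd = karatsuba(b, d)
    # One multiplication recovers both cross terms ad + bc
    cross = karatsuba(a + b, c + d) - ac - bd
    return ac * base * base + cross * base + bd
```

The single `cross` multiplication in place of two is exactly the trade (fewer multiplications, more additions) whose power-consumption impact the paper set out to measure.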
International Journal of Computer Science and Information Technology | 2013
Daniel S. D. Lara; Alexandre W. C. Faria; Arnaldo de Albuquerque Araújo; David Menotti
trust security and privacy in computing and communications | 2011
Alexandre W. C. Faria; Leandro Pfleger de Aguiar; Daniel S. D. Lara; Antonio Alfredo Ferreira Loureiro
Expert Systems With Applications | 2012
Alexandre W. C. Faria; David Menotti; Gisele L. Pappa; Daniel S. D. Lara; A. De Albuquerque Araujo