Edna Lúcia Flôres
Federal University of Uberlandia
Publications
Featured research published by Edna Lúcia Flôres.
Expert Systems With Applications | 2013
Marcelo Zanchetta do Nascimento; Alessandro Santana Martins; Leandro Alves Neves; Rodrigo Pereira Ramos; Edna Lúcia Flôres; Gilberto Arantes Carrijo
Breast cancer is the most common cancer among women. In computer-aided diagnosis (CAD) systems, several studies have investigated the wavelet transform as a multiresolution tool for texture analysis, with the resulting features serving as inputs to a classifier. For classification, the polynomial classifier has the advantage of providing a single model for the optimal separation of classes. In this paper, a system is proposed for texture analysis and classification of lesions in mammographic images. Multiresolution features were extracted from the region of interest of each image using three different wavelet functions: Daubechies 8, Symlet 8 and biorthogonal 3.7. A polynomial classification algorithm was then used to label the mammograms as normal or abnormal, and the results were compared with other artificial intelligence algorithms (decision tree, SVM, k-NN). A receiver operating characteristic (ROC) curve is used to evaluate the performance of the proposed system. Evaluated on 360 digitized mammograms from the DDSM database, the system achieved an area under the ROC curve (Az) of 0.98 +/- 0.03, and the polynomial classifier performed better than the other classification algorithms.
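A minimal sketch of the pipeline described in this abstract, not the authors' implementation: wavelet-energy features are computed with PyWavelets and fed to a degree-2 polynomial feature expansion with a linear model, used here as a stand-in for the polynomial classifier. The data, feature set and classifier settings are all assumptions for illustration.

```python
# Hypothetical sketch: wavelet-energy features + polynomial-expansion classifier
# (synthetic data; not the authors' DDSM setup).
import numpy as np
import pywt
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

def wavelet_energy_features(roi, wavelet="db8", level=3):
    """Energy of the approximation and detail subbands of a 2-D wavelet decomposition."""
    coeffs = pywt.wavedec2(roi, wavelet, level=level)
    feats = [np.mean(coeffs[0] ** 2)]                    # approximation energy
    for (cH, cV, cD) in coeffs[1:]:                      # detail subbands per level
        feats += [np.mean(cH ** 2), np.mean(cV ** 2), np.mean(cD ** 2)]
    return np.array(feats)

# Synthetic stand-in for regions of interest (128x128 patches) and labels.
rng = np.random.default_rng(0)
X = np.array([wavelet_energy_features(rng.normal(size=(128, 128))) for _ in range(200)])
y = rng.integers(0, 2, size=200)                         # 0 = normal, 1 = abnormal

# Degree-2 polynomial expansion + linear model as a stand-in for a polynomial classifier.
clf = make_pipeline(StandardScaler(), PolynomialFeatures(degree=2),
                    LogisticRegression(max_iter=1000))
clf.fit(X[:150], y[:150])
print("AUC:", roc_auc_score(y[150:], clf.predict_proba(X[150:])[:, 1]))
```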
Computers in Education | 2014
Francisco Ramos de Melo; Edna Lúcia Flôres; Sirlon Diniz de Carvalho; Ricardo Antonio Gonçalves Teixeira; Luis Fernando Batista Loja; Renato de Sousa Gomide
This paper presents an organization model for personalized didactic content used in individual study environments. For many students, content presented in a generic form may not be effective. A multilevel structure of concepts is proposed to provide different presentation combinations of the same content. Our work shows that it is possible to personalize didactic content in order to encourage students by using proximal learning patterns. These patterns are obtained by analysing the actions of students who achieved positive results in organizing the content individually. The system uses artificial intelligence techniques to reactively organize and personalize content. Personalization is made possible by an artificial neural network that classifies the student's profile and assigns it a proximal learning pattern, and expert rules are used to mediate and adjust the content reactively. Experimental results indicate that the approach is efficient and gives the student better use of the content through an adaptive, reactive, personalized presentation. Highlights: introduces a multilevel structure of contents; a multilevel structure of concepts allows automated, personalized presentation of content; introduces proximal learning patterns for personalization; employs artificial intelligence in the computational organization of didactic content; uses an artificial neural network to classify students into proximal learning patterns.
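A hedged sketch of the classification step only, assuming invented interaction features and three hypothetical learning-pattern classes; the paper's actual network architecture, features and expert rules are not reproduced here.

```python
# Hypothetical sketch: a small neural network assigning students to "proximal
# learning patterns" from interaction features (all names and data invented).
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
# Invented features per student: time on text, time on examples, quiz score, revisits.
X = rng.random((120, 4))
y = rng.integers(0, 3, size=120)          # 3 hypothetical learning-pattern classes

net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=1)
net.fit(X, y)

new_student = rng.random((1, 4))
pattern = net.predict(new_student)[0]
# An expert-rule layer (not shown) would then adjust which content levels are presented.
print("assigned proximal learning pattern:", pattern)
```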
IEEE Latin America Transactions | 2012
Luciano Xavier Medeiros; Gilberto Arantes Carrijo; Edna Lúcia Flôres; Antônio Cláudio Paschoarelli Veiga
Face recognition methods are computationally very expensive, consuming a great deal of memory and processing time; Principal Component Analysis (PCA) is an example of a method that allocates many computing resources. In order to reduce processing time, this paper develops a method that uses only genetic algorithms to perform face recognition; compared with the PCA method, it obtains higher accuracy rates and lower processing time.
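The sketch below illustrates only the mechanics of a genetic algorithm (selection, crossover, mutation); the paper's chromosome encoding and fitness function are not described in the abstract, so a toy fitness based on distance to a probe feature vector is assumed.

```python
# Minimal genetic-algorithm sketch (mechanics only; the encoding and fitness
# function here are toy assumptions, not the paper's method).
import numpy as np

rng = np.random.default_rng(2)
probe = rng.random(32)                       # stand-in for a face feature vector

def fitness(candidate):
    # Toy fitness: negative Euclidean distance to the probe vector.
    return -np.linalg.norm(candidate - probe)

pop = rng.random((40, 32))                   # initial population of candidates
for generation in range(200):
    scores = np.array([fitness(c) for c in pop])
    parents = pop[np.argsort(scores)[-20:]]  # selection: keep best half
    pairs = rng.integers(0, 20, size=(40, 2))
    pop = (parents[pairs[:, 0]] + parents[pairs[:, 1]]) / 2   # crossover: average pairs
    pop += rng.normal(scale=0.02, size=pop.shape)             # mutation: small noise

best = pop[np.argmax([fitness(c) for c in pop])]
print("distance of best individual to probe:", np.linalg.norm(best - probe))
```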
Research on Biomedical Engineering | 2016
Renato de Sousa Gomide; Luiz Fernando Batista Loja; Rodrigo Pinto Lemos; Edna Lúcia Flôres; Francisco Ramos de Melo; Ricardo Antonio Gonçalves Teixeira
Abstract Introduction: Due to the increasing popularization of computers and the expansion of the internet, Alternative and Augmentative Communication technologies have been employed to restore the ability to communicate of people with aphasia and tetraplegia. Virtual keyboards are one of the most basic mechanisms for alternative text entry and play a very important role in accomplishing this task. However, text entry with this kind of keyboard is much slower than with its physical counterpart. Many techniques and layouts have been proposed to improve the typing performance of virtual keyboards, each one addressing a different issue or solving a specific problem; however, not all of them are suitable to assist people with severe motor impairment. Methods: In order to develop an assistive virtual keyboard with improved typing performance, we performed a systematic review of scientific databases. Results: We found 250 related papers, 52 of which were selected to compose the review. From these, we identified eight essential virtual keyboard features, five methods to optimize data entry performance and five metrics to assess typing performance. Conclusion: Based on this review, we introduce the concept of an assistive, optimized, compact and adaptive virtual keyboard that gathers a set of suitable techniques: a new ambiguous keyboard layout, disambiguation algorithms, dynamic scan techniques, static prediction of letters and words and, finally, the use of phonetic and similarity algorithms to reduce the user's typing error rate.
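As an illustration of one of the surveyed techniques, the sketch below shows disambiguation on a hypothetical ambiguous layout in which several letters share a key and a frequency-ranked word list resolves the key sequence; the layout, lexicon and frequencies are invented and are not the keyboard proposed in the paper.

```python
# Hypothetical sketch of disambiguation on an ambiguous keyboard layout:
# several letters share one key, and a word list ranks the candidate words.

# Invented 4-key layout and toy lexicon (word -> frequency).
KEYS = {"1": "abcdef", "2": "ghijkl", "3": "mnopqrs", "4": "tuvwxyz"}
LEXICON = {"cat": 120, "bat": 40, "act": 15}

def key_sequence(word):
    """Map a word to the key sequence that produces it on the ambiguous layout."""
    return "".join(k for ch in word for k, letters in KEYS.items() if ch in letters)

def disambiguate(seq):
    """Return lexicon words matching the key sequence, most frequent first."""
    matches = [w for w in LEXICON if key_sequence(w) == seq]
    return sorted(matches, key=LEXICON.get, reverse=True)

# 'cat', 'bat' and 'act' all map to the same keys (1-1-4) and are ranked by frequency.
print(disambiguate(key_sequence("cat")))
```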
Neural Computing and Applications | 2013
Fabrízzio Alphonsus A. M. N. Soares; Edna Lúcia Flôres; Christian Dias Cabacinha; Gilberto Arantes Carrijo; Antônio Cláudio Paschoarelli Veiga
A very common problem in forestry is carrying out the forest inventory. The forest inventory is very important because it supports the trading of timber to be extracted in the medium and long term. To complete the inventory, it is necessary to measure diameters at several heights and the total height of each tree in order to calculate their volumes. However, due to the large number of trees and their heights, these measurements are extremely time consuming and expensive. In this work, a new approach for recursively predicting the diameters of eucalyptus trees by means of Multilayer Perceptron artificial neural networks is presented. Taking only three diameter measures at the base of the tree, diameters are predicted recursively until they reach the value of 4 cm, with no previous knowledge of the total tree height. Training was conducted with only 10% of the trees in the planted site, and the remaining 90% were used for testing. The Smalian method was used with the predicted diameters to calculate merchantable tree volumes. To check the performance of the model, all experiments were compared with a least-squares polynomial approximator, and the diameter and volume estimates from both methods were compared with the actual measured values. The performance of the proposed model was satisfactory when the predicted diameters and volumes were compared to the actual ones.
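A hedged sketch of the recursive prediction idea, assuming synthetic taper data: an MLP learns to map three consecutive diameters to the next one and is applied recursively from the three base measurements until the 4 cm top diameter is reached. All data and hyperparameters are illustrative, not the authors' trained model.

```python
# Hypothetical sketch of recursive diameter prediction with an MLP on synthetic taper data.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
# Synthetic taper profiles: diameters shrinking from the base upward (invented data).
profiles = [np.maximum(d0 * np.exp(-0.08 * np.arange(30)) + rng.normal(0, 0.2, 30), 0)
            for d0 in rng.uniform(20, 40, 50)]

# Training pairs: three consecutive diameters -> the next diameter up the stem.
X = np.array([p[i:i + 3] for p in profiles for i in range(len(p) - 3)])
y = np.array([p[i + 3] for p in profiles for i in range(len(p) - 3)])

mlp = MLPRegressor(hidden_layer_sizes=(10,), max_iter=3000, random_state=3).fit(X, y)

# Recursive prediction from three base measurements until the 4 cm top diameter.
window = list(profiles[0][:3])
predicted = list(window)
while predicted[-1] > 4.0 and len(predicted) < 60:
    nxt = float(mlp.predict([window])[0])
    predicted.append(nxt)
    window = predicted[-3:]
print("predicted taper (cm):", np.round(predicted, 1))
```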
Applied Soft Computing | 2012
Fabrízzio Alphonsus A. M. N. Soares; Edna Lúcia Flôres; Christian Dias Cabacinha; Gilberto Arantes Carrijo; Antônio Cláudio Paschoarelli Veiga
In this work, the diameters of eucalyptus trees are predicted by means of Multilayer Perceptron and Radial Basis Function artificial neural networks. Taking only three diameter measures at the base of the tree, diameters are predicted recursively until they reach the minimum merchantable diameter, with no previous knowledge of the total tree height. A top diameter of 4 cm outside bark was considered the minimum merchantable diameter. Training was conducted with only 10% of the trees in the planted site. The Smalian method was then applied to the predicted diameters to calculate merchantable tree volumes. The performance of the proposed model was satisfactory when the predicted diameters and volumes were compared to the actual ones.
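The Smalian method itself is a standard formula: the volume of each stem section is the mean of the cross-sectional areas at its two ends multiplied by the section length. The sketch below applies it to a list of predicted diameters; the diameters and section length are invented for illustration.

```python
# Sketch of the Smalian method: section volume = mean end cross-sectional area * length.
import math

def smalian_volume(diameters_cm, section_length_m=1.0):
    """Total volume (m^3) from diameters (cm) measured at equal intervals along the stem."""
    areas = [math.pi * (d / 100.0) ** 2 / 4.0 for d in diameters_cm]   # m^2
    return sum((a1 + a2) / 2.0 * section_length_m
               for a1, a2 in zip(areas[:-1], areas[1:]))

# Example with invented predicted diameters down to the 4 cm merchantable top.
predicted_diameters = [28.0, 24.5, 21.3, 18.2, 15.0, 11.6, 8.1, 4.0]
print("merchantable volume (m^3):", round(smalian_volume(predicted_diameters), 4))
```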
IEEE Latin America Transactions | 2011
Luciano Xavier Medeiros; Edna Lúcia Flôres; Gilberto Arantes Carrijo; Antônio Cláudio Paschoarelli Veiga
Orientation-field estimation and binarization are often used in fingerprint identification and authentication and in texture analysis. The proposed algorithm reuses additions and multiplications in the computation of the orientation field by means of the switching property, and uses the Digital Differential Analyzer (DDA) algorithm to generate the convolution masks for binarizing fingerprint images. Both the processing time and the binarization results of the proposed algorithm were satisfactory when compared to other convolution-mask algorithms found in the literature.
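For reference, the sketch below shows the classic DDA line-rasterization step, which produces the pixel coordinates along a segment at a given orientation; how the paper turns these coordinates into convolution masks, and how it reuses additions and multiplications, is not reproduced here.

```python
# Sketch of the classic DDA algorithm: rasterize a line segment into pixel coordinates.
import math

def dda_line(x0, y0, x1, y1):
    """Integer pixel coordinates along the line from (x0, y0) to (x1, y1)."""
    steps = int(max(abs(x1 - x0), abs(y1 - y0)))
    if steps == 0:
        return [(round(x0), round(y0))]
    dx, dy = (x1 - x0) / steps, (y1 - y0) / steps
    return [(round(x0 + i * dx), round(y0 + i * dy)) for i in range(steps + 1)]

# Pixels crossed by a short segment oriented at 30 degrees, e.g. along a ridge direction.
angle = math.radians(30)
print(dda_line(0, 0, 8 * math.cos(angle), 8 * math.sin(angle)))
```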
Brazilian Symposium on Computer Graphics and Image Processing | 2006
Sandrerley Ramos Pires; Edna Lúcia Flôres; Célia A. Zorzo Barcelos; Marcos Aurélio Batista
The visualization of 3D structures obtained from computerized tomography examinations helps medical professionals analyse images and, consequently, provides more accurate diagnoses. As these images (slices) are spaced apart, it becomes necessary to fill in the empty spaces to show the structure in 3D. Using a virtual slice between the real slices, followed by its restoration, is a new approach to slice interpolation aimed at 3D visualization. The goal of this article is to develop a method that produces a virtual slice with few empty regions and then completes it through an inpainting process based on transport and diffusion of information with a partial differential equation. The experimental results, presented as 2D and 3D images, show the efficiency of the proposed method.
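A simplified sketch of the inpainting idea, using plain isotropic diffusion as a stand-in for the paper's transport-and-diffusion PDE: unknown pixels of a toy virtual slice are updated iteratively from their neighbors while known pixels stay fixed. The image, mask and parameters are assumptions for illustration.

```python
# Simplified sketch: fill unknown pixels of a virtual slice by iterating diffusion
# (isotropic here; the paper combines transport and diffusion) over the masked region.
import numpy as np

def diffuse_inpaint(slice_img, missing_mask, iterations=500, dt=0.2):
    img = slice_img.astype(float).copy()
    for _ in range(iterations):
        lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
               np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)   # discrete Laplacian
        img[missing_mask] += dt * lap[missing_mask]                  # update only the holes
    return img

# Toy virtual slice with a rectangular hole of unknown pixels.
rng = np.random.default_rng(4)
slice_img = rng.random((64, 64))
mask = np.zeros((64, 64), dtype=bool)
mask[20:30, 20:40] = True
slice_img[mask] = 0.0
filled = diffuse_inpaint(slice_img, mask)
print("mean value inside the filled region:", filled[mask].mean().round(3))
```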
Expert Systems With Applications | 2017
Rodrigo G. Martins; Alessandro Santana Martins; Leandro Alves Neves; Luciano Vieira Lima; Edna Lúcia Flôres; Marcelo Zanchetta do Nascimento
Highlights: we present an approach to identify the winning team based on the polynomial classifier; the investigated groups were win-draw, win-defeat and draw-defeat; the features were evaluated by machine learning methods and the POL algorithm; the proposed approach achieved an accuracy above 96%; it proposes the use of the POL algorithm as a method for feature selection. Football is the team sport that attracts the largest mass audience. Because detailed information about championship matches has been recorded for almost a century, these matches form a huge and valuable database for testing the prediction of match results. The problem of modeling football data has become increasingly popular in recent years, and machine learning has been used to predict match results in many studies. Our present work brings a new approach to predicting the match results of championships. This approach investigates match data in order to predict the results: win, draw or defeat. The investigated groups were the pairwise combinations of the possible match results of each championship: win-draw, win-defeat and draw-defeat. In this study we employed the features obtained by scouts during a football match. The proposed system applies a polynomial algorithm to analyse and classify match results. Several machine-learning algorithms were compared with our approach in experiments with information obtained from football championships. The association between the polynomial algorithm and machine learning techniques allowed a significant increase in accuracy. Our polynomial algorithm provided an accuracy above 96%, selecting the relevant features from the training and testing sets.
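A hedged sketch of the pairwise grouping described above, assuming invented scout features and using a degree-2 polynomial expansion with a linear classifier as a stand-in for the POL algorithm; the feature-selection step is not reproduced.

```python
# Hypothetical sketch: one binary classifier per outcome pair (win-draw,
# win-defeat, draw-defeat) on invented scout features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

rng = np.random.default_rng(5)
X = rng.random((300, 6))                       # invented scout features per match
y = rng.integers(0, 3, size=300)               # 0 = win, 1 = draw, 2 = defeat

for a, b, name in [(0, 1, "win-draw"), (0, 2, "win-defeat"), (1, 2, "draw-defeat")]:
    keep = np.isin(y, [a, b])
    clf = make_pipeline(StandardScaler(), PolynomialFeatures(degree=2),
                        LogisticRegression(max_iter=1000))
    clf.fit(X[keep], y[keep])
    print(name, "training accuracy:", round(clf.score(X[keep], y[keep]), 3))
```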
IEEE Latin America Transactions | 2015
Júlio César Ferreira; Edna Lúcia Flôres; Gilberto Arantes Carrijo
Compressive Sensing (CS) allows signals to be acquired in an already-compressed form and later reconstructed from far fewer measurements than the minimum required by the Nyquist theorem. Model-Based CS is a subarea of CS that improves performance at the reconstruction stage. Several works have been developed within this subarea; however, most of them consider only the noise generated by the sparse approximation, disregarding the noise generated by the quantization stage and its influence on the efficiency and robustness of CS. The objective of this study is to investigate the influence of quantization noise on the efficiency of Model-Based CS for images with different levels of sparsity and different distributions of coefficients in the frequency domain. In this work, the image acquisition stage is implemented using a partial Fourier matrix, which produces a vector of measurements. Different uniform scalar quantization steps are then applied to this vector, and the image reconstruction stage is performed using Compressive Sampling Matching Pursuit (CoSaMP) on a quadtree model. PSNR and bit rate (BR) are then used to evaluate the efficiency of CoSaMP under quantization noise. The performance of the proposed Model-Based CS with different quantization steps was slightly better than other studies using the same model in terms of PSNR, with the advantage of obtaining smaller bit-rate values (BR greater than 2 bpp).
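A simplified sketch of the measurement side of this pipeline, assuming a toy image: partial Fourier sampling, uniform scalar quantization of the measurements, and the PSNR of a naive zero-filled reconstruction; CoSaMP and the quadtree model are not implemented here.

```python
# Simplified sketch: partial Fourier acquisition, uniform quantization, and PSNR
# of a zero-filled reconstruction (stand-in for the Model-Based CS recovery stage).
import numpy as np

rng = np.random.default_rng(6)
img = rng.random((64, 64))

# Partial Fourier acquisition: keep a random 25% of frequency-domain samples.
F = np.fft.fft2(img)
mask = rng.random(F.shape) < 0.25
measurements = F[mask]

# Uniform scalar quantization of real and imaginary parts with step q.
q = 1.0
quantized = (np.round(measurements.real / q) + 1j * np.round(measurements.imag / q)) * q

# Naive reconstruction: place quantized measurements back and inverse transform.
F_rec = np.zeros_like(F)
F_rec[mask] = quantized
recon = np.real(np.fft.ifft2(F_rec))

mse = np.mean((img - recon) ** 2)
psnr = 10 * np.log10(1.0 / mse)               # peak value of 1.0 for this toy image
print("PSNR of zero-filled reconstruction (dB):", round(psnr, 2))
```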